Thank you very much for the advice. Could one of you, or any other EPICS experts, help with the following questions?
1. Is it true that the "process" functions of all the records within one IOC share ONE thread? If so, should we avoid having, within one IOC, multiple records with extensive calculations (including memory copying) in their "process" functions?
2. Does the CA server use separate, lower-priority thread(s) for broadcasting to "camonitor" clients?
3. Does the CA server have its own buffer for each field it broadcasts, or do they share the same one? Here the answer seems to be "yes" for scalar fields, but "no" for string or array fields:
The function db_post_events is written so that record support will never be blocked attempting to post an event because a slow client is not able to process events fast enough. Each call to db_post_events causes the current value, alarm status, and time stamp for the field to be copied into a ring buffer. The thread calling db_post_events will not be delayed by any network or memory allocation overhead. A lower priority thread in the server is responsible for transferring the events in the event queue to the channel access clients that may be monitoring the process variable. Currently, when an event is posted for a DBF_STRING field or a field containing array data the value is NOT saved in the ring buffer and the client will receive whatever value happens to be in the field when the lower priority thread transfers the event to the client. This behavior may be improved in the future.
Does this mean that there is NO guarantee that string/array fields will arrive with the correct time stamps?
4. What happens when the CA server replies to a "caget"? Do we also have this "scalars are buffered, but strings/arrays are not" issue? Or is nothing buffered, and the reply takes whatever is available (old data with a new time stamp, or new data with an old time stamp, etc.) at that particular moment?
5. For completeness, a really dumb question: if I don't call "db_post_events" (or haven't had time to yet), the "camonitor" clients will certainly not get anything, but can a "caget" still get the current value?
Sorry for all these dumb questions. More and more we are using EPICS in a DAQ fashion. From the above document, I see a great danger in correctly assigning time stamps to string/array data. Without patching EPICS base, it seems no lock or mutex will help to close this hole. Hopefully it is me who missed something. (Or will we have to embed the time stamp within/after the data?)
Please advise.
Thank you again, best regards,
Dehong
From: Till Straumann <[email protected]>
Sent: Friday, April 29, 2016 11:54 PM
To: Mooney, Tim M.; Zhang, Dehong; [email protected]
Subject: Re: Lock/Mutex to prevent "caget" from cutting in between updating multiple fields
I would be a little bit more conservative and stress that this kind of thinking is very dangerous. Your scenario might work on a real-time system with properly assigned priorities "most of the time".
However, if you have, e.g., an unprivileged IOC running under Linux, then it will use a time-sharing scheduler where every thread essentially has the same priority and can be preempted when its time slice is consumed. Under such a scenario it can happen that record processing is preempted and the IOC side of caget could interfere.
Even under an RTOS your assumption is dangerous: assume that the caget thread indeed has a lower priority than the record-processing one. It is still possible for the effective priority of caget to be raised above record processing, e.g., as a result of caget holding a priority-inheriting mutex on which a high-priority thread is waiting; caget could then preempt record processing.
Properly written code should IMHO use either lock-free algorithms or synchronization primitives, and never rely on assumptions about scheduling for synchronization.
The good news is that EPICS is properly written code in that sense, and caget respects the database locking. So it is ensured that you never get a field value with an incorrect time stamp, since both are read 'atomically' by CA.
OTOH, you cannot read multiple fields atomically with caget (AFAIK). I.e., if your record has fields 'A' and 'B' then you cannot be sure caget(A); caget(B) yields a consistent set. One could be new, one old.
There are work-arounds:
- you can use an 'array' field. All elements of an array are accessed by CA atomically. E.g., when you caget a waveform record you always get a consistent waveform (with a consistent time stamp), never a half-old, half-new one.
- you could introduce locking semantics yourself, e.g., a 'LOCK' field. Record processing may not modify the essential fields while it is set. The user then has to caput(LOCK, 1); caget(FIELD1); caget(FIELD2); ...; caput(LOCK, 0); but you have to rely on the user observing the semantics.
-- Till
On 04/29/2016 09:56 PM, Mooney, Tim M. wrote:
Hi Dehong,
I don't think the IOC side of a caget is ever going to get any CPU cycles while your record is processing, unless you voluntarily give up the processor - for example, by calling asynchronous device support, or by waiting for something (which record support isn't supposed to do anyway). If all the fields that must be atomic are in the same record instance, I think you're OK.
Hi,
In my custom record, I need to update multiple fields atomically so they stay synchronized -- all updated, or none; not a few updated while others still hold old data. If I use my own mutex, I suspect that "caget" can cut in during the update and get, for example, new data with an old time stamp, or old data with a new time stamp, depending on the order of updating.
And I guess "camonitor" would not have this problem, because it would be the server pushing the whole set (the field itself and its time stamp, for instance). Is this right?
So to close this hole with "caget", which global lock should I use? dbScanLock?
Thank you, best regards,
Dehong