Andrew,

I did some more studying of the code and it seems to me that
everything is fine the way it is. Your analysis (or recollection)
that timestamps and data (when posted to an event queue) might
be out of sync seems incorrect to me (although I might have missed
something else, of course):
In db_post_single_event_private() we find (note that

    event->valque == (no_elements == 1 && field_size <= sizeof(union native_value))

i.e., valque is FALSE for arrays and strings):
    /*
     * if we have an event on the queue and we are
     * not saving the current value (because this is a
     * string or an array) then ignore duplicate
     * events (saving them without the current value
     * serves no purpose)
     */
    if (!event->valque && event->npend > 0u) {
        UNLOCKEVQUE(ev_que)
        return;
    }
This means that arrays are not queued multiple times.
If an array is in the queue already then no new entry is posted.
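For reference, valque is set when the subscription is created, roughly
like this (paraphrased from dbEvent.c, 3.14; not the verbatim source):

    /* valque is TRUE only if the whole value fits into the small
     * union embedded in a db_field_log, i.e. for scalar fields no
     * larger than a double */
    event->valque = (paddr->no_elements == 1) &&
        (dbValueSize(paddr->field_type) <= sizeof(union native_value));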
Further down in db_post_single_event_private() we have

    if (pLog && event->valque) {
        struct dbCommon *precord = event->paddr->precord;
        pLog->stat = precord->stat;
        pLog->sevr = precord->sevr;
        pLog->time = precord->time;
        /*
         * use memcpy to avoid a bus error on
         * union copy of char in the db at an odd
         * address
         */
        memcpy( (char *)&pLog->field,
            (char *)event->paddr->pfield,
            event->paddr->field_size);
        event->pLastLog = pLog;
    }
This means that timestamp AND value are ONLY stored for non-array types!
For arrays (valque == FALSE) the condition is false and nothing is
stored in event->pLastLog.
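For reference, the db_field_log queued per event looks roughly like
this in 3.14 (layout quoted from memory; see db_field_log.h):

    typedef struct db_field_log {
        unsigned short     stat;  /* alarm status   */
        unsigned short     sevr;  /* alarm severity */
        epicsTimeStamp     time;  /* time stamp     */
        union native_value field; /* value; only large enough for a
                                   * scalar, hence the valque test */
    } db_field_log;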
Finally, in event_read() (called by event_task) the 'user_sub' callback
is executed (this is the hook the CA server uses). We read

    if ( event->valque ) {
        pfl = &fl;
    }
    else {
        pfl = NULL;
    }
and a little bit further down (still in event_read()):

    UNLOCKEVQUE ( ev_que )
    ( *user_sub ) ( event->user_arg, event->paddr,
        ev_que->evque[ev_que->getix] != EVENTQEMPTY, pfl );
    LOCKEVQUE ( ev_que )
and thus the last argument (pfl) to the user_sub is NULL for array types.
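For context, user_sub has the EVENTFUNC signature (from dbEvent.h,
3.14; quoted from memory):

    typedef void EVENTFUNC (void *user_arg, struct dbAddr *paddr,
        int eventsRemaining, struct db_field_log *pfl);

so a NULL pfl tells the subscriber that no status/timestamp/value
snapshot was queued and it has to read the record itself.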
This pfl is eventually used by the server (camessage.c: read_reply())
and passed on to db_get_field_and_count(). If it is NULL there is
nothing to read a timestamp/status from, and

    db_get_field_and_count() -> dbGetField() -> getOptions()

will use the timestamp/status from the record ***at the same time the
data is copied***.
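In other words, the timestamp option handling behaves roughly like the
sketch below (paraphrased, not the verbatim dbAccess.c source;
getTimeStamp is a made-up name for illustration):

    /* Sketch: filling the DBR_TIME option while the record is
     * dbScanLock'd, i.e. at the same time the value is copied */
    static void getTimeStamp(DBADDR *paddr, db_field_log *pfl,
        epicsTimeStamp *pdest)
    {
        if (pfl) {
            /* scalar monitor: use the snapshot that
             * db_post_events() queued */
            *pdest = pfl->time;
        }
        else {
            /* caget, or array/string monitor (pfl == NULL): take
             * the stamp from the record itself -- consistent with
             * the value because the record is locked right now */
            *pdest = paddr->precord->time;
        }
    }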
In summary (EPICS 3.14.12.0 is what I looked at):

caget     => always returns consistent status, timestamp + data (for
scalars and arrays)
camonitor => always receives consistent status, timestamp + data (for
scalars and arrays)
- Till
PS: I actually verified this with the following test (see the sketch
after this list; I'll be happy to share the actual code if someone is
interested):
- created devSup for a waveform (TSE == -2 in the database, NELM == 256,
  FTVL == LONG, SCAN == "1 second"); read_waveform() stores the
  timestamp, fills the waveform with new values, and prints value and
  timestamp to the console (consistency guaranteed by dbScanLock)
- hacked event_task so that it would unblock SIGUSR1
- created a signal handler which suspends the calling thread for 5 seconds
- created a routine, callable from iocsh, which sends SIGUSR1 to 'CAS-event'
- the waveform is scanned at "1 second"
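A minimal sketch of what such a harness could look like (hypothetical
code, not my actual test; assumes a Linux/pthreads target, and the
iocsh registration boilerplate is omitted):

    #include <signal.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <pthread.h>

    static pthread_t event_task_id; /* captured by the hacked event_task */

    /* SIGUSR1 handler: suspend the calling thread (the CAS event
     * task) for 5 seconds so the record can process several times
     * while the event queue is stalled. (fprintf is not
     * async-signal-safe, but this is a throwaway test hack.) */
    static void usr1_handler(int sig)
    {
        fprintf(stderr, "Signal in %lx\n", (unsigned long)pthread_self());
        sleep(5);
    }

    /* Called from the hacked event_task: remember our thread id,
     * install the handler and unblock SIGUSR1 for this thread only */
    void event_task_unblock_usr1(void)
    {
        sigset_t set;
        event_task_id = pthread_self();
        signal(SIGUSR1, usr1_handler);
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        pthread_sigmask(SIG_UNBLOCK, &set, NULL);
    }

    /* Routine to be registered with iocsh: poke the event task */
    void sigEventTask(void)
    {
        fprintf(stderr, "Sending USR1 to %lx\n",
            (unsigned long)event_task_id);
        pthread_kill(event_task_id, SIGUSR1);
    }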
I then ran camonitor on the waveform and, while the monitor was
listening, sent SIGUSR1 to the event task. It executed the signal
handler and slept. By the time it woke up, the record had already
processed a few times. Yet camonitor does get the last data AND
timestamp:
IOC console:
WF value 9 at time 18:56:00.890514943
WF value 10 at time 18:56:01.890625818
WF value 11 at time 18:56:02.890810892
WF value 12 at time 18:56:03.890996364
WF value 13 at time 18:56:04.891085319
WF value 14 at time 18:56:05.891173553
WF value 15 at time 18:56:06.891364384
WF value 16 at time 18:56:07.891471736
Sending USR1 to 7ff74b293700
Signal in 7ff74b293700
WF value 17 at time 18:56:08.891656206
WF value 18 at time 18:56:09.891753557
WF value 19 at time 18:56:10.891834469
WF value 20 at time 18:56:11.892025176
WF value 21 at time 18:56:12.892207204
WF value 22 at time 18:56:13.892394550
WF value 23 at time 18:56:14.892491579
As you can see, the record processes every second.
The camonitor output is (each update repeats a single value for all
256 elements; condensed here for readability):

WF 2016-05-03 18:56:02.890811 256 11 11 11 ... (all 256 elements == 11)
WF 2016-05-03 18:56:03.890996 256 12 12 12 ... (all 256 elements == 12)
WF 2016-05-03 18:56:04.891085 256 13 13 13 ... (all 256 elements == 13)
WF 2016-05-03 18:56:05.891174 256 14 14 14 ... (all 256 elements == 14)
WF 2016-05-03 18:56:06.891364 256 15 15 15 ... (all 256 elements == 15)
WF 2016-05-03 18:56:07.891472 256 16 16 16 ... (all 256 elements == 16)
**** event_task *** is sleeping here
WF 2016-05-03 18:56:12.892207 256 21 21 21 ... (all 256 elements == 21)
WF 2016-05-03 18:56:13.892395 256 22 22 22 ... (all 256 elements == 22)
As we see, when the event task wakes up it gets the LAST data (value 21)
with timestamp 18:56:12.892207, which matches record processing EXACTLY!
EPICS was already doing the right thing :-)
On 05/03/2016 12:15 PM, Andrew Johnson wrote:
Hi Dehong,
On 05/03/2016 12:31 PM, Zhang, Dehong wrote:
From the name, dbScanLock() seems to be for updating (writing) requests
from outside the record. Of course the record support itself can also
call it, when the record wants to update itself, to block updating
requests from outside. Would it be easy to let the CA server also call
it when it wants to reply to a "caget" or broadcast to the "camonitor"
list?
You are mistaken there: dbScanLock() is always called before *anything*
accesses a record field, for either reading or writing data. For CA
requests the path from the RSRV server is always via
db_get_field() => dbGetField() => dbScanLock()
no matter whether the request is directly from a CA Get request or from
the thread at the low-priority end of an event queue that is sending out
a monitor update. A similar path applies to CA Put requests, and for
record scanning dbScanLock() is called before entering the record's
process() routine.
The reason why for arrays and string values the alarm+timestamp data may
come from a different update than the value data is that the
alarm+timestamp are saved into the event queues by the call to
db_post_events() inside the record's process() routine, but the objects
we put onto those queues are too small to hold more than 8 bytes of
value data. Every CA client gets an event queue with storage for at
least 128 of these objects, so we don't want to make them too large.
For the larger data types the queue instead stores a pointer to the
original data, and when the objects are read out of the queues by the
low-priority threads that forward them to each client they call
db_get_field() a second time (the first time was on entry to the queue)
which again locks the record before extracting just the value data from
the stored pointer. The alarm+timestamp from the event queue are
provided in the db_field_log object pflin that is also passed to
db_get_field(). However while the update was waiting on the event queue
the record was unlocked, so it is possible for another thread to lock
the record and cause it to process first, thus changing the data in the
array/string being monitored.
Note also that if you have 2 different CA clients monitoring the same
PV, it is possible for a single db_post_events() call to result in the 2
clients seeing different data for the same alarm+timestamp, since each
client has its own queue and its own low-priority thread that reads
out the event queue and forwards the data to the appropriate TCP socket.
The low-priority threads are not synchronized; they each have to call
db_get_field() independently and at different times (in order to get
access to the record data they must take the lock, which is a mutex so
only one of them can "own" the record at once even if they're only
reading from it).
It is actually relatively easy to increase the size of the db_field_log
object so it can store a 40 character DBF_STRING value, but making it
larger than that would have no effect other than wasting memory. IOC
memory sizes have grown since the event queue code was written, and we
should now be able to afford the extra RAM needed by the IOC (32 bytes
extra * 128 items = 4KB minimum per CA client).
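For illustration, the enlarged value store might look like this
(hypothetical sketch, not a proposed patch; MAX_STRING_SIZE is 40):

    /* Hypothetical: a db_field_log value union enlarged so that a
     * DBF_STRING also fits, costing ~32 extra bytes per queue item */
    union native_value_big {
        epicsInt8    dbf_char;
        epicsInt16   dbf_short;
        epicsEnum16  dbf_enum;
        epicsInt32   dbf_long;
        epicsFloat32 dbf_float;
        epicsFloat64 dbf_double;
        char         dbf_string[MAX_STRING_SIZE]; /* 40 bytes */
    };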
HTH,
- Andrew