To answer your original question...
Yes, this problem has been known and documented for ~2.5 years, but
has not been fixed yet [1].
Even though people have been running into this issue every now and
then (there was even support added to devIocStats [2] to measure
the error), no one's pain was bad enough to invest the time to
track it down and fix it.
For >99% of applications, it simply does not matter if the
scan period has an error on the order of a few per mille.
~Ralph
[1] https://bugs.launchpad.net/epics-base/+bug/597054
[2] http://www.slac.stanford.edu/grp/cd/soft/epics/site/devIocStats/
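[Editor's note: the accumulation Ralph and Inari describe can be illustrated with a small, purely arithmetic simulation. This is a hypothetical sketch, not EPICS code; the 700 µs per-cycle overhead and 1 s SCAN period are the figures reported in the thread, and all function names are made up for illustration.]

```python
# Hypothetical model (not EPICS code): a periodic task that does
# `overhead` seconds of processing each cycle before sleeping.

SCAN = 1.0         # scan period in seconds (1 s, as in the thread)
OVERHEAD = 0.0007  # ~700 us per-cycle processing, as reported by Inari

def relative_sleep_times(cycles, scan=SCAN, overhead=OVERHEAD):
    """Start times when each cycle does its work and *then* sleeps a
    full `scan` seconds: the overhead accumulates every cycle."""
    t, times = 0.0, []
    for _ in range(cycles):
        times.append(t)
        t += overhead + scan   # work, then a full relative sleep
    return times

def absolute_deadline_times(cycles, scan=SCAN):
    """Start times when each cycle wakes at start + n*scan: the
    overhead is absorbed and no error accumulates (while overhead < scan)."""
    start = 0.0
    return [start + n * scan for n in range(cycles)]

if __name__ == "__main__":
    n = 1440  # ~24 minutes of 1 s cycles
    drift = relative_sleep_times(n)[-1] - absolute_deadline_times(n)[-1]
    print(f"accumulated error after {n} cycles: {drift:.3f} s")
```

With these numbers the relative-sleep loop is about 1 s behind after ~24 minutes, matching the figure Inari reports below.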
On 08.01.2013 16:16, inari badillo wrote:
Thank you for your replies,
The magnitude of the error is about 700 microseconds per cycle
(on average), regardless of the SCAN value. So it takes about 24 min
for the error to reach 1 s.
Answering Michael Davidsaver's question: I mean precessing
relative to the start time. The difference does not increase
every cycle. Sorry if that wasn't clear.
And finally, yes, it seems plausible that it is caused by
processing time. But in that case, wouldn't it lead to
malfunctions?
Thanks again,
Inari Badillo
2013/1/4 Maren Purves <[email protected]>
If there's a genuine drift I'd expect that to be something
like
the SCAN value + the processing time.
Maren
Michael Davidsaver wrote:
I mean, the difference between two
consecutive samples is always higher than the SCAN
value, accumulating an
error.
Just to make certain I understand you. Do you mean
that the difference
increases with each iteration? Or that the scan start
time is moving
(precessing) relative to start of the system clock's
second?
In case you have not seen it, the code in question is
periodicTask().
http://bazaar.launchpad.net/~epics-core/epics-base/3.15/view/head:/src/ioc/db/dbScan.c#L561
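[Editor's note: the standard way to keep a periodic loop from accumulating per-cycle overhead is to compute each wakeup from an absolute start time instead of sleeping a fixed interval after the work finishes. A minimal sketch in Python follows; it is an illustration of the general technique, not the actual periodicTask() logic, and `run_periodic` is an invented name.]

```python
import time

def run_periodic(task, scan, cycles):
    """Drift-free periodic loop: each wakeup targets start + n*scan,
    so per-cycle processing time is not added onto the period."""
    start = time.monotonic()
    for n in range(1, cycles + 1):
        task()
        # Sleep only the time *remaining* until the next absolute
        # deadline; if the task overran, skip sleeping entirely.
        remaining = (start + n * scan) - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
```

A loop that instead did `task(); time.sleep(scan)` would lengthen every period by the task's processing time, which is exactly the accumulating error described in this thread.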
On 1/3/2013 4:36 PM, Mark Rivers wrote:
What is the magnitude of the error? How long does
it take before the
error is 1 second, for example?
-----Original Message-----
From: [email protected]
[mailto:[email protected]]
On Behalf Of
[email protected]
Sent: Thursday, January 03, 2013 3:09 PM
To: [email protected]
Subject: time drift in camonitor timestamps
Hi,
We have realized that when monitoring periodic PVs
provided by soft IOCs on
standard machines under Windows/Linux/MacOS, we observe
a drift in the
timestamps (server time). I mean, the difference
between two
consecutive samples is always higher than the SCAN
value, accumulating an
error.
It seems that this issue does not occur in RT
systems such as VxWorks
(as expected).
We are wondering if this is a known issue. In our
opinion, it could lead to
malfunctions, especially if it is not taken
into account.
Has anyone on the list experienced this problem?
thank you