On 1/6/23 09:24, Andrew Johnson via Tech-talk wrote:
$ ./modules/libcom/test/O.darwin-aarch64/epicsThreadPerform
...
The epicsThreadSleepQuantum() call returns 0.010000 sec.
This doesnt match the quantum estimate of 0.009273 sec within 1%.
It takes 0.011298 micro sec to call epicsThreadGetIdSelf ()
epicsThreadPrivateGet() takes 0.005903 microseconds
In EPICS Base 3.15 the equivalent path is ./src/libCom/test/O.linux-x86_64/epicsThreadPerform, but the output is identical. I don't particularly like the "doesn't match the quantum estimate ... within n%" wording in the output, but those of you with a statistical background might recognize it as an attempt to be mathematically correct.
Thus on a Mac the value returned by epicsThreadSleepQuantum() is reasonably accurate, while on a fairly recent and fast Linux box the "minimum slumber interval" may be up to about 2 orders of magnitude smaller.
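To see this effect for yourself without building the EPICS test suite, a similar estimate can be made by timing how long a near-zero sleep request actually takes. This is a minimal sketch of the general technique (the function name `estimate_sleep_quantum` is my own; it is not implied to be how epicsThreadPerform computes its estimate internally):

```python
import time

def estimate_sleep_quantum(samples=50):
    """Average the actual duration of a near-zero sleep request.

    The OS cannot sleep for less than its scheduling/timer quantum,
    so requesting a 1-microsecond sleep and measuring the real elapsed
    time gives a rough lower bound on the minimum sleep interval.
    """
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(1e-6)  # request far less than any plausible quantum
        total += time.perf_counter() - start
    return total / samples

if __name__ == "__main__":
    print(f"Estimated sleep quantum: {estimate_sleep_quantum():.6f} sec")
```

On a tickless (CONFIG_NO_HZ) Linux kernel with high-resolution timers this typically prints a value far below the classic 10 ms tick, which is consistent with the discrepancy the test reports.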
FYI, on my laptop (Intel Core i5) with a Debian stable kernel (CONFIG_NO_HZ_IDLE):
$ ./modules/libcom/test/O.linux-x86_64/epicsThreadPerform ...
The epicsThreadSleepQuantum() call returns 0.010000 sec.
This doesnt match the quantum estimate of 0.000165 sec within 10%.
epicsThreadSleepQuantum() is indeed off by two orders of magnitude.