Hello 김대영
epicsEventWaitWithTimeout() is executed on your computer like any other code. The number of machine instructions it takes to complete is never fully deterministic on a multi-threaded, multi-core computer. Even if we assume (for the sake of the argument) that you have hardware timers to generate timeouts, there are still many CPU instructions involved to arm the timer, handle the timer interrupt which signals the end of the timeout, and read out the clock (epicsTimeGetCurrent()) you use to measure the actually elapsed time. The time it takes a CPU to perform these operations is subject to pseudo-random delays:

- The scheduler may decide that there is higher-priority work and put your task to sleep for a while. If that happens, e.g., between the first epicsTimeGetCurrent() and epicsEventWaitWithTimeout(), then the effective time that has elapsed is longer than what was requested. You'll notice that the reported result is never exactly what you wanted, but it is *always* more than what you asked for. These delays can also happen in the routines themselves or even in kernel services they rely on.
- The thread that executes the code may have to wait for a resource that is currently in use by another thread and therefore be delayed.
- Interrupts and interrupt handlers introduce delays very similar to what was already mentioned.
- There are also hardware-induced delays such as cache misses, limited memory bandwidth and other causes (e.g., on Intel: system-management mode).

There are operating systems explicitly designed ('hard real-time OS') to minimize the delays introduced by software and thus guarantee a maximal *latency* or delay that the user may expect (note that the delays are still random, but bounded). Obviously, a computer's resources (CPUs etc.) are always limited, and the number of events that can be handled within a defined response time is limited accordingly. Therefore, properly designing a real-time application requires careful planning and insight. The difference from a general-purpose OS is often only visible under heavy load, where the real-time OS maintains low latency whereas a general-purpose OS will show occasional but much higher delays. The classic trade-off is throughput vs. latency, i.e., a real-time OS has quicker and deterministic response times but less throughput than a general-purpose OS.

This would be a long answer to the second part of Q1. I don't understand the first part ("...indicating seconds. But the result is not.").

Q2: use a real-time OS such as Linux with RT_PREEMPT. However, to properly use its features I'd suggest some studying of the topic, to a point where you can answer your questions yourself. In my experience with this kind of OS you can expect max. latencies of several tens of microseconds. Note that this is pretty much what you observe already; however, I'm not sure if your system could maintain this performance under load.

Q3: if you really need deterministic timing of < 10 us accuracy then you have to look for hardware solutions (FPGA).

HTH
- Till

On 5/2/23 07:30, 김대영 via Tech-talk wrote:
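A minimal sketch of the measurement described in the answer above (not code from the original thread). It assumes only standard EPICS Base APIs from epicsEvent.h and epicsTime.h; the 10 ms timeout value is illustrative:

    /* Measure how long epicsEventWaitWithTimeout() actually blocks,
     * compared with the requested timeout. */
    #include <stdio.h>
    #include <epicsEvent.h>
    #include <epicsTime.h>

    int main(void)
    {
        epicsEventId evt = epicsEventCreate(epicsEventEmpty);
        epicsTimeStamp before, after;
        double requested = 0.010;   /* ask for a 10 ms timeout (illustrative) */
        double elapsed;

        epicsTimeGetCurrent(&before);
        epicsEventWaitWithTimeout(evt, requested);  /* nothing signals it, so it times out */
        epicsTimeGetCurrent(&after);

        /* Because of the scheduling, interrupt and hardware delays described
         * above, the measured time is essentially always >= the request. */
        elapsed = epicsTimeDiffInSeconds(&after, &before);
        printf("requested %.6f s, measured %.6f s, extra %.1f us\n",
               requested, elapsed, (elapsed - requested) * 1e6);

        epicsEventDestroy(evt);
        return 0;
    }

Repeating the loop many times and looking at the worst case, both idle and under load, is what distinguishes a general-purpose kernel from an RT_PREEMPT one: the average is similar, but the tail latencies differ.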