Subject: RE: Latency between NI LabVIEW RT (cRIO) and EPICS on RT Linux
From: Mark Rivers <[email protected]>
To: "'Mooney, Tim M.'" <[email protected]>, "Johannes Spinneken (EGI)" <[email protected]>, "[email protected]" <[email protected]>
Date: Tue, 6 Feb 2018 20:40:19 +0000
Hi Johannes,

Here is one quick measurement.

- I have an areaDetector simDetector running on a Linux machine. It has a small image size (50x50) so it can run very fast. It is doing about 5,500 frames/s.
- There is an image counter PV that updates each time there is a new image, i.e. at 5,500 Hz.
- I ran "camonitor" on a different Linux machine to see how fast it received the monitor updates on that PV. I used -tcs so we see both the CA client timestamps and the CA server timestamps. I sent the camonitor output to a file so the terminal I/O would not slow it down.
- Both of these machines have 10 Gbit Ethernet connections. This is the result, covering a time period of about 10 ms.

Viper:~>camonitor -tcs 13SIM1:cam1:ArrayCounter_RBV > test.out
^C
Viper:~>more test.out
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.602116(2018-02-06 14:34:30.602287) 4182302
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.602320(2018-02-06 14:34:30.602489) 4182303
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.602505(2018-02-06 14:34:30.602611) 4182304
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.602696(2018-02-06 14:34:30.602888) 4182305
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.602899(2018-02-06 14:34:30.603005) 4182306
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.603127(2018-02-06 14:34:30.603238) 4182307
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.603344(2018-02-06 14:34:30.603470) 4182308
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.603536(2018-02-06 14:34:30.603642) 4182309
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.603727(2018-02-06 14:34:30.603838) 4182310
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.603899(2018-02-06 14:34:30.604019) 4182311
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.604109(2018-02-06 14:34:30.604228) 4182312
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.604318(2018-02-06 14:34:30.604440) 4182313
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.604506(2018-02-06 14:34:30.604606) 4182314
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.604692(2018-02-06 14:34:30.604842) 4182315
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.604884(2018-02-06 14:34:30.605005) 4182316
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.605076(2018-02-06 14:34:30.605206) 4182317
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.605260(2018-02-06 14:34:30.605363) 4182318
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.605427(2018-02-06 14:34:30.605539) 4182319
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.605613(2018-02-06 14:34:30.605725) 4182320
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.605795(2018-02-06 14:34:30.605919) 4182321
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.605966(2018-02-06 14:34:30.606132) 4182322
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.606149(2018-02-06 14:34:30.606265) 4182323
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.606319(2018-02-06 14:34:30.606431) 4182324
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.606479(2018-02-06 14:34:30.606594) 4182325
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.606611(2018-02-06 14:34:30.606727) 4182326
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.606820(2018-02-06 14:34:30.606917) 4182327
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.607015(2018-02-06 14:34:30.607116) 4182328
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.607186(2018-02-06 14:34:30.607287) 4182329
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.607363(2018-02-06 14:34:30.607459) 4182330
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.607542(2018-02-06 14:34:30.607643) 4182331
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.607732(2018-02-06 14:34:30.607841) 4182332
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.607914(2018-02-06 14:34:30.608016) 4182333
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.608100(2018-02-06 14:34:30.608202) 4182334
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.608252(2018-02-06 14:34:30.608355) 4182335
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.608432(2018-02-06 14:34:30.608532) 4182336
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.608620(2018-02-06 14:34:30.608731) 4182337
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.608801(2018-02-06 14:34:30.608909) 4182338
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.608983(2018-02-06 14:34:30.609098) 4182339
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.609164(2018-02-06 14:34:30.609279) 4182340
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.609341(2018-02-06 14:34:30.609449) 4182341
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.609526(2018-02-06 14:34:30.609625) 4182342
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.609701(2018-02-06 14:34:30.609817) 4182343
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.609877(2018-02-06 14:34:30.609990) 4182344
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.610057(2018-02-06 14:34:30.610185) 4182345
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.610268(2018-02-06 14:34:30.610392) 4182346
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.610454(2018-02-06 14:34:30.610568) 4182347
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.610620(2018-02-06 14:34:30.610739) 4182348
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.610800(2018-02-06 14:34:30.610908) 4182349
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.610988(2018-02-06 14:34:30.611097) 4182350
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.611168(2018-02-06 14:34:30.611289) 4182351
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.611355(2018-02-06 14:34:30.611454) 4182352
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.611536(2018-02-06 14:34:30.611644) 4182353
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.611728(2018-02-06 14:34:30.611834) 4182354
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.611915(2018-02-06 14:34:30.612013) 4182355
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.612084(2018-02-06 14:34:30.612204) 4182356
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.612265(2018-02-06 14:34:30.612388) 4182357
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.612447(2018-02-06 14:34:30.612543) 4182358
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.612619(2018-02-06 14:34:30.612721) 4182359
13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.612806(2018-02-06 14:34:30.612915) 4182360

You can see that it does not appear to lose any monitors, and the time between monitors is just under 200 microseconds. So at least in this setup we have not yet hit the EPICS protocol or network speed limits.

Mark

From: [email protected] [mailto:[email protected]]
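The inter-update intervals quoted above can be computed from a camonitor -tcs log with a short stdlib-Python sketch. The parsing assumes exactly the record layout shown above (PV name, one timestamp, a second timestamp in parentheses, value); the function names are illustrative, not part of any EPICS tool:

```python
import re
from datetime import datetime

# One camonitor -tcs record as shown above:
# 13SIM1:cam1:ArrayCounter_RBV 2018-02-06 14:34:30.602116(2018-02-06 14:34:30.602287) 4182302
# The line carries two timestamps; which one is the CA client stamp and which
# the server stamp follows camonitor's -tcs formatting, so we simply keep both.
LINE_RE = re.compile(
    r"(?P<pv>\S+)\s+"
    r"(?P<t1>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)"
    r"\((?P<t2>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\)\s+"
    r"(?P<value>-?\d+)"
)
TS_FMT = "%Y-%m-%d %H:%M:%S.%f"

def parse(lines):
    """Yield (t1, t2, value) for every line matching the record layout."""
    for line in lines:
        m = LINE_RE.match(line.strip())
        if m:
            yield (datetime.strptime(m["t1"], TS_FMT),
                   datetime.strptime(m["t2"], TS_FMT),
                   int(m["value"]))

def intervals_us(records):
    """Microseconds between successive updates, using the first timestamp."""
    times = [t1 for t1, _t2, _value in records]
    return [(b - a).total_seconds() * 1e6 for a, b in zip(times, times[1:])]
```

Run over the full test.out, intervals_us() reproduces the roughly 200-microsecond spacing described above, and any gap in the value column would reveal a lost monitor.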
On Behalf Of Mooney, Tim M.

Hi Johannes,

With an interrupt service routine, vxWorks can read from or write to an IndustryPack FPGA at around 10 kHz, but I would not try to process an EPICS record at that rate - at least not a record that isn't prepared to throttle the rate at which it posts monitors. I don't usually process general-purpose EPICS records faster than around 100 Hz. I think it's pretty common practice to keep stuff faster than around 100 Hz restricted to driver code, which doesn't really have to know about EPICS.

Tim Mooney ([email protected]) (630) 252-5417

From:
[email protected] <[email protected]> on behalf of Johannes Spinneken (EGI) <[email protected]>

Hi,

I am fairly new to EPICS and have been testing an application with the following setup:
The setup works in principle, but I am seeing higher latencies than I was expecting. I was hoping to achieve latencies in the low-ms range (say a 1-2 ms data round trip from cRIO to RT Linux and back to cRIO). In practice, I observe round-trip timings on the order of 10 ms. Two questions:

(i) What is the best timing (latency) that can generally be achieved using EPICS? Is there something fundamental in the protocol that prohibits data exchange in the low-ms range? I understand that this will of course depend on target hardware/software, but a ballpark figure would be good as "expectation management". A typical ping between cRIO and Linux RT takes approx. 0.2 ms, so the hardware seems clearly capable of <1 ms timings.
(ii) Does anyone have experience with running EPICS on LabVIEW RT / cRIO? I suspect some (most) of the latency is due to the way in which NI handles TCP traffic. I already discovered that data is normally buffered for 10 ms (or 8 kB). This can be avoided by using the Shared Variable Data Flush VI:

http://zone.ni.com/reference/en-XX/help/371361H-01/lvcomm/flushsharedvar/

which indeed reduced round-trip timings by around 8-10 ms. However, I seem to be unable to push timings further below the 10 ms round-trip mark. Any guidance would be much appreciated. Many thanks!

Best wishes
Johannes

Johannes Spinneken PhD FRSA
Principal Systems Engineer
Evergreen Innovations UK & USA
+1 916 266 3709 (US cell)
+44 7868 923583 (UK mobile)
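One way to tell how much of the ~10 ms comes from buffering above TCP versus the TCP stack itself is to time a bare TCP echo round trip on the same link and compare it with ping. Below is a stdlib-only Python sketch (not LabVIEW code; the helper names are illustrative). TCP_NODELAY disables Nagle's algorithm, the TCP-level analogue of the small-write coalescing described above:

```python
import socket
import statistics
import threading
import time

def echo_server(srv):
    """Accept one connection and echo everything back until it closes."""
    conn, _addr = srv.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

def median_rtt_ms(nodelay, n=200):
    """Median round-trip time in ms for n small messages over loopback TCP."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))      # any free port; swap in a remote host to test a real link
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    cli = socket.socket()
    # TCP_NODELAY=1 turns off Nagle's algorithm, which can hold back small
    # writes while earlier data is still unacknowledged.
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1 if nodelay else 0)
    cli.connect(srv.getsockname())

    rtts = []
    for _ in range(n):
        t0 = time.perf_counter()
        cli.sendall(b"x" * 16)       # small message, like a single CA update
        cli.recv(64)                 # block until the echo arrives
        rtts.append((time.perf_counter() - t0) * 1e3)
    cli.close()
    srv.close()
    return statistics.median(rtts)
```

Running the client side against a socket on the cRIO instead of loopback would show whether the remaining latency lives in the network path or in layers above it. Note that in a strict ping-pong pattern Nagle rarely adds delay; the flag matters most when several small writes are issued back to back.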