Subject: RE: [EXTERNAL] CA gateway chaining
From: Mark L Rivers via Core-talk <core-talk at aps.anl.gov>
To: "Pearson, Matthew R." <pearsonmr at ornl.gov>, Timo Korhonen <Timo.Korhonen at ess.eu>, Ralph Lange <ralph.lange at gmx.de>
Cc: "core-talk at aps.anl.gov" <core-talk at aps.anl.gov>
Date: Mon, 1 Nov 2021 22:43:49 +0000

Hi Matt,

We are using PVAccess with JPEG compression and the ImageJ client on a number of beamlines with no issues. I just tested with a 2.3 Mpixel camera in Mono8 mode: NDPluginCodec can do JPEG compression at 20-30 frames/s in a single thread, and ImageJ can display at about 25 frames/s.

Mark
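
As a rough sketch, the JPEG compression described here is selected at runtime through the NDPluginCodec records; the PV prefix and plugin instance name below (13SIM1:Codec1:) are placeholders only, not taken from the thread:

    # Assumed prefix 13SIM1: and codec plugin instance Codec1: -- substitute your own.
    caput 13SIM1:Codec1:EnableCallbacks 1     # enable the codec plugin
    caput 13SIM1:Codec1:Mode Compress         # the plugin can also decompress
    caput 13SIM1:Codec1:Compressor JPEG       # select the JPEG compressor
    caput 13SIM1:Codec1:JPEGQuality 90        # lower quality -> smaller arrays

The compressed NTNDArray is then decompressed on the client side, for example by the ImageJ viewer plugin Mark mentions.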

From: Pearson, Matthew R. <pearsonmr at ornl.gov>

I should also mention that we use the areaDetector update-rate throttling feature as well, in addition to the binning and re-scaling. Usually the standard arrays plugin does the throttling so that it doesn't send out arrays faster than 1-10 Hz, mostly 1-2 Hz. The 10 Hz (or faster) rate is sometimes useful for camera-like applications, but then Channel Access is not the best solution anyway.

I would be interested in seeing how PVAccess (and various clients) performs for compressed camera images that we want to display at 20-30 Hz. In the past I've used MJPG via the ffmpeg areaDetector plugin, but I found it was unreliable and occasionally stopped working, so I switched back to Channel Access, which was slower but stable.

Cheers,
Matt
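
The throttling Matt refers to is typically done with the per-plugin minimum callback time in areaDetector: set on the standard arrays plugin, it caps how often new waveforms are pushed to Channel Access clients while the camera and upstream plugins keep running at full rate. A minimal sketch, with a made-up prefix and the usual image1: instance name:

    # Assumed PVs -- every NDPlugin* instance has a MinCallbackTime record (seconds).
    caput 13SIM1:image1:EnableCallbacks 1
    caput 13SIM1:image1:MinCallbackTime 0.5   # at most ~2 arrays/s; use 1.0 for ~1 Hz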

From: Timo Korhonen <Timo.Korhonen at ess.eu>

Hi Matt,

Thank you for these numbers. I do not even dare to publicly state what kind of data volumes some of our (accelerator) users are asking for… I think we need to seriously consider binning.

Cheers,
Timo
"Pearson, Matthew R." <pearsonmr at ornl.gov> Hi Timo, Yes, what Mark said. For example, a typical machine vision camera or astronomy CCD might be 2048*2048*2 bytes = 8.4MB. So I might choose to use 2x2 or 4x4 binning on this, via the areaDetector ROI plugin, to get down
to either 2MB or 0.5MB (depending on how large the image widget will be on the OPI). And, in addition, I could use the processing plugin to scale to 8-bit greyscale data, so now we get down to 1MB or 0.25MB.
In practice, a monitor will only display up to 8-bit greyscale anyway, and humans can’t see a full 16-bit range of greyscale either. Similarly, our neutron time-of-flight 1-D spectrums are multiples of 160,000 * 4 bytes = 0.64MB. So these are binned by a factor 10 to get to 64KB.
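
For the 2048x2048 example above, the binning (and, if desired, the conversion to 8-bit) can be done in a single NDPluginROI instance; a sketch with an assumed prefix, not Matt's actual configuration:

    # Assumed PVs; ROI1 is an NDPluginROI instance fed by the camera port.
    caput 13SIM1:ROI1:EnableCallbacks 1
    caput 13SIM1:ROI1:BinX 4
    caput 13SIM1:ROI1:BinY 4              # 2048x2048 -> 512x512 (0.5 MB at 16 bits)
    caput 13SIM1:ROI1:EnableScale 1
    caput 13SIM1:ROI1:Scale 4096          # 16 summed 16-bit pixels / 4096 fits 0-255
    caput 13SIM1:ROI1:DataTypeOut UInt8   # output array drops to 0.25 MB

Binning in NDPluginROI sums the pixels, so the divisor (here 16 * 65536 / 256 = 4096) keeps the result inside the 8-bit range; doing the re-scaling in the processing plugin, as Matt describes, gives the same array sizes.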

From: Mark L Rivers <rivers at cars.uchicago.edu>

Hi Timo,

If your arrays are coming from areaDetector then binning can easily be done in the IOC using NDPluginROI. If the arrays are images then you may also be able to use NDPluginCodec to compress them, for example with the JPEG compressor. areaDetector provides Linux and Windows decompressors that can be called from your PVA client; those are used from the ImageJ plugin, for example.

Mark
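
Wired together in the IOC start-up script this is a short plugin chain. The sketch below uses placeholder port and prefix names and abbreviates the configure arguments (queue size, blocking, maxBuffers, maxMemory), so check the iocBoot examples shipped with your ADCore version for the exact argument lists:

    # Camera driver port assumed to be "CAM1", IOC prefix 13SIM1:
    NDROIConfigure("ROI1", 20, 0, "CAM1", 0, 0, 0)
    dbLoadRecords("NDROI.template",   "P=13SIM1:,R=ROI1:,PORT=ROI1,ADDR=0,TIMEOUT=1,NDARRAY_PORT=CAM1")
    NDCodecConfigure("CODEC1", 20, 0, "ROI1", 0, 0, 0)
    dbLoadRecords("NDCodec.template", "P=13SIM1:,R=Codec1:,PORT=CODEC1,ADDR=0,TIMEOUT=1,NDARRAY_PORT=ROI1")
    # Serve the binned, compressed NTNDArray over PVAccess for ImageJ and other PVA clients
    NDPvaConfigure("PVA1", 20, 0, "CODEC1", 0, "13SIM1:Pva1:Image", 0, 0, 0)
    dbLoadRecords("NDPva.template",   "P=13SIM1:,R=Pva1:,PORT=PVA1,ADDR=0,TIMEOUT=1,NDARRAY_PORT=CODEC1")
    startPVAServer   # if a PVA server is not already running in this IOC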

From: Core-talk <core-talk-bounces at aps.anl.gov> On Behalf Of Timo Korhonen via Core-talk

Do you do the binning in the IOC? This is something we have thought of but not implemented yet.

Timo
"Pearson, Matthew R." <pearsonmr at ornl.gov> Hi, We have seen similar issues and one way we dealt with it was to provide a heavily re-binned array for use via gateways. Inside the beamline network we provide both a full un-binned array, hidden
behind ‘detailed’ buttons, as well as the re-binned array screens that users will first encounter on higher level screens. We also limited the array size that the gateway will transmit, so some of the larger arrays don’t work which forces people to view the
re-binned arrays. Also, depending on the original data type and software, you can ‘compress’ the array data from 16 or 32-bit down to 8-bit data by re-scaling it to 0-255, like a JPEG image. Then the waveform data
type can be set to UCHAR. The above only works if the waveforms are simply for visualization. Cheers, Matt From: Core-talk <core-talk-bounces at aps.anl.gov>
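
One way to impose the gateway-side size limit Matt mentions is EPICS_CA_MAX_ARRAY_BYTES in the gateway's environment: arrays larger than that value cannot pass through, while the re-binned ones still fit. A sketch of a start-up wrapper, with made-up paths and addresses:

    #!/bin/sh
    # ~1 MB cap: the 0.25-0.5 MB re-binned images pass, the 8 MB full frames do not.
    export EPICS_CA_MAX_ARRAY_BYTES=1100000
    exec /path/to/extensions/bin/linux-x86_64/gateway \
        -sip 192.0.2.10 -cip "192.0.2.255" \
        -pvlist gateway.pvlist -access gateway.access -log gateway.log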

From: Core-talk <core-talk-bounces at aps.anl.gov> On Behalf Of Timo Korhonen via Core-talk

Right, this is the way we are going for now. It is not exactly trivial to manage, but it should be doable.

Thanks,
Timo

From: Core-talk <core-talk-bounces at aps.anl.gov> on behalf of EPICS Core Talk <core-talk at aps.anl.gov>

On Wed, 27 Oct 2021 at 10:49, Timo Korhonen via Core-talk <core-talk at aps.anl.gov> wrote:

One way to mitigate this - I think this approach was implemented at some point at the SLS - is to route the large array channels through a dedicated separate gateway instance. (Ease of configuration depends on your naming convention.)

Cheers,
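
For the dedicated-instance approach, the split is typically expressed in the two gateways' PV list files; the patterns below assume a purely hypothetical naming convention in which the large arrays all end in :image1:ArrayData:

    # gateway-main.pvlist  -- general-purpose gateway: everything except the big arrays
    EVALUATION ORDER ALLOW, DENY
    .*                       ALLOW
    .*:image1:ArrayData      DENY

    # gateway-arrays.pvlist -- dedicated instance: only the big arrays
    EVALUATION ORDER DENY, ALLOW
    .*                       DENY
    .*:image1:ArrayData      ALLOW

Each instance is then started with its own -pvlist file (and its own interfaces/ports), so the large-array gateway can get a bigger EPICS_CA_MAX_ARRAY_BYTES and its own CPU budget without affecting the general-purpose one.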