Hi Mark,
I was hoping you would respond. Thanks for looking into this.
On 5/30/19 12:23 PM, Mark Rivers wrote:
Hi Klemen,
I have some ideas on this, but I need to have a little more information.
About how many detectors do you have? You said 500 ports, so I assume that means 500 detectors? You would have one asyn port per detector?
Yes, one detector per port. 500 is the maximum number of detectors per
beamline, so that is the worst case we have to support.
Would all ports be running in a single IOC, or multiple IOCs?
It's a single IOC, primarily because there's one dedicated fiber channel
that connects all detectors to one PCIe board. We could try to split it
up but it's been reliable and performant so far.
You have about 500 parameters per detector, i.e. per port?
That's right. In the database there are potentially more connected records
for each parameter: 500 detectors x (500 parameter PVs + 300 connected PVs)
= 400,000 PVs in a single IOC. Aside from slow IOC boot, everything
works just fine and performs great.
How many parameters are being written to, and at about what rate?
Writing parameters is rare; maybe once a day we change two parameters.
About half of the parameters change in a loop when we are calibrating,
which lasts about one day and happens every few months. Performance is
not an issue during calibration.
How many parameters are being read, and about what rate?
Half of the parameters are read-only status and use SCAN=I/O Intr. We have
a trigger PV that updates them all at once, every 60 seconds.
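For context, the periodic status refresh is conceptually like the sketch below. This is a self-contained illustration, not our actual driver: `readAll` and `publish` are hypothetical stand-ins for the bulk status read and for asynPortDriver's `callParamCallbacks()` fan-out to the I/O Intr records.

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>
#include <vector>

// Minimal sketch of a status poller: every `period` it reads all
// read-only status parameters in one shot and publishes them
// (stand-in for callParamCallbacks() driving I/O Intr records).
class StatusPoller {
public:
    StatusPoller(std::chrono::milliseconds period,
                 std::function<std::vector<int>()> readAll,
                 std::function<void(const std::vector<int>&)> publish)
        : running_(true),
          worker_([=] {
              while (running_) {
                  publish(readAll());  // one bulk read, then fan-out
                  std::this_thread::sleep_for(period);
              }
          }) {}
    ~StatusPoller() { running_ = false; worker_.join(); }
private:
    std::atomic<bool> running_;  // declared before worker_: initialized first
    std::thread worker_;
};
```

The point of the pattern is that one bulk exchange per period serves all status records, so the poll cost is independent of how many records subscribe.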
How are your detectors communicating, i.e. TCP/IP or some other mechanism?
It's a custom protocol over a custom-made PCI Express board. A Linux
kernel driver talks to the PCIe board, and an asynPortDriver subclass
interfaces to the kernel driver. A 2.5 Gbps fiber connects the PCIe board
to the detector aggregator box.
One particularity of the protocol is that we need to exchange all
configuration parameters at once (similarly for status). So in the
asynPortDriver-derived class we pack all parameters into one packet and
send it to the detector. All writable parameters are cached in software,
and there are two additional PVs: one triggers packing and sending, and
the second reports the status of the pack/send operation. If the number
of changing parameters were high there would be a lot of redundancy, but
luckily this is not the case. There are new detectors on the horizon that
will support exchanging a single parameter at a time, which is what is
driving this initiative.
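The cache-then-pack scheme described above looks roughly like this. It is a hedged, self-contained sketch: the class and method names, the 16-bit parameter id, and the little-endian 32-bit value encoding are illustrative assumptions, not our wire format.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Sketch of the "send everything at once" scheme: writable parameters
// are cached; a trigger packs the whole cache into a single packet
// (assumed layout: 16-bit id + 32-bit value, little-endian) that would
// go out over the PCIe/fiber link in one exchange.
class ParamCache {
public:
    void write(uint16_t id, int32_t value) { cache_[id] = value; }

    // Called by the hypothetical "pack and send" trigger parameter.
    std::vector<uint8_t> pack() const {
        std::vector<uint8_t> pkt;
        for (const auto& kv : cache_) {
            pkt.push_back(kv.first & 0xFF);        // id, low byte
            pkt.push_back(kv.first >> 8);          // id, high byte
            for (int shift = 0; shift < 32; shift += 8)
                pkt.push_back((kv.second >> shift) & 0xFF);
        }
        return pkt;
    }
private:
    std::map<uint16_t, int32_t> cache_;  // all writable parameters
};
```

Because the packet always carries the full cache, a write that changes one parameter still resends all of them, which is the redundancy mentioned above.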
Are you currently using the asyn "addr" parameter, or just the "port" to specify the detector and the drvUser field to specify the parameter?
Just the port; due to the way we manage detector configuration this is
easiest. I have thought about using addr but haven't tried it out yet.
Is it important that a write operation actually have completed before the record processing completes?
So far writing is an asynchronous operation and the client has to use
another PV to wait for the status. It is desired, however, that writable
PVs become synchronous through put callback.
Would it be OK to have status be periodically polled and use SCAN=I/O Intr?
Yes, that's what we do.
-- Klemen
Mark
-----Original Message-----
From: [email protected] <[email protected]> On Behalf Of Klemen Vodopivec via Tech-talk
Sent: Thursday, May 30, 2019 9:56 AM
To: [email protected]
Subject: How to share asyn's queue thread among ports?
Hi,
In a fast data-acquisition application that also controls hundreds of neutron detectors, we are using the asynPortDriver interface for detector configuration and status communication, which results in many asynPortDriver parameters. The communication is fast (less than 1 ms for a request/response round trip) but probably still counts as blocking.
Especially because there are approximately 500 parameters per port, many of them written/read in groups; if every request blocks for 1 ms it could block the CA thread significantly. Our workaround so far was done in the database, but it is overly complex. The obvious solution is to use asyn's blocking mode, but with up to 500 ports that results in as many queue threads.
Ideally there would be an asynPortDriver interface for managing queue threading, which we could then override to do custom thread management.
Is there any other approach to keep the number of threads low while providing blocking mode? Would an asyn patch that lets asynPortDriver do its own queue thread management be of broader interest?
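The kind of thread sharing I have in mind could look something like the following. This is a self-contained sketch with no asyn API (class and member names are made up): a single worker drains a queue of requests tagged with their target port, instead of asyn allocating one queue thread per blocking port.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

// One worker thread serving requests for many "ports", instead of the
// one-queue-thread-per-port model. Each request carries the port it
// targets; the worker executes them serially, so per-port blocking I/O
// (~1 ms round trips) does not require a dedicated thread per port.
class SharedQueue {
public:
    SharedQueue() : stop_(false), worker_(&SharedQueue::run, this) {}
    ~SharedQueue() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_one();
        worker_.join();  // drains remaining requests before exiting
    }
    void submit(int port, std::function<void(int)> req) {
        { std::lock_guard<std::mutex> lk(m_); q_.push({port, std::move(req)}); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::pair<int, std::function<void(int)>> item;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [&] { return stop_ || !q_.empty(); });
                if (q_.empty()) return;  // stop requested, queue drained
                item = std::move(q_.front());
                q_.pop();
            }
            item.second(item.first);  // blocking I/O happens here
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_;
    std::queue<std::pair<int, std::function<void(int)>>> q_;
    std::thread worker_;  // declared last: starts after state is ready
};
```

The trade-off is that requests for all ports are serialized on one thread; a middle ground would be a small pool of such workers with ports hashed onto them.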
-- Klemen