On 6/21/22 15:35, Pearson, Matthew via Tech-talk wrote:
> Hi,
>
> I’m currently testing transporting NTNDArray objects over PVAccess between two areaDetector IOCs, using the PVA plugin (to export the data) and the pvaDriver (to receive it on the other side). Both IOCs are running on the same Linux machine at the moment.
>
> I’m seeing poor performance using the default PV request type on the receive side:
Could you quantify what "poor performance" means? Are you only looking at dropped updates? Do you have any observations w.r.t. CPU and/or network load?
>   field()
> On the receive side I see fewer NTNDArray objects than I should, even with moderate frame rates and image sizes (50MB/s or so), and I have been testing various options:
>   field()
>   record[queueSize=100]field()
>   record[pipeline=true]field()
>   record[queueSize=100, pipeline=true]field()
> By grepping the source code, and reading some older PVRequest documentation, I think that the default queueSize is 2.
I think this is correct. FYI, this default is separately specified in pvAccessCPP and pvDatabaseCPP (also in pva2pva for the older p2p gateway). I think they are all "2".
> In another project we have been using the “record[queueSize=100]field()” option for years with good results, and I see the best results with that same option in this NTNDArray application. That seems to fix the issue, and I can run reliably with no lost data until I run out of CPU on the test machine.
FYI, without "pipeline=true" there is no guarantee that clients won't drop updates, so unrelated changes to system and/or network load may change things.
> But I am wondering if anyone can explain these options and if setting queueSize matters if I use the pipeline=true option?
Yes, absolutely. With "pipeline=true", queueSize sets the flow control window size, which is analogous
to the TCP window size, though applied to individual subscriptions.
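
For concreteness, here is a minimal client sketch using PVXS, where the .record() builder options are meant to produce the equivalent of "record[queueSize=100,pipeline=true]field()". The PV name "TST:Image" is hypothetical:

    #include <chrono>
    #include <iostream>
    #include <thread>

    #include <pvxs/client.h>

    using namespace pvxs;

    int main() {
        auto ctx(client::Context::fromEnv());

        // "TST:Image" is a hypothetical PV name.  The .record() options
        // should be equivalent to requesting
        //   record[queueSize=100,pipeline=true]field()
        auto sub = ctx.monitor("TST:Image")
                      .record("queueSize", 100u)
                      .record("pipeline", true)
                      .event([](client::Subscription& mon) {
                          try {
                              while (auto update = mon.pop()) {
                                  // Consume the update.  A slow consumer here
                                  // is what eventually closes the flow control
                                  // window on the server side.
                                  std::cout << update["uniqueId"] << "\n";
                              }
                          } catch (client::Finished&) {
                              std::cout << "subscription finished\n";
                          }
                      })
                      .exec();

        std::this_thread::sleep_for(std::chrono::minutes(1)); // demo only
        return 0;
    }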
> Has anyone else used the pipeline option?
Yes, although not often, as it isn't the default. I have to add a further caveat: I do little to no testing with pvDatabaseCPP.
I've tested "pipeline=true" with PVXS, and with the MonitorFIFO implementation in pvAccessCPP which underpins the pvas:: server API.
https://mdavidsaver.github.io/pvxs/
http://epics-base.github.io/pvAccessCPP/group__pvas.html
pvDatabaseCPP has its own implementation of the Monitor interface.
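
As a point of reference, a server-side sketch with PVXS (not pvDatabaseCPP), assuming a hypothetical PV name "TST:Counter". A mailbox SharedPV queues post()ed updates per subscriber, which is where the per-subscription window applies when a client requests pipeline=true:

    #include <chrono>
    #include <cstdint>
    #include <thread>

    #include <pvxs/nt.h>
    #include <pvxs/server.h>
    #include <pvxs/sharedpv.h>

    using namespace pvxs;

    int main() {
        // A "mailbox" SharedPV keeps a queue of post()ed updates per
        // subscriber.  When a client asks for pipeline=true, its queueSize
        // becomes the window limiting how many un-acknowledged updates the
        // server will send on that subscription.
        server::SharedPV pv(server::SharedPV::buildMailbox());

        auto initial = nt::NTScalar{TypeCode::UInt64}.create();
        initial["value"] = 0u;
        pv.open(initial);

        // "TST:Counter" is a hypothetical PV name.
        auto serv = server::Config::fromEnv().build().addPV("TST:Counter", pv);
        serv.start();

        for (uint64_t i = 1; i <= 60; ++i) { // post at ~1Hz
            auto update = initial.cloneEmpty();
            update["value"] = i;
            pv.post(std::move(update)); // queued independently per subscriber
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }

        serv.stop();
        return 0;
    }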
> I saw strange results with the “record[queueSize=100, pipeline=true]field()” option, where I was sending data at 1Hz but receiving at 100Hz or so.
Could you elaborate?
What do 1Hz and 100Hz mean in this context?
Do you subscribe to a PV post()ing updates at 1Hz and somehow get 99 extra updates?
To make sure we are all on the same page, "pipeline" enables per-subscription flow control messaging, which is roughly analogous to what is done by TCP per connection. This changes the behavior when a client-side subscription buffer is full. Instead of continuing to receive updates, and being forced to do something with them (typically discarding / squashing), the client stops sending acknowledgement messages back to the server, which forces the server to stop sending updates until the client has free space to receive them.
This by itself only forces the server to face the same decision the client otherwise would, and a server has to confront that decision anyway, since a server-side subscription buffer can also overflow if e.g. a driver tries to push too often. The benefit of "pipeline=true" is that bandwidth is not wasted sending updates which a client will then immediately discard (as is always the case with CA).
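
To sketch the accounting (a toy model only, not actual pvAccessCPP or PVXS code): acknowledgements act as credit which re-opens the window, so the server sends at most queueSize un-acknowledged updates at a time:

    #include <cstdio>
    #include <deque>

    // Toy illustration of credit-based flow control, not PVA wire code.
    // With pipeline=true the server may have at most `window` (= queueSize)
    // un-acknowledged updates in flight per subscription.
    struct Subscription {
        unsigned window;         // from record[queueSize=...]
        unsigned inFlight = 0;   // sent but not yet acknowledged
        std::deque<int> pending; // updates queued server-side

        void post(int update) {
            pending.push_back(update);
            pump();
        }
        void ack(unsigned n) { // client frees buffer space, grants credit
            inFlight -= n;
            pump();
        }
    private:
        void pump() { // send only while credit remains
            while (inFlight < window && !pending.empty()) {
                std::printf("send update %d\n", pending.front());
                pending.pop_front();
                ++inFlight;
            }
            // Once inFlight == window, updates accumulate in `pending`
            // until the next ack.  That queue can still overflow if the
            // producer outruns the client indefinitely.
        }
    };

    int main() {
        Subscription sub{2};     // queueSize=2
        for (int i = 0; i < 5; ++i)
            sub.post(i);         // only updates 0 and 1 go out immediately
        sub.ack(2);              // client acknowledges; 2 and 3 go out
        return 0;
    }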
"pipeline" is a protocol level change, modifying a message format, which was unfortunately not
introduced with a protocol version number bump. So there is no way for a client to discover whether a
server supports it. This is why users are burdened to know about "pipeline=true" Although, at this
point in the c++ world, both pvAccessCPP and PVXS have support.