Hi everyone, thank you for the responses. I wanted to write back to each of you.
@Murali
I think for this experiment it's only 6000 PVs and about 100-150 GB of data stored (most likely it's all sitting in the LTS). I can try creating smaller requests of PVs and then combining the JSONs at the end, along the lines of the sketch below.
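Roughly what I mean; a minimal sketch, assuming a fetch_chunk() helper that POSTs one sub-list of PVs to getDataAtTime and returns the parsed JSON dict (a version of that helper is sketched further down, in my reply to Michael):

def get_data_at_time_chunked(pv_names, iso_time, chunk_size=500):
    """Split the PV list into smaller requests and merge the JSON results.
    The chunk_size of 500 is a guess and worth tuning against the NAS."""
    merged = {}
    for i in range(0, len(pv_names), chunk_size):
        # fetch_chunk is the hypothetical single-request helper sketched below
        merged.update(fetch_chunk(pv_names[i:i + chunk_size], iso_time))
    return merged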
@montis
* How have you configured the NAS mount point? NFS? iSCSI?
It is mounted over NFS, using the nfs-common package on Ubuntu.
* What are the parameters used in your appliance for STS/MTS/LTS (policies.py)?
I have the default settings that came with the archiver setup from Han's repo:
https://github.com/jeonghanlee/epicsarchiverap-env
* How is your firewall configured? SELinux? (if used)
I have spoken with our IT and there should be no firewall; it's a closed network for our controls testing.
@Michael
“Sorry if I'm a little confused by the above. Is your Python script testing retrieval via AA?”
That is correct; it is just requesting directly from the AA using the getDataAtTime URL, and I use POST with the list of PVs I want to request:
url = ''
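For reference, each request looks roughly like this; a sketch using the requests library, where the host/port are placeholders for our appliance and fetch_chunk is the helper name assumed in the chunking sketch above:

import requests

# Placeholder appliance URL; the real host/port differ per installation.
URL = "http://appliance.example.org:17668/retrieval/data/getDataAtTime"

def fetch_chunk(pv_names, iso_time):
    """POST a JSON list of PV names to getDataAtTime; the response is a
    JSON object mapping each PV to its value at the requested instant."""
    resp = requests.post(URL, params={"at": iso_time}, json=pv_names,
                         timeout=300)
    resp.raise_for_status()
    return resp.json()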
“It might be interesting to start with some storage access benchmarks. AA retrieval is never going to be faster than this. At simplest, time how long it takes to read some files from the NAS vs. local. Something like:
> dd bs=1M of=/dev/null if=/arch/lts/whatever.pb”
I will give this a try and get back to you, thanks for this.
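In case it helps anyone else, this is roughly the Python equivalent of that dd timing, which I plan to run against the same .pb file on the NAS mount and on local disk (both paths below are placeholders; note the OS page cache will inflate repeated runs over the same file):

import time

# Placeholder paths: the same .pb file on the NAS mount and a local copy.
PATHS = ["/arch/lts/whatever.pb", "/tmp/whatever_local.pb"]

for path in PATHS:
    start = time.monotonic()
    nbytes = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(1 << 20)  # 1 MiB reads, mirroring dd bs=1M
            if not chunk:
                break
            nbytes += len(chunk)
    elapsed = time.monotonic() - start
    print(f"{path}: {nbytes / 1e6:.1f} MB in {elapsed:.2f} s "
          f"({nbytes / 1e6 / elapsed:.1f} MB/s)")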
Thanks, everyone!
From: Tech-talk <tech-talk-bounces at aps.anl.gov> On Behalf Of Shankar, Murali via Tech-talk
Sent: Saturday, February 12, 2022 8:04 AM
To: tech-talk at aps.anl.gov
Subject: Re: Epics Archiver Appliance and Network Storage slowdown
>> We are using the getDataAtTime endpoint
>> 40MB file from the archive
By this I'd guess that you are getting data for several hundred thousand PVs? The getDataAtTime API call has to look at all of these 100,000 files (with perhaps cache misses for most of them) and then do a binary search to determine the data point. Your NAS needs to support quite a high rate of IOPS for this to come back quickly. And this is a use case where even the smallest latency tends to accumulate quickly. Perhaps you can consider breaking down your request into smaller chunks when using the NAS?