Subject: RE: Epics Archiver Appliance and Network Storage slowdown
From: "Manoussakis, Adamandios via Tech-talk" <tech-talk at aps.anl.gov>
To: "Shankar, Murali" <mshankar at slac.stanford.edu>, "tech-talk at aps.anl.gov" <tech-talk at aps.anl.gov>
Date: Mon, 21 Feb 2022 06:43:35 +0000
Thanks Murali, I will try to run some more tests, including the one that was mentioned, to make sure the transfer rates look correct. I can't imagine that local vs. NAS on this small a setup should be vastly different.

From: Tech-talk <tech-talk-bounces at aps.anl.gov> On Behalf Of Shankar, Murali via Tech-talk

>> I think for this experiment its only 6000 PVs

I think that should not take this long. Will look into this a bit here as well.

Regards,
Murali

From: Shankar, Murali

>> We are using the getDataAtTime endpoint
>> 40MB file from the archive

By this I'd guess that you are getting data for several 100,000's of PVs? The getDataAtTime API call has to look at all of these 100,000 files (with perhaps cache misses for most of them) and then do a binary search to determine the data point. Your NAS needs to support quite a high rate of IOPS for this to come back quickly. And this is a use case where even the smallest latency tends to accumulate quickly. Perhaps you can consider breaking your request down into smaller chunks when using the NAS?

Regards,
Murali
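As a follow-up, the chunking Murali suggests could be sketched roughly as below. This is a minimal, hypothetical client: the host name is made up, and the exact endpoint URL and response shape should be checked against your appliance's retrieval documentation (getDataAtTime accepts an `at` timestamp and a POSTed JSON list of PV names in recent releases, but verify for your version).

```python
import json
from urllib import request

# Hypothetical retrieval URL; replace with your appliance's host and port.
ARCHIVER = "http://archiver.example.org:17665"

def chunked(seq, size):
    """Yield successive slices of at most `size` items from `seq`."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def get_data_at_time(pv_names, iso_time, chunk_size=500):
    """Call getDataAtTime once per chunk of PVs and merge the results.

    Smaller chunks mean fewer files touched per request, which keeps
    each round trip short when the store sits on high-latency NAS.
    """
    merged = {}
    for chunk in chunked(pv_names, chunk_size):
        url = f"{ARCHIVER}/retrieval/data/getDataAtTime?at={iso_time}"
        req = request.Request(
            url,
            data=json.dumps(chunk).encode(),
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            merged.update(json.load(resp))
    return merged

# For the ~6000 PVs mentioned above, chunk_size=500 would issue 12 requests:
pvs = [f"SIM:PV:{i}" for i in range(6000)]  # placeholder PV names
parts = list(chunked(pvs, 500))
```

Tuning `chunk_size` trades request count against per-request I/O load; the right value depends on the NAS's sustained IOPS, so it is worth measuring a few sizes.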