Subject: Re: [EXTERNAL] Fwd: MongoDB Implementation using EPICS 7
From: "Kasemir, Kay via Tech-talk" <tech-talk at aps.anl.gov>
To: EPICS Tech Talk <tech-talk at aps.anl.gov>, "Jemian, Pete R." <jemian at anl.gov>
Date: Thu, 23 Mar 2023 13:42:24 +0000
Hi:
> One could think of using MongoDB as a backend for an EPICS archiver (which records time series of the configured PVs) but there are better backends to choose for such time-series data.
I briefly looked into that many years ago.
MongoDB allows storing arbitrary data: each "value" can be a different JSON-type structure. For archived data, I didn't see a huge benefit, because we do have structured data. Each sample is basically a time stamp and a value, then millions of those. OK, different PVs might have slightly different data types, but I don't need to store a sequence of random structures. That was before PV Access. With PV Access, you might argue that you do want to store arbitrary structures, but then you also need custom clients to decode them and make sense of the data. So I'd say the main use case for an archive is still a large number of timestamp + value samples.
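To illustrate the point about arbitrary structures, here is a small sketch (plain Python dicts standing in for MongoDB documents; the PV names and fields are made up): a schema-less store happily accepts both a scalar sample and a PV Access-style structured sample, but a reader then needs custom logic to interpret each "value".

```python
# Two illustrative "documents" as a schema-less store like MongoDB could hold
# them. Each document's "value" may be a completely different JSON structure.
# PV names, timestamps and fields are invented for this example.
scalar_sample = {
    "pv": "Demo:Temperature",
    "time": "2023-03-23T13:42:24Z",
    "value": 21.3,
    "severity": "NONE",
}
structured_sample = {
    "pv": "Demo:Image",
    "time": "2023-03-23T13:42:25Z",
    "value": {                      # a PV Access-style structure
        "dimensions": [640, 480],
        "data": [0, 1, 2],          # truncated pixel data
        "attributes": {"camera": "cam1"},
    },
}

# A generic client can store both, but only a client that knows each
# structure can make sense of "value" when reading the data back:
for doc in (scalar_sample, structured_sample):
    print(doc["pv"], type(doc["value"]).__name__)
```

The flexibility costs you a uniform read path: every consumer must know every structure it might encounter, which is the trade-off described above.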
When I then compared MongoDB storing many samples of timestamp, value (and alarm info) against MySQL using a plain table of timestamp, value, alarm info, performance was roughly the same, so I stopped there.
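For reference, the "plain table" layout mentioned above can be sketched as follows. This uses SQLite as a stand-in for MySQL, and the table and column names are illustrative, not the actual archiver schema:

```python
import sqlite3

# One row per archived sample: PV name, timestamp, value, alarm info.
# Names are illustrative; the real archiver RDB schema may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sample (
        channel   TEXT    NOT NULL,  -- PV name
        smpl_time INTEGER NOT NULL,  -- epoch nanoseconds
        float_val REAL,              -- numeric value
        severity  TEXT,              -- alarm severity
        status    TEXT               -- alarm status
    )
""")
samples = [
    ("Demo:Temperature", 1679578944_000000000, 21.3, "NONE",  "NO_ALARM"),
    ("Demo:Temperature", 1679578945_000000000, 23.9, "MINOR", "HIGH"),
]
conn.executemany("INSERT INTO sample VALUES (?, ?, ?, ?, ?)", samples)

# Typical archiver read: time-ordered samples for one PV
rows = conn.execute(
    "SELECT smpl_time, float_val, severity FROM sample "
    "WHERE channel = ? ORDER BY smpl_time",
    ("Demo:Temperature",),
).fetchall()
print(rows)
```

Every row has the same shape, so any client can read any PV's history without per-PV decoding logic, which is the structural regularity the post argues an archive mostly has anyway.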
Cheers,
Kay