Hi:
The archive engine basically stores values on change, or less frequently.
For details see chapter 2 of
https://ics-web.sns.ornl.gov/kasemir/archiver/manual.pdf; the ideas go back to the Channel Archiver.
It would be silly to write the same, unchanged value over and over. Plus we want to store the original time stamp of the PV as set by the IOC, not the time when it was received by the archive engine or written to Oracle/MySQL/Postgres. For retrieval, this
mostly means two things:
First, when you are interested in data from time1 to time2, you need to start with the last sample at-or-before time1.
This matters especially for a "setpoint" type of channel that is only changed by operators: the last change might have been two weeks ago, but you still want to see the value "now" (which is the unchanged value from two weeks ago). See section 2.7.2
of the old manual.
For MySQL, that means
SELECT smpl_time, nanosecs
FROM sample
WHERE channel_id=? AND smpl_time<=?
ORDER BY smpl_time DESC, nanosecs DESC LIMIT 1
For what it's worth, this is the biggest problem with the RDB-based archive. Selecting that single first sample is what can take most of the time when reading data. The following
SELECT smpl_time, nanosecs
FROM sample
WHERE channel_id=? AND smpl_time BETWEEN ? AND ?
is usually pretty spiffy, even if it's a million samples.
Secondly, when you look at data for more than one channel, you need to somehow arrange the raw data for all channels, each with its original time stamps, into a spreadsheet-type layout, using for example 'staircase' or 'linear' interpolation;
see section 2.7.3 of the old manual.
Thanks,
Kay
Dear all,
I have a question regarding recovering data archived by EPICS using MySQL. I use Python code to automatically recover and analyse a set of data. Some of these data are constant and compressed by EPICS, so when I extract them for a
given time stamp, I get "nan". The only way to get a value is to have a variation. Looping over the time stamps is not feasible, as I can easily miss the right time stamp. Do you know of any way to directly extract constant, compressed data?
Here is the code I use for the extraction:
def db_fetch1(variable, time1, time2):
    import pymysql
    import numpy as np

    conn = pymysql.connect(host='accmysqlslv', port=3306, user='archive',
                           passwd='archive', db='archive2')
    cur = conn.cursor()
    # Parameterized query instead of string concatenation
    command = ("select float_val from sample s"
               " where s.channel_id = (select channel_id from channel where name = %s)"
               " and smpl_time >= %s and smpl_time < %s")
    cur.execute(command, (variable, time1, time2))
    output = []
    for row in cur:
        if row[0] is not None:      # skip samples without a float value
            output.append(row[0])
    cur.close()
    conn.close()
    output = np.array(output, dtype=float)
    return output
Thank you,
Best regards,
Adnan Ghribi
CNRS - GANIL