If I understand correctly, you want to report or act on PV changes at a fixed rate that is slower than the real rate of change you expect for the PVs. I would probably start with something
like this (for brevity it uses 8 PVs, but it should scale to 100s of PVs without change, and to 1000s of PVs with no change except perhaps worrying about the I/O rate of the print() calls; you probably wouldn't really be printing anyway):
####
import epics
import time

pvnames = ['S:SRcurrentAI',
           'S:SRlifeTimeHrsCC',
           '13IDA:DMM1Ch11_calc.VAL',
           '13IDA:DMM1Ch12_calc.VAL',
           '13IDE:userTran1.A',
           '13IDE:userTran1.B',
           '13IDE:userTran1.C',
           '13IDE:userTran1.D']

pvs = [epics.PV(name) for name in pvnames]

MESSAGE_DELAY = 2.0   # delay time in seconds

while True:
    try:
        time.sleep(MESSAGE_DELAY)
        print("# Time : %s" % (time.ctime()))
        for pv in pvs:
            print("  %s %s" % (pv.pvname, pv.char_value))
    except KeyboardInterrupt:
        break
####
Here, the PV objects are internally monitored, automatically updating their values on each real change event. We ignore the events themselves and rely only on the fact that the values will be up to date
after we've slept for a short, fixed time. That acts on (here, print()s) every PV at a fixed interval. FWIW, pyepics caget() and caput() are built upon PV objects (which are cached, so that connections are not re-established), so the most naive script of
####
import epics
import time

pvnames = ['S:SRcurrentAI',
           'S:SRlifeTimeHrsCC',
           '13IDA:DMM1Ch11_calc.VAL',
           '13IDA:DMM1Ch12_calc.VAL',
           '13IDE:userTran1.A',
           '13IDE:userTran1.B',
           '13IDE:userTran1.C',
           '13IDE:userTran1.D']

MESSAGE_DELAY = 2.0   # delay time in seconds

while True:
    try:
        time.sleep(MESSAGE_DELAY)
        print("# Time : %s" % (time.ctime()))
        for name in pvnames:
            print("  %s %s" % (name, epics.caget(name, as_string=True)))
    except KeyboardInterrupt:
        break
####
would give the same output, and not be very different in performance.
If you want to act only when a PV actually changes, but not necessarily on *every* change, I would suggest testing the timestamp in the event handler and ignoring events that come too
soon after the last change, perhaps something like this:
####
import epics
import time

pvnames = ['S:SRcurrentAI',
           'S:SRlifeTimeHrsCC',
           '13IDA:DMM1Ch11_calc.VAL',
           '13IDA:DMM1Ch12_calc.VAL',
           '13IDE:userTran1.A',
           '13IDE:userTran1.B',
           '13IDE:userTran1.C',
           '13IDE:userTran1.D']

MESSAGE_DELAY = 2.0
last_update = {}

def onchange(pvname, value=None, char_value=None, timestamp=0, **kws):
    if timestamp < (last_update.get(pvname, 0) + MESSAGE_DELAY):
        # un-comment this next line to see what you are missing!
        # print("  <ignored event for %s>" % (pvname))
        return
    print("%s, value=%s at %s" % (pvname, char_value, time.ctime()))
    last_update[pvname] = timestamp

pvs = [epics.PV(name, callback=onchange) for name in pvnames]

# run, wait for changes (or run the rest of your app)
while True:
    try:
        time.sleep(0.05)
    except KeyboardInterrupt:
        break
####
In this version, we look at the timestamp of each change event (by default, pyepics requests the TIME_* DBR variant for a PV, so those timestamps come from the record itself) and ignore events that are too recent.
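The timestamp test itself has nothing EPICS-specific about it, so it is easy to check in isolation. Here is a minimal sketch of the same per-name throttling logic in plain Python, with a simulated stream of (name, timestamp) events; the helper name `throttled` and the event data are made up for illustration:

```python
import time

MESSAGE_DELAY = 2.0   # minimum seconds between reports, per name
last_update = {}      # name -> timestamp of the last reported event

def throttled(name, timestamp):
    """Return True if an event for `name` at `timestamp` should be
    reported (and record it); False if it is too soon after the last one."""
    if timestamp < last_update.get(name, 0) + MESSAGE_DELAY:
        return False
    last_update[name] = timestamp
    return True

# simulated events arriving faster than MESSAGE_DELAY for PV1
events = [('PV1', 100.0), ('PV1', 100.5), ('PV1', 101.0),
          ('PV1', 102.5), ('PV2', 100.7)]
reported = [name for (name, ts) in events if throttled(name, ts)]
# PV1 at 100.0 is reported; 100.5 and 101.0 fall within the 2 s window
# and are ignored; 102.5 is reported; PV2's first event is always reported.
print(reported)
```

Keying the bookkeeping dict by PV name means each PV is rate-limited independently, which is exactly what the onchange handler above does.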
All of these rely on the default behavior of pyepics PVs: they are monitored internally and updated asynchronously by threads in the CA library, but you should not need to worry
about this threading on the client end - it all happens in the background. A rich PV (with internal monitors, connection callbacks, and the TIME_* DBR variant by default) does use a few more resources than a bare channel access connection.
The expectation is that if you're using Python, programmer time is more valuable than counting bytes on the IOC; but if you're close to saturating an IOC or your network, you may want to be more careful.