Subject: RE: EPICS PLC5 Support
From: Josh West via Tech-talk <tech-talk at aps.anl.gov>
To: Josh Fiddler <josh at themathochist.io>
Cc: "tech-talk at aps.anl.gov" <tech-talk at aps.anl.gov>
Date: Fri, 23 Feb 2024 19:35:12 +0000
Messrs. Lange and Fiddler,

Thank you both for the technical and reliability feedback! This data will help as I proceed through testing.

V/R
Joshua West
Lower Colorado River Authority | Hydro Control Systems Administrator
O: 512-793-3054
josh.west at lcra.org

From: Josh Fiddler <josh at themathochist.io>
Hey Josh, fellow Josh here. I can speak to React Automation Studio specifically. For two years it ran a small set of PLCs and custom embedded circuits, plus prototypes on SOCs like Raspberry Pis, hosted in Docker containers and deployed as a swarm across mixed Windows and Linux environments.

I can say without a doubt that EPICS is the gold standard for stability. Any "instability" will come from the database and driver programming that you do, so clean, simple interface design and integration testing are a must. William Duckitt, who designed RAS, has all the experience and expertise to help with implementation, as he did for me when I was tossed into the deep end at Fuse Energy Technologies and had to design the controls and DAQ system with very limited experience and expertise.

The UI and the custom and built-in React components are, in my opinion, superior to the other offerings, which mix stacks to an unwieldy degree. Keeping to a stack that any web developer has in their back pocket means a full-stack developer can pick up enough EPICS knowledge quickly to get you going on any hardware.

Since your hardware is largely PLCs, StreamDevice is your friend for standard interfaces; there are other options for devices with novel communication interfaces as well.
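To give a feel for the shape of it, a StreamDevice binding for a TCP-attached PLC is typically just a protocol file, a record, and an asyn port. A minimal sketch; the port name, IP address, and command syntax here are invented, and the real ones depend entirely on your PLC's protocol:

    # plc.proto -- hypothetical ASCII protocol (command syntax is made up)
    Terminator = CR LF;
    get_temp {
        out "RD TEMP1";   # send the query
        in  "%f";         # parse the reply as a float
    }

    # temps.db -- an analog input record polling via the protocol above
    record(ai, "PLC1:Temp") {
        field(DTYP, "stream")
        field(INP,  "@plc.proto get_temp plc1")
        field(SCAN, "1 second")
    }

    # st.cmd -- create the asyn TCP port the protocol talks through
    drvAsynIPPortConfigure("plc1", "192.168.1.50:2000")

The nice part is that swapping hardware usually means touching only the protocol file, not the records.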
I tried to keep communications as uniform as possible, favoring TCP/IP-based solutions; my background in networking and HPC meant I could avoid fiddling too much with more sophisticated DAQ hardware like VMEbus, isolating that as a different problem in a different container. I was responsible for the code and built prototypes with RPi-based boards for remote and local control and DAQ. One partner designed the PCBs and worked on the power systems; another built the analog circuits and the PLC-based systems. I was the only person with any development experience, having built REST APIs for other data-laden applications. If it weren't for React Automation Studio and Docker multi-architecture builds, that project would have been much harder.
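For what it's worth, the multi-architecture part boils down to one buildx invocation that produces a single image manifest covering both the x86 servers and the Pis. A sketch; the image and registry names are invented:

    # build and push one image for x86 servers and Raspberry Pis alike
    docker buildx create --name multiarch --use
    docker buildx build \
        --platform linux/amd64,linux/arm64,linux/arm/v7 \
        -t registry.example.com/my-ioc:latest \
        --push .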
They recently released V5, so there's lots of good new stuff too! The systems ran flawlessly for my time there, and as far as I know they are still part of the infrastructure. Any issues we encountered were due to our own implementations. I hope that helps.

Josh Fiddler
Problem Solver
Polymath
Accidental DAQ and Control Systems Architect
Hi Joshua,

Let me try a partial answer...

On Thu, 22 Feb 2024 at 15:44, Josh West via Tech-talk <tech-talk at aps.anl.gov> wrote:

> 1. Is the current build stable enough to run indefinitely with no EPICS-caused outages?

Tough question for a system that has only been around for 40 years. :-) Most systems in EPICS installations undergo regular electrical maintenance procedures (typically once a year), but there may be a few systems (think telescopes, cryo plants) with long runners. @all: Does anyone have outstanding long-runner IOCs?

Personally, I have a lot of trust in the core parts of the IOC software. Resource leaks and instabilities usually turn up and get fixed pretty quickly. It's locally developed drivers and Device Supports that often get a lot less use and testing, and since they run in the same process as the core parts, a bug in a driver can degrade or crash your IOC process. There are ways to run IOC processes so that they restart immediately in the unlikely case of a crash, leaving clients to see nothing more than a disconnect/reconnect flicker.
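One common way to get that behavior is to run each IOC under procServ, which respawns the process when it exits and puts its console on a local telnet port. A sketch, with paths and port number invented:

    # respawn the IOC on exit; console reachable via telnet localhost:20001
    procServ --name myioc \
             --logfile /var/log/iocs/myioc.log \
             --chdir /epics/iocs/myioc/iocBoot/iocmyioc \
             20001 ./st.cmd

Wrapping that in a systemd unit with Restart=always also covers the case where procServ itself dies.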
Also, system architecture can help. E.g., handling the long upstream connection and health monitoring in one IOC and the local controller connection in a separate one wouldn't leave the remote end blind if the controller IOC is in trouble.

The EPICS network protocols (Channel Access, PV Access) are scalable and handle disconnect/reconnect situations robustly. If you have outages in your network, the EPICS layer will not add significantly to them.

> 5. Are there any plans/projects to implement more stringent information security mechanisms into the system (e.g., something consistent with NIST, IEC, etc. for critical infrastructure)?

There is a current project to add TLS (that would be IEC 62351) to the newer PV Access protocol. It is at the prototype stage; certificate handling is the hard part that's still ahead. (See the last talk during the last EPICS Collaboration Meeting [1] for more details.)

The older Channel Access protocol can be run through SSH tunnels to achieve a comparable level of security.
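A sketch of the tunnel approach; the host names are invented, 5064 is the default CA server port, and the exact client behavior depends on your Base version and server setup:

    # forward the IOC's CA TCP port through an SSH gateway
    ssh -N -L 5064:ioc-host.internal:5064 user@gateway.example.org

    # have the client search over TCP via the tunnel instead of UDP broadcast
    export EPICS_CA_AUTO_ADDR_LIST=NO
    export EPICS_CA_NAME_SERVERS=localhost:5064
    caget SOME:PV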
Cheers,

[1] https://conference.sns.gov/event/258/timetable/?view=standard