EPICS Controls Argonne National Laboratory

Experimental Physics and
Industrial Control System


Subject: Re: External: Re: Generic EPICS IOCs
From: Anders Lindh Olsson via Tech-talk <tech-talk at aps.anl.gov>
To: Niko Kivel <Niko.Kivel at LIGHTSOURCE.CA>
Cc: "tech-talk at aps.anl.gov" <tech-talk at aps.anl.gov>
Date: Fri, 19 Jan 2024 08:39:37 +0000

Hi Niko,

 

If you are curious about e3, you can find out more in our tech docs (built from https://gitlab.esss.lu.se/e3/e3.pages.esss.lu.se; hosted at http://e3.pages.esss.lu.se/, though it is temporarily inaccessible from external networks), and you can find our version of require at https://gitlab.esss.lu.se/e3/wrappers/e3-require (note the CHANGELOG in particular).

 

We have made some additional conceptual changes compared to PSI (e.g. wrappers and a build front-end; we also have a version that uses conda for package and environment management). You can find more about these in the e3 tech docs.

 

 

Cheers

A  

 

From: Tech-talk <tech-talk-bounces at aps.anl.gov> on behalf of Niko Kivel via Tech-talk <tech-talk at aps.anl.gov>
Reply to: Niko Kivel <Niko.Kivel at LIGHTSOURCE.CA>
Date: Thursday, 18 January 2024 at 17:06
Cc: "tech-talk at aps.anl.gov" <tech-talk at aps.anl.gov>
Subject: Re: External: Re: Generic EPICS IOCs

 

Hi all

 

Here at the Canadian Light Source, we're in the process of adopting the original require that Dirk Zimoch came up with at PSI, which, as far as I know, is the basis of E3.
If I want to run IOCs in a container, I simply give the container access to the respective location on the network file system. I did this as a proof of concept, but we don't do it in production and are not currently looking into that alternative. Since this works with the original require, and afaik E3 uses it under the hood and mostly deals with the nuances of compilation, there is a very high probability that it works in a container as well.

We use docker exclusively for building the require-modules, not for running IOCs.
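The bind-mount approach described above might look roughly like the following. This is only a sketch: the mount paths, image name, and IOC layout are hypothetical and entirely site-specific.

```shell
# Hypothetical sketch: the paths, mount points, and image name are
# assumptions, not anything from this thread.
# --network host: IOCs typically need host networking for Channel Access.
docker run --rm -it --network host \
  -v /net/epics/modules:/epics/modules:ro \
  -v /net/epics/iocs/my-ioc:/epics/ioc \
  my-epics-base-image \
  iocsh /epics/ioc/startup.cmd
```

Because require resolves and loads modules at IOC startup from the mounted tree, the container itself only needs EPICS base and the require-aware iocsh.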

 

Timo provided a very nice summary of the benefits of this modular approach, and I can second all of his statements. Imho, the benefit of being able to identify "what is running where" is reason enough to use it.

 

My 2 cents about require; skip this if you don't care about my personal preferences/opinions.

I was spared the experience of vanilla EPICS while I was at PSI, thank you, Dirk! It's safe to say that the shock of my first exposure to it made me clone require and set up the environment within the first week at my current position.

The modularity and dependency resolution remove a lot of overhead during IOC development. The integrator can focus on the features of the IOC and doesn't have to care about all the building blocks. Why would an IOC developer need to know that StreamDevice needs asyn and calc? The task is to write a device support with StreamDevice, not to become an expert in manually resolving dependencies.
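As an illustration of that point, a require-based startup script declares only the top-level module and lets require pull in the dependencies. This is a sketch only: the module version, port name, addresses, and database file are hypothetical.

```
# Hypothetical require-based startup script (iocsh).
# "require stream" resolves and loads the module's dependencies
# (e.g. asyn and calc) automatically at their recorded versions.
require stream, 2.8.24

epicsEnvSet("STREAM_PROTOCOL_PATH", ".")
drvAsynIPPortConfigure("PS1", "192.168.1.10:4001")
dbLoadRecords("powersupply.db", "P=MY:PS1:, PORT=PS1")
iocInit
```

The integrator never has to list asyn or calc, nor know where their headers and libraries live.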

 

Best,

Niko


From: Tech-talk <tech-talk-bounces at aps.anl.gov> on behalf of Timo Korhonen via Tech-talk <tech-talk at aps.anl.gov>
Sent: Thursday, January 18, 2024 5:30 AM
To: Han Lee <jeonglee at lbl.gov>; Knap, Giles (DLSLtd,RAL,LSCI) <giles.knap at diamond.ac.uk>
Cc: tech-talk at aps.anl.gov <tech-talk at aps.anl.gov>
Subject: External: Re: Generic EPICS IOCs

 

Dear Han and all,

 

I am sorry, Han, if I pushed you too much during the E3 development. We greatly appreciate the work you did and have been building upon it.

The environment has evolved really well, and we are now running over 1000 IOCs with it. There are so many things I like about our setup that I cannot fit them all in an email. I will just note here that:

 

  • First of all, it is stable. This is a worry I hear from many people, and I am not exactly sure why; maybe it is because modules are loaded at IOC startup. But pretty much every piece of modern software does that.
  • It makes it very easy to keep our IOCs up to date, especially when it comes to updating common infrastructure modules like recsync, logging services, etc. No need to hunt down the IOC developer (who may have left) to rebuild the IOC. System-specific modules can be left untouched.
  • The development and deployment processes are unified, and we can easily track down which modules, and which versions, each IOC is using.
  • New integrators, or IOC engineers, state that they prefer this approach to the “vanilla” method. This has been a big thing for us.
  • Every IOC instance is very “thin”, basically consisting of a startup script and a few configuration files.
  • etc. etc. I could go on for a long time. But in fact, I would like my colleagues to present their work instead of me writing.

 

And by the way, the system also supports areaDetector, which saves many IOC engineers the trouble of figuring out the compilation details.

 

One more thing to add: what makes our setup work so well is the integration with our deployment and monitoring system. E3 is only one piece of the puzzle.

 

Of course, this comes at the price of having a group of people to build and support it. That said, most of the work is already done: if anybody would like to try it out, it is not necessary to start from scratch. To be honest, it has required a lot of work, but I would say the investment has paid off. It would not have happened if we did not have the talented developers who have done amazing work on it.

 

But for each their own. If people prefer to compile each individual IOC, or use some other method, OK with me.

I just know that E3 works for us, and works very well. It has rescued us from the initial chaos we had back when you were with us, Han.

 

Best regards,

 

Timo

 

ps. As was pointed out to me, E3 would probably work in a container environment as well. Then the containers could be truly versatile.

 

 

From: Tech-talk <tech-talk-bounces at aps.anl.gov> on behalf of Han Lee via Tech-talk <tech-talk at aps.anl.gov>
Reply to: Han Lee <jeonglee at lbl.gov>
Date: Wednesday, 17 January 2024 at 18:44
To: "Knap, Giles (DLSLtd,RAL,LSCI)" <giles.knap at diamond.ac.uk>
Cc: "tech-talk at aps.anl.gov" <tech-talk at aps.anl.gov>
Subject: Re: Generic EPICS IOCs

 

Hi Giles,

 

As the person who designed and laid the foundations of E3 under the vision of my beloved chief engineer, Timo, I take your point well.

E3 is the ESS EPICS Environment, derived from and based on the PSI EPICS Environment.

 

However, personally, I wouldn't say I like that approach. ;)

 

Each site has its own issues in selecting whichever approach it establishes. It will be fine if your site has enough resources and knowledge to maintain it.

 

The EPICS collaboration meeting is where you can share your work with our community. I think you should present your architecture at upcoming events. Sometimes you will find a group of people who are interested in your architecture, and you can enjoy sharing your vision with them as well.

 

Best,

Han

 

On Wed, Jan 17, 2024 at 7:19 AM Knap, Giles (DLSLtd,RAL,LSCI) via Tech-talk <tech-talk at aps.anl.gov> wrote:

Thanks for all the responses.

 

I'll try to address as many of the points as I can.

 

Our approach is to have a very flat structure: an EPICS base container, and then one container that compiles the support for a particular class of device plus all of that support's dependencies. It will also install Debian packages for any system dependencies if required.

 

We initially experimented with a more hierarchical structure, but container dependency management became unwieldy. Instead, having all support modules compiled together, so that the dependencies are all inside the one container, really simplifies things. (Perhaps I could draw parallels to the way Arch Linux installs things.)

 

One example that we have extensively tested is GigE cameras, which use the ADAravis support module. You can see how this is made in the Dockerfile here: https://github.com/epics-containers/ioc-adaravis/blob/main/Dockerfile. Note that we include some generally useful support like autosave and iocStats.

 

It is absolutely the case that this targets a device of which we have hundreds, so it is beneficial for this site to compile the support once and then deploy it multiple times. Each beamline has a repo of IOC instances described in YAML. Deploying an IOC instance is a matter of telling Kubernetes to combine the generic ADAravis container with some config that specifies which camera to connect to, what PV prefixes to use, and which areaDetector plugins you want. These choices determine what the startup script and database look like.
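A hypothetical sketch of such a YAML instance description follows; the field names and schema are illustrative assumptions, not the actual ibek/epics-containers schema (see the repo linked above for real examples).

```yaml
# Hypothetical IOC instance description for a generic ADAravis container.
# Field names are illustrative; consult the epics-containers docs for
# the real schema.
ioc_name: bl01-di-cam-01
generic_image: ghcr.io/epics-containers/ioc-adaravis:2024.1.1  # assumed tag
entities:
  - type: ADAravis.aravisCamera
    prefix: BL01I-DI-CAM-01
    id: GigE-cam-serial-1234    # which camera to connect to
  - type: ADCore.NDStatsPlugin  # optional areaDetector plugin
    prefix: BL01I-DI-CAM-01:STAT
```

At deploy time this description is turned into the startup script and database, so the instance repo holds only thin configuration.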

 

The generic IOC concept works for this case, but the framework can also have individual IOCs. You can make a specific IOC container image using a Dockerfile that combines whatever support you want and embeds the YAML description (or a traditional startup script/DB if preferred). Or you can make your specific IOC image by deriving from a generic IOC and adding whatever you please. Containers are perfectly good at inheritance, but not that good at composition. I don't know anything about E3 and would appreciate some pointers, as it sounds like it handles the composition problem.

 

So the generic IOC is there to keep down the proliferation of container images, but the benefits of using containers and orchestration are still valuable even if you have mostly individually built containers. Clearly, whenever individually compiled code such as Sequencer state machines is involved, a non-generic container is needed.

 

One of the ways we are mitigating the need for custom combinations of support is that, as far as possible, we are making fine-grained IOCs that are one-to-one with a device. Historically, DLS diagnostic camera IOCs managed a hutch or two's worth of cameras; those are being split into one per camera. Kubernetes is good at managing lots of small services and allocating CPU/memory resources accordingly.

 

I'll finish by retracting the phrase "viable for use in any facility" and replacing it with "viable for use at more than one facility" :-)

 

Regards,

giles

 

 

 


From: Tech-talk <tech-talk-bounces at aps.anl.gov> on behalf of Andrea michelotti via Tech-talk <tech-talk at aps.anl.gov>
Sent: 16 January 2024 17:39
To: tech-talk at aps.anl.gov <tech-talk at aps.anl.gov>
Subject: Re: Generic EPICS IOCs

 

Hello,

At INFN-LNF, we adopted dockerization and Kubernetes in 2019 to efficiently manage one of our beam test facilities using our custom control system, !CHAOS. Through our experiences, we have become increasingly convinced of the benefits offered by a dockerized and orchestrated approach in control systems. I think this is especially valuable in addressing and mitigating the complexities introduced by an older, non-object-oriented control system like EPICS.
Considering our limited manpower for supporting custom control systems in upcoming accelerator infrastructures, we have decided to embrace EPICS, despite being relatively new to it. Starting afresh, our goal is to blend new technologies with our use of EPICS. We closely follow Giles' initiatives at Diamond and actively work to integrate his proposed workflows into our own, aiming for generality and maximum reuse of controls. I particularly appreciate the idea of having a tool like ibek that allows IOCs to be instantiated from YAML files. This, combined with support for 'generic containers', offers a clean and efficient way to create 'reusable controls'.

Best regards,
Andrea

Il 15/01/24 12:36, Knap, Giles (DLSLtd,RAL,LSCI) via Tech-talk ha scritto:

Good day All,

 

I would like to canvas opinion on the viability of the concept of Generic IOCs.

 

At DLS, we have been working on moving our IOCs to Kubernetes for a while.  Part of our approach is to use the concept of Generic IOCs for our container images as follows:

 

  • We build all the support for a given class of device into a single container image. This 'Generic IOC' has an IOC binary but no startup script or EPICS DB.
  • An IOC instance is a pointer to a Generic IOC image plus enough information to generate the startup script and DB at runtime.
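A generic IOC image along these lines could be sketched as a Dockerfile like the one below. This is hypothetical: the base image name, tags, module versions, and the `install-module.sh` helper are all assumptions, not the actual epics-containers build.

```dockerfile
# Hypothetical sketch of a Generic IOC image; names, tags, and the
# install-module.sh helper are assumptions, not the real build.
FROM ghcr.io/epics-containers/epics-base:7.0.8

# Compile the support for one class of device plus its dependencies.
RUN install-module.sh asyn R4-44 && \
    install-module.sh ADCore R3-13 && \
    install-module.sh ADAravis R2-3

# The image ships an IOC binary but no startup script or EPICS DB;
# those are generated at runtime from the instance description.
ENTRYPOINT ["/epics/ioc/start.sh"]
```

The point of the split is that everything compiled lives in the image, while everything instance-specific arrives as runtime configuration.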

 

 

So this means we have a generic version of EPICS base built into a container image, and then one generic IOC container image based upon that for each class of device. In none of these are any site-specific changes made; all the source we use is upstream and untouched (with one or two small exceptions to make compilers work in our base OS).

 

The intention is that these Generic IOCs are viable for use in any facility that chooses to use containers to deploy its IOCs (using any container runtime that is OCI compliant).

 

Is this a bad idea? Are there site-specific options that cannot be avoided and cannot be applied at runtime?

 

Any thoughts much appreciated.

 

Regards,

giles

 

-- 

This e-mail and any attachments may contain confidential, copyright and or privileged material, and are for the use of the intended addressee only. If you are not the intended addressee or an authorised recipient of the addressee please notify us of receipt by returning the e-mail and do not use, copy, retain, distribute or disclose the information in or attached to the e-mail.
Any opinions expressed within this e-mail are those of the individual and not necessarily of Diamond Light Source Ltd.
Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments are free from viruses and we cannot accept liability for any damage which you may sustain as a result of software viruses which may be transmitted in or with the message.
Diamond Light Source Limited (company no. 4375679). Registered in England and Wales with its registered office at Diamond House, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom
 

 

-- 
---------------------
dr. Andrea Michelotti
Head of Control Systems Service
 
INFN - Laboratori Nazionali di Frascati
Accelerator Division,
Bldg.2, Room 120
Via Enrico Fermi, 40
00044 Frascati (RM)
 
e-mail:   andrea.michelotti at infn.it
office:   (+39) 06.9403.2272
mobile:   (+39) 06.9403.8203
fax   :   (+39) 06.9403.2256
Teams :   amichelo at infn.it
LinkedIn: http://it.linkedin.com/in/michelotti

 




--

Jeong Han Lee, Dr.rer.nat.

Staff Scientist and Engineer

Cell: +1 510 384 3868

https://orcid.org/0000-0002-1699-2660

