Dear Colleagues,
Machine learning applications in accelerator controls are indeed gaining popularity, and there are exciting developments in progress. However, concerns persist regarding equipment protection, particularly when dealing with black-box ML
models that may make risky decisions, especially during optimization iterations.
When it comes to ML model generation, utilizing archived data is a viable approach. However, during the application phase, these models may still propose aggressive or unsafe setpoints. Even when trained on live data, the risk remains.
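One mitigation we have been considering, independent of CA security, is a software guard layer between the ML optimizer and any PV write: proposed setpoints are clipped to pre-approved limits, and writes to PVs without configured limits are refused outright. A minimal sketch in Python (the PV name and limits below are purely hypothetical, and the clipped value would then go to a normal caput):

```python
# Hypothetical guard layer: clamp ML-proposed setpoints to safe limits
# before they are ever written out over Channel Access.
# PV names and bounds here are illustrative only.

SAFE_LIMITS = {
    "FE_ISRC1:BEAM:CURRENT_CSET": (0.0, 50.0),  # hypothetical PV and range
}

def guard_setpoint(pv, value):
    """Return the value clipped to the configured safe range for this PV.

    Fails closed: if no limits are configured for the PV, refuse to
    write rather than passing the raw ML output through.
    """
    if pv not in SAFE_LIMITS:
        raise KeyError(f"No safe limits configured for {pv}; refusing to write")
    lo, hi = SAFE_LIMITS[pv]
    return min(max(value, lo), hi)

# The guarded value would then be handed to e.g. epics.caput(pv, guarded_value).
```

The fail-closed behavior is deliberate: an optimizer exploring a new PV should hit an error, not silently gain write access.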
As far as I know, leveraging the Channel Access security configuration is a sound strategy to manage PV write permissions at a granular level, covering individual users, groups, and workstations. This lets the ML code's write permissions be finely tuned. I'm still wondering, though, whether this approach is fully secure on its own.
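For concreteness, an access-security file (ACF) along these lines could restrict writes on the tuning PVs to a dedicated ML account running on a specific workstation; the user, host, and group names below are purely hypothetical:

```
UAG(ml_ops)   {mloptimizer}
HAG(ml_hosts) {ctlws01}
ASG(ML_TUNE) {
    RULE(1, READ)
    RULE(1, WRITE) {
        UAG(ml_ops)
        HAG(ml_hosts)
    }
}
```

PVs assigned ASG "ML_TUNE" would then be readable by everyone but writable only from that account/host pair, which keeps the blast radius of a misbehaving optimizer contained.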
Absolutely, incorporating the machine protection system as the primary safeguard on the device side is crucial. Your insights and experience on this subject would be greatly appreciated.
Thanks,
Tong
--
Tong Zhang, Ph.D. (he/him)
Controls Physicist
Facility for Rare Isotope Beams,
Michigan State University