Subject: | Safely proceeding machine learning applications |
From: | "Zhang, Tong via Tech-talk" <tech-talk at aps.anl.gov> |
To: | "tech-talk at aps.anl.gov" <tech-talk at aps.anl.gov> |
Date: | Tue, 29 Aug 2023 15:19:21 +0000 |
Dear Colleagues,

Machine learning applications in accelerator controls are gaining popularity, and there are exciting developments in progress. However, concerns persist about equipment protection, particularly with black-box ML models that may make risky decisions, especially during optimization iterations.

For model generation, training on archived data is a viable approach, but during the application phase such models may still make audacious decisions; even models trained on live data carry this risk. As far as I know, Channel Access security configuration is a sound way to manage PV write permissions at a granular level, covering individual users, groups, and workstations, so the ML code's write permissions can be finely tuned. Still, I wonder: is this approach completely secure? Incorporating the machine protection system as the primary safeguard on the device side is certainly crucial.

Your insights and experience on this subject are greatly appreciated.

Thanks,
Tong

--
Tong Zhang, Ph.D. (he/him)
Controls Physicist
Facility for Rare Isotope Beams, Michigan State University
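[For readers unfamiliar with the mechanism mentioned above: EPICS Channel Access security is configured through an access security file of roughly the following shape. Everything here is a hypothetical sketch; the group, user, host, and PV names are invented, and the CALC/INPA condition gates write access on an assumed machine-protection status PV.]

```
# Hypothetical EPICS access security configuration restricting ML writes.
# User and host groups for the ML service account and its workstation.
UAG(ml_opt)  { "mlservice" }
HAG(ml_host) { "ml-ws01" }

ASG(ML_TUNE) {
    INPA("ACC:MPS:OK")            # assumed machine-protection status PV
    RULE(1, READ)                 # anyone may read
    RULE(1, WRITE, TRAPWRITE) {   # writes are logged via put trapping
        UAG(ml_opt)               # only the ML user account...
        HAG(ml_host)              # ...from the ML workstation...
        CALC("A=1")               # ...and only while MPS reports OK
    }
}
```

Records in the IOC database would then reference the group via their ASG field (e.g. `field(ASG, "ML_TUNE")`). Note this restricts who may write and when, but not what value is written, which is why a device-side machine protection system remains the primary safeguard.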
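[A complementary software-side mitigation, independent of CA security, is a guard layer in the ML application itself that clamps every proposed setpoint to a configured safe range before any PV write is attempted. The following is a minimal sketch; the PV names and limits are illustrative, not any facility's actual configuration, and the `caput` callable is injected so it can be pyepics' `epics.caput` in practice.]

```python
# Hypothetical guard layer clamping ML-proposed setpoints to safe bounds
# before any Channel Access write. All names and limits are illustrative.

SAFE_LIMITS = {
    "QUAD:PS1:I_SET": (0.0, 120.0),  # amps, assumed range
    "CAV:AMPL_SET": (0.0, 0.8),      # normalized amplitude, assumed range
}


def clamp_setpoint(pv, value):
    """Return value clipped to the configured safe range for pv."""
    lo, hi = SAFE_LIMITS[pv]
    return min(max(value, lo), hi)


def safe_write(pv, value, caput=None):
    """Clamp the requested value, then write it via the supplied caput
    callable (e.g. epics.caput from pyepics). Returns the value written."""
    clamped = clamp_setpoint(pv, value)
    if caput is not None:
        caput(pv, clamped)
    return clamped
```

A wrapper like this does not replace CA security or the machine protection system; it simply ensures the optimizer's boldest proposals never leave the application unclamped.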