My 2cts...
What you describe seems to be pretty much the container equivalent of system packages that install IOC binaries. (In addition to the IOC binary, the image will need to provide DBD information for the compiled-in modules.)
I would then expect issues and limitations in roughly the same areas, especially regarding usability at other facilities. A lot depends on how exactly you define "class of device".
Some roughly sorted thoughts:
- Managing the combinations of EPICS Support modules that are tested together is an issue. The more variety you allow, the more complicated maintenance and user support become.
- Are the provided IOC binaries extensible at run time? Device Support libraries are almost plug-ins, but creating the IOC binary still requires code to be generated and linked in. A true plug-in mechanism that doesn't need recompiling is possible (see E3), but adds risk.
- Building a stack of images (each one with one more Device Support and the resulting IOC binary/DBD) is possible, but fixes the set of modules and their order.
- Subroutine records? Sequencer state machines? Server-side filters? Anything that adds code will likely need its own IOC image ... these won't be very generic anymore.
Maybe a sample Containerfile showing how to use the Generic IOC image to build a (more) specific IOC image on top would be worthwhile.
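For instance (a minimal sketch only; the image name, paths and module name are made up, assuming the generic image ships EPICS Base and its support modules in a known layout):

    # Sketch only: image name, paths and module name are made up.
    FROM ghcr.io/example/generic-ioc:7.0

    # Add one more Device Support module, built against the EPICS Base
    # and support modules that ship with the generic image.
    COPY mydevSup/ /epics/support/mydevSup/
    RUN make -C /epics/support/mydevSup

    # Rebuild the IOC application: this regenerates the DBD and relinks
    # the IOC binary so the new support is compiled in.
    COPY specificApp/ /epics/ioc/
    RUN make -C /epics/ioc

That would keep the generic image as the single tested base, while each facility adds its specifics in one thin layer on top.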
I generally like the idea, for sure.
But with the amount of code that may have to be generated, added and linked in... "viable for use in any facility" might be challenging.
This is comparable to the binary distribution for the OPC UA Device support: that started with distributing IOC binaries, but it didn't work. ( "I need iocStats." "I need the <whatever> record." "I need QSRV/pvAccess." "How do I add a state machine?"...) The current way of distributing shared-library plus DBD leaves compiling the final IOC binary to the users, allowing them to add their specific code.
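In container terms, that model would look something like this (again a sketch; base image and paths are made up, the Makefile lines follow the usual EPICS application conventions):

    # Sketch only: base image and paths are made up.
    FROM ghcr.io/example/epics-base:7.0

    # The distributed artifacts: shared library plus DBD file.
    COPY libopcua.so /epics/support/opcua/lib/linux-x86_64/
    COPY opcua.dbd   /epics/support/opcua/dbd/

    # The user's own IOC application pulls them in through its Makefile
    # (following the usual EPICS application conventions):
    #     myioc_DBD  += opcua.dbd
    #     myioc_LIBS += opcua
    # and compiles the final IOC binary, adding any facility-specific code.
    COPY myiocApp/ /epics/ioc/
    RUN make -C /epics/ioc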
Cheers,
~Ralph