Please note that my test setup is several CA clients connected to a single CA server, which seems to be the reverse
of the “100 connections * 10MB = 1GB” setup, where a single CA client is connected to several CA servers. But I think these two setups should produce almost the same results when testing memory usage by CA TCP connections.
2. “automatically resized up to
max_array_bytes”: I agree with Matej that this is very convenient on the client side. I guess most of us have experienced having to set EPICS_CA_MAX_ARRAY_BYTES to a larger value on both the IOC side (in
st.cmd) and the OPI side (i.e. CSS, EDM) when we deal with array-like records such as compress, waveform,
subArray, aSub, etc. It would be nice to automatically (or semi-automatically) set EPICS_CA_MAX_ARRAY_BYTES to an appropriate value on the IOC side during
iocInit. I guess it might be feasible, and here is what I think:
1) Add something like
iterateRecords(calculateMaxArrayBytes, NULL) to
initDatabase() during iocBuild() to calculate the max. memory size (number of bytes) that an array-type record needs. For instance, if all records in the IOC are scalar-type,
we do nothing with EPICS_CA_MAX_ARRAY_BYTES. If there is a waveform record in the IOC, we can calculate the max. memory size (say
max_array_mem_size) allocated for the waveform record as
prec->NELM * dbValueSize(prec->FTVL). Using similar algorithms for the other array-type records to get their max. memory sizes, we eventually get the largest value of
max_array_mem_size. If (max_array_mem_size * 1.2) > the default EPICS_CA_MAX_ARRAY_BYTES (16384 bytes),
then we set EPICS_CA_MAX_ARRAY_BYTES =
max_array_mem_size * 1.2. The coefficient “1.2” (or some other value) adds extra bytes to cover compound data types (e.g. DBR_GR_DOUBLE).
2) The algorithm described above won’t work if the IOC contains non-standard records (not included in base),
such as records from the synApps package. In that case we have to configure EPICS_CA_MAX_ARRAY_BYTES manually in the IOC st.cmd file. If EPICS_CA_MAX_ARRAY_BYTES is defined before
iocInit(), then iterateRecords(calculateMaxArrayBytes, NULL) just returns and does nothing.
iterateRecords(calculateMaxArrayBytes, NULL) will make EPICS_CA_MAX_ARRAY_BYTES transparent on the IOC side for most users in most cases, while manual configuration of
EPICS_CA_MAX_ARRAY_BYTES will always work for all cases.
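For reference, the manual fallback in 2) is just setting the environment variable before iocInit in st.cmd; the numeric value below is only an example:

```
# st.cmd: raise the CA array limit before iocInit (value is an example)
epicsEnvSet("EPICS_CA_MAX_ARRAY_BYTES", "10000000")
iocInit()
```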
Sorry for this long message. If most of us think this kind of semi-automatic
configuration of EPICS_CA_MAX_ARRAY_BYTES on the IOC side is worth implementing, I will give it a try when I have spare time.
Rok Sabjan notified me about this thread. Thanks to Lewis for the replies.
The old send buffer algorithm was to initialize the send buffer size to max_array_bytes and
automatically resize it on demand (there is one send buffer per TCP connection). Not something
one would dare to use on a server, but very convenient on the client side.
However, if a client has a lot of connections, a lot of memory is required when max_array_bytes is large (e.g. 100 connections * 10MB = 1GB!).
The current algorithm starts with an initial size of 1k that can be automatically resized up to max_array_bytes.
This also mimics the C++ CA algorithm (which has itself evolved over the years).