EPICS Controls Argonne National Laboratory

Experimental Physics and
Industrial Control System


Subject: Re: double filters
From: Ben Franksen via Core-talk <core-talk at aps.anl.gov>
To: EPICS core-talk <core-talk at aps.anl.gov>
Date: Mon, 30 Mar 2020 11:15:17 +0200
Am 30.03.20 um 06:08 schrieb Johnson, Andrew N.:
> On Mar 29, 2020, at 9:32 AM, Ben Franksen via Core-talk
> <core-talk at aps.anl.gov <mailto:core-talk at aps.anl.gov>> wrote:
>>
>>> I understand that some JSON parsers won’t handle duplicate keys, is
>>> that what concerns you?
>> I don't expect any concrete difficulties arising from that, though I
>> must say it gives me an uneasy feeling. We are deviating from the
>> "normal" way a JSON object is interpreted without any concrete benefit
>> to justify that.
>
> We’re also requiring that the order of keys be preserved, which some
> JSON parsers can’t handle either. The YAJL library that we include in
> Base supports both for both parsing and generation because it’s a stream
> parser.

Okay, fine.

>> What can a filter do? Generally speaking, it is restricted to changing
>> what's in a db_field_log: time, status, severity, field size and type,
>> number of elements, and the data itself; it can also drop the whole
>> thing. Each filter plugin limits itself to changing a subset of these 6
>> items in a specific way and according to the parameters it was given.
>
> In some of its configurations the sync filter stores an incoming
> db_field_log and doesn’t pass it on until the next one arrives, to
> implement “last” filtering for example. I haven’t tried using it with
> arrays so it might be broken there, but for scalar values at least I
> suspect we can chain two filters together to get behaviors that can’t be
> achieved with a single sync filter (give me the last-but-one value
> before the “green” state).

Correct. I wrongly extrapolated from the other filters, which I had
looked at more closely. What I claimed is true only for stateless
filters, whereas sync is a stateful filter.

So. Assuming there are valid use cases for chaining multiple filters of
the same (stateful) type, does that mean /all/ filters must be written
in a style that allows them to be chained in this way?

If not, then I think we need another hook into the JSON parser that gets
called when a filter key "collision" is detected to allow the plugin to
report failure at this point.

>> It gets worse, though. If we add put/write filters to the picture,
>> then the feature is no longer "no cost". In fact, supporting it is not
>> possible without a lot of overhead.
>
> I agree that put filters are probably completely different to
> get/monitor filters, and there will probably be only a few put filters.

But I still think it makes the most sense to use the "same" array filter
for get, monitor, /and/ put.

>> For get or monitor events, it is easy to get an operational intuition
>> for what applying two array filters means: we first get the slice/stride
>> of the original field, then get another slice/stride of the resulting
>> array.
>>
>> It is unclear how to (operationally) reverse this process.
>
> Your example introduced an undefined value to flag elements that
> shouldn’t be written to. I don’t see that as practical, and I don’t
> think you did either.

Not really ;-)

> If we want an array put filter to be able to
> overwrite a subset of the elements of an array I would prefer a slightly
> different approach.
>
> I’m also thinking of a put filter where you can give individual array
> index values to be written to, so to create a filter that only
> overwrites specific elements of the target array you might want to
> specify a channel filter as
>
> pvname.{"subset":[0,1,2,4,6,10,12,16,18]}
>
> The idea is that there might not be an arithmetic slice/stride pattern
> to the index values (there is a pattern to the above index list though,
> in case anyone wants a little puzzle).

Obviously you are talking about the ranks of the n-th distinct values of
the Bernoulli denominators in the sequence of the denominators of the
Bernoulli numbers... ;-)))

(https://oeis.org/A248614)

Seriously. Such an array filter makes sense for get/monitor, too, and
thus would best be implemented as an extension/alternative form of the
existing array filters.
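
For comparison, the existing "arr" filter addresses an arithmetic
slice, so an explicit index list could be offered alongside it. A
hypothetical sketch of the two spellings ("subset" is the made-up key
from this thread; "s"/"i"/"e" is the existing arr filter syntax):

```
pvname.{"arr":{"s":0,"i":2,"e":18}}        # existing: start/increment/end
pvname.{"subset":[0,1,2,4,6,10,12,16,18]}  # proposed: explicit index list
```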

> To implement this, the filter
> would have to convert the incoming data array into a data structure that
> contains a list of index/value pairs. It then becomes easy to chain
> multiple subset filters, since a downstream filter that gets one of
> these data structures instead of a flat array can just delete entries
> from the list that came from upstream.

However, this doesn't fit into the existing API for filters. The only
thing we can currently pass from one filter to the next is a db_field_log.

So the logical way to allow passing that information is to add a member
"long *pindices" to db_field_log (more specifically: to dbfl_ref). If
this is NULL, the array data is interpreted as before. Otherwise we
expect it to point to an array of indices with the same number of
entries as the data pointed to by pfield (i.e. no_elements). The field
data and indices are then
interpreted as a "sparse array", i.e. as a flattened version of the list
of (index,value) pairs.

This is more memory efficient than a bit vector to indicate definedness
and also compatible with the normal array representation. It is also much
more efficient than using a list of pairs.
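
As a minimal sketch of that interpretation (hypothetical types and
names; this is not the real dbfl_ref layout, and the element type is
fixed to double just for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sparse field log: when pindices is non-NULL,
 * (pindices[i], pfield[i]) form a flattened list of (index, value)
 * pairs; when it is NULL the data is a plain dense array, as today. */
typedef struct {
    long    no_elements;  /* number of elements / pairs */
    long   *pindices;     /* NULL => dense array        */
    double *pfield;       /* element data               */
} sparse_log;

/* Scatter a (possibly sparse) log into a dense destination array. */
static void sparse_apply(const sparse_log *plog, double *dest, long n)
{
    long i;
    if (!plog->pindices) {                 /* dense: plain copy */
        for (i = 0; i < plog->no_elements && i < n; i++)
            dest[i] = plog->pfield[i];
        return;
    }
    for (i = 0; i < plog->no_elements; i++) {
        long ix = plog->pindices[i];
        if (ix >= 0 && ix < n)             /* ignore out-of-range */
            dest[ix] = plog->pfield[i];
    }
}
```

Chaining then stays cheap: a downstream subset filter only has to drop
the pairs whose indices it does not select, without touching the rest.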

The only difficulty is that we cannot readily use the existing array
conversion routines for such sparse arrays because they require that
array elements are contiguous. I guess a straightforward solution is to
loop over the indices and call the appropriate dbFastConvert routine on
each iteration.
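
That loop could look roughly like this (a generic per-element callback
stands in for Base's dbFastConvert routine tables; the float-to-long
conversion is just an example):

```c
#include <assert.h>

/* Stand-in for a dbFastConvert-style routine: convert exactly one
 * source element into one destination slot. */
typedef void (*convert_fn)(const void *src, void *dest);

/* Example conversion: float -> long, rounding to nearest. */
static void cvt_float_to_long(const void *src, void *dest)
{
    float f = *(const float *)src;
    *(long *)dest = (long)(f + (f >= 0 ? 0.5f : -0.5f));
}

/* Convert a sparse source array element by element, scattering each
 * converted value to its target index in the destination. */
static void sparse_convert(const float *src, const long *pindices,
                           long no_elements, long *dest, convert_fn cvt)
{
    long i;
    for (i = 0; i < no_elements; i++)
        cvt(&src[i], &dest[pindices[i]]);
}
```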

> Note that I wouldn't expect this to be efficient,

The solution I sketched above is no less efficient than what the array
filter does now in the case where the increment is greater than 1.

> and I’m still not
> completely sure who really wants/needs single put filters let alone
> chained ones, but I’m happy to explore these concepts in case they do
> prove useful.

Remember the recent discussion about designing databases / templates in
a more modular style. The idea is to be able to gather information from
a multitude of template instances (possibly spread over multiple IOCs)
in a single array. With an array put filter, each instance can write
information (e.g. status) to a designated index. We can then do further
calculations on this array, such as a summary status, an average over
analog values, etc.

> A very interesting idea that I’ve just had for a get and put filter is
> one that transports JSON objects, so a client can atomically read from
> and write to multiple fields of the addressed record through it say. The
> filter would change the native data type of the channel into a long
> string (i.e. a large char array), and allow the user to specify a set of
> fields to be read during get/monitor. The put JSON object would be
> allowed to specify a different set of fields to write to, something like
> this perhaps:
>
> caput -S 'ai.{"json":{"fields":["RVAL","ESLO","EOFF","LINR","VAL"]}}' \
>   '{"LINR":"LINEAR", "EGUL":-10, "EGUF":10, "EGU":"Volts", "PROC":true}'
>
> This could also allow writing subsets of an array because JSON supports
> null as a first-class value, so:
>
> caput -S 'wf.{"json":{}}' '{"VAL":[null, null, 3, 4, 5, null, null]}'
>
> or maybe even something like:
>
> caput -S 'wf.{"json":{}}' '{"VAL[1:2:9]":[1, 3, 5, null, 9]}'
>
> but that might be a little tricky to implement.
>
> Before going too far with this we should probably consider whether it
> would make sense for any internal APIs to use pvData objects, and how
> this might interface with QSRV. I suspect not, but the answer isn’t
> obvious to me yet.

I think this is an interesting avenue to explore, but it goes quite a
bit beyond what I want to achieve.

> It doesn’t seem useful to be able to chain anything with this “json” filter.

Yup.

To summarize the discussion so far:

* there are stateful filters, for which chaining the same sort of filter
multiple times cannot be reduced to a single filter

* there may be valid use cases for actually doing that

* we may want to extend the array filter syntax to allow ad-hoc
specification of a subset of indices

* this could be implemented efficiently by adding support for sparse
arrays to db_field_log (i.e. add "long *pindices" to dbfl_ref)

* the same trick would allow us to support chaining of array put filters

* for advanced filter plugins db_field_log may still be too limited

* meaning that either we need to add more variants
(dbfl_type_multi_field?), or the ability for a filter plugin to announce
that it does not support chaining (with itself or other filter plugins)

Cheers
Ben
--
...it is impossible to speak of a depoliticized economy as the liberals
do, or of a separation between economic exploitation and political
oppression as the Marxists articulate. The basic distinction, rather, is
between power and creativity...
             -- Jonathan Nitzan and Shimshon Bichler: Capital as Power


