EPICS Controls Argonne National Laboratory

Experimental Physics and
Industrial Control System

Subject: Re: Proposals for the Next Generation CA
From: Benjamin Franksen <[email protected]>
To: EPICS Techtalk <[email protected]>
Date: Wed, 13 Dec 2000 21:37:58 +0100
Jeff Hill wrote:
> 
> > The natural association between the fields' names and the context in
> > which they have a specific meaning is completely lost outside the
> > database (in a technical way, that is). Instead of using it, CA
> > introduces a separate kind of semantic, the application / dbr type,
> > which is independent from the database definition.
> 
> This isolation was introduced to facilitate a tool
> based approach. Our specific intent was that the client side tools
> should not need to be modified when a new record type was added to
> the system.

I don't think that with my approach this would be necessary. It is not
necessary if one introduces stronger conventions for the field names,
nor if one chooses the other solution, where the records post a list of
(attribute type, field name) pairs on a successful connection. In either
case the client can safely ignore any of the offered attribute pvs.

The *client* has the final decision about which pvs to group into a
request, subject to the restriction that these pvs must be provided by
the same server instance.

New record types will therefore *not* break existing clients.

> Moving primitive conversions from inside the server library to inside
> the client library is a goal for future versions. We do not, however,
> plan to place this burden on either the client or server side tools.

Of course not. This will be done by the library. The reason I came up
with the conversion is that I wanted to get rid of dbr types
altogether, which made it necessary to restrict conversion to the
client side.

> > Eliminate the need (3) for event types:
> >
> > - If a client monitors a strongly coupled group of related pvs,
> >   it may specify which of these pvs shall be triggering an event
> >   and which shall not. For instance, an alarm handler will specify
> >   the corresponding STAT and SEVR as trigger pvs. A display manager
> >   like the dm2k will also specify VAL.
> 
> It is a goal for future versions to allow triggering of monitor
> subscription updates for channel xyz when channel pdq changes.

This would be automatically possible without any additional effort, if
we follow my proposal. The client may combine pvs into a request which
are not all from the same record. (Though consistency is guaranteed by
the server only for those pvs which come from the same record instance.)

> With the approach that Ben suggests the set of associated attributes for a
> particular event type is known only at runtime (not at compile time).

Yes, this is exactly my point. I want to enable the clients to compose
requests in an untyped way at run-time according to

(1) their specific needs, and
(2) the server's capabilities.

> This makes it particularly challenging to come up with an efficient
> run time solution.

It is possible that we will lose a bit of performance here. This is
similar to the trade-off of using C++ virtual functions and polymorphic
types instead of C with plain functions and static types.

See below for additional details on the subject of performance.

> Likewise, the interface between the server tool
> and the server library is further complicated because we must arrange
> for a consistent set of attributes. 

Currently, the really complex things are hidden inside the gdd, that is
used to transport the data in casPV's read, write and postEvent methods.
This greatly simplifies the interface to the server tool. But:

- The server tool's overridden read and write methods must be able to
handle *any* kind of composite data.

- Whenever the server tool wants to post an event, it must pack *all*
possibly requested attributes into a composite data object before
posting this data package.

In my proposal, the data transported by read, write, and postEvent is
always the atomic value of the pv, no more, no less.

No additional functions are needed. So, in fact, the interface becomes
even simpler. Of course, the server library will have a lot more to do
internally.

> That is, the {time stamp,
> value, alarm status, alarm severity} set must all come from the same
> instance of record processing. 

Yes. When the record posts an event, the server library must check
internally whether this triggers some pending request. If it does, the
server library will immediately (that is, during the same call)
retrieve the values of all fields of this record that are requested by
pending requests triggered by this pv.

> Since the server library is discovering
> that an event trigger has occurred, and then at its own (probably lower)
> thread priority deciding to go out and capture the necessary data it
> also becomes more difficult to make certain that the data and the trigger
> are consistent. 

See previous paragraph: it will *not* happen at its own (lower) thread
priority, but inside the post_events() call and thus at record
processing priority. At least in the normal case, in which the request
contains only fields from one record instance, this is all that has to
be done to get all the values in a consistent manner.

Things will be a bit more complicated in case the request contains pvs
of different records. But this will not be the routine case.

> Furthermore, opening independent channels
> for each PV attribute might significantly increase memory and
> CPU consumption by the system.

Yes. This is in fact a problem. It is related to the performance problem
mentioned above.

One possible solution is based on the following observation:

The usual client will be happy with one or at most two tailored
requests per pv. It will usually not issue requests for pv groups with
permanently changing members.

So the client library supplies a connection handle to the client tool
not for single pvs but for requested groups of pvs (which might contain
only one member). On the back end, there is not actually a channel for
each attribute pv, but rather one for the combined request.

That means the process of building up a connection (channel) gains an
additional negotiation step:

1st step: Client requests to connect to the pv. Server confirms
existence.

2nd step: Client requests a combination of pvs to be grouped into a
group-connection. Server checks the existence of the pvs (only necessary
if not a subset of the suggested ones), links their addresses into
internal structures, and answers with a group-connection acknowledge.

Subsequent requests to get, put or monitor will refer to the
group-connection handle.

> > - Clients are free to specify any desired value as the monitor deadband,
> >   as long as the corresponding pv is triggering and its primitive type
> >   is a number.
> > - On successful connection to a pv, the server may suggest related pvs
> >   that contain suitable deadbands such as the corresponding
> >   MDEL and ADEL fields, if such fields exist.
> 
> Only the server tool can decide on its own when its data
> has changed, and therefore an event should be posted. Otherwise,
> we are forced to poll the data from the server and introduce
> additional overhead and also propagation delays.

Yes, only the server tool knows when its data has changed.
And yes, polling is out of the question.

Let me explain how I think it can work:

Currently we have different deadbands for log / value events. The
deadband that is actually used is a field of the record, i.e. a related
pv.

What I propose is: In addition to the usual arguments to ca_add_event(),
the client *may* specify some non-standard deadband either directly or
by giving some pv name from which the server should get the deadband.

How, then, does the server tool know when to post an event, since it
has no idea what the actual deadbands are?

The answer is simple: it signals the server library that an event may
be posted, whenever it sees fit. For a record, this is whenever it is
processed. This happens, regardless of any deadbands, by calling a
function from the server library.

This library function decides whether an event is actually posted
according to the current value of the deadband-pv of each pending
request. To be able to do this, the server library must always cache the
last posted value.

> It is however desirable in future versions to allow the client to
> specify a wider deadband than what is used by the server tool to decide
> if its PV has changed. This is already on the list.

And would be included in what I proposed without additional effort.

> > (f) Writing server tools will be simplified drastically (this is one of
> > the reasons I began to think about this).
> 
> Quite a bit of effort went into designing a server tool interface
> which would be both easy to understand and sufficiently general
> so that EPICS protocols could be grafted onto almost any system. You
> need only implement a PV existence test, a PV factory, read and write
> routines for each of your PV types, and then post events when your
> PVs change. 

Yes, but this does not satisfy, for instance, medm.

You say "post events when your PVs change", as if posting the value
itself would be of any use without also posting every possible
attribute.

See my answer to Pete for more details on this point. The possibilities
for new clients would be amazing.

> But perhaps this over simplifies the task. If what you want is a perfect
> emulation of the EPICS function block database, then there is of course
> quite a bit that you must implement, and my feedback from users
> has uniformly implicated GDD as a source of some confusion.
> 
> I suspect that your desire to eliminate PV attributes from the interface
> may result from frustration with GDD.

Yes, but this is only part of the story. I am aware of the efforts to
replace gdd with something better, and I greatly appreciate it. But:

The reason why I make all these proposals is precisely that I saw that
my problems were *not* primarily caused by the interface definition,
but rather by the overly statically typed CA protocol definition itself.

Ben


Replies:
Re: Proposals for the Next Generation CA Ralph Lange
References:
RE: Proposals for the Next Generation CA Jeff Hill
