> From: Marty Kraimer [mailto:email@example.com]
> >Jeff Hill wrote:
> >Hello Marty,
> >Here are my comments.
> >o We can generate C and Java bindings for data
> > access as needed.
> Don't these, or at least the set that reproduces V3
> functionality, need to be defined now?
They will be nearly identical to the C++ interface. For example,
where there is a pure virtual function in C++ there will be a
parallel function in C with the same name, except that the class
name prefix is added. When there are overloaded C++ functions
then a type name appendix will also be added to the C function
name. And, of course, an additional parameter will need to be
passed for the object pointer.
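As a sketch of the C binding convention just described (the class and function names here are hypothetical stand-ins, not the actual Data Access interface): a class-name prefix on each function, a type-name appendix to distinguish overloads, and an explicit object pointer as the first parameter.

```cpp
#include <cassert>

// Hypothetical C++ interface, standing in for a Data Access class.
class daValue {
public:
    virtual ~daValue() {}
    virtual int read() = 0;           // pure virtual
    virtual void write(int v) = 0;    // overloaded ...
    virtual void write(double v) = 0; // ... in the C++ interface
};

// Parallel C binding: class-name prefix, type-name appendix for the
// overloads, and the object pointer passed explicitly.
extern "C" {
    int  daValueRead(daValue *pValue)                  { return pValue->read(); }
    void daValueWriteInt(daValue *pValue, int v)       { pValue->write(v); }
    void daValueWriteDouble(daValue *pValue, double v) { pValue->write(v); }
}

// Minimal concrete implementation, for demonstration only.
class simpleValue : public daValue {
    int stored;
public:
    simpleValue() : stored(0) {}
    int read() { return stored; }
    void write(int v) { stored = v; }
    void write(double v) { stored = static_cast<int>(v); }
};
```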
In Java, the interfaces could be nearly identical except that
unsigned types can't be supported. That will not be a problem as
long as the unsigned type's numerical magnitude is convertible to
the destination signed type in Java. If not, an error is returned.
> The problem I have with Data access is that it seems to be a
> solution to a set of undefined requirements.
Note that I have given multiple talks over the past few years
where I specifically link Data Access capabilities to functional
requirements I (and I assume Ralph and others) perceive related
to new capabilities that might be added to EPICS in the areas of
expanded meta-data, message based devices, multi-parameter
synchronization, and data acquisition. We have held design
reviews, which you attended, that were specifically devoted to
data access. I did not hear anyone say that it wasn't useful to
implement data access.
> What will the V4 Gateway be? How will it store data?
> I think that answering these two questions will go a long way
> to deciding if and how Data access is a necessary component.
When Jim wrote GDD he was thinking about implementing what the
gateway needs when there is a user extensible meta-data set. The
gateway must be able to define storage at runtime to hold
properties whose structure is defined by some other component in
the system. That level of generality is in GDD, but at an
efficiency cost compared to applications that have a compile time
fixed meta-data set and may be interfaced more directly through
data access. We all understand that the isolation the gateway
provides also comes with costs: a tendency to become a
bottleneck, a single point of failure, and increased overhead
compared to custom-coded applications. Therefore, the runtime
costs associated with GDD look less onerous in a gateway context,
and we must consider that the gateway already has much of the
functionality *it* needs in GDD. That level of overhead isn't
appropriate for many of the CA interfacing agents, but it may be
entirely appropriate for the gateway. If there were funding and
inclination we could assess whether the gateway might be revised,
taking advantage of the freedom data access presents, to be more
efficient and less prone to memory fragmentation when arrays are
transported. Otherwise, another alternative would
be to write a GDD interface for data access and not make
extensive changes in the gateway.
> >o Data access supports safe conversion between all of the
> >primitive types available in C. If the source is out of range
> >for the destination the operation is not performed and an error is
> >returned to the user. This has already been implemented.
> Is this what we want?
> Let me give an example
> An IOC has a field that is an unsigned 32 bit integer.
> A java application asks for the fields as a signed 32 bit
> integer. The java application only uses the field as a
> bit mask.
> Should CA return an error if the value happens to have the high
> bit = 1?
> I say no.
I say yes. No doubts from my perspective. How would the IOC know
what the client is using the data for? This isn't a "close enough
is good enough" environment. We are coding a control system. We
don't want intermittent failures that occur only when the data
happens to take on certain values.
If the bit field is an unsigned number and higher order bits are
set then this would be a very high magnitude unsigned number. If
this exceeds the magnitude of the signed type in the Java client,
and if there is even the slightest possibility that any Java
client might use the result as a signed number, and not as a bit
field, then a disaster will eventually occur. Code that sources
bit fields should provide a way to specify the starting bit and
the field length, so that bit fields anywhere within a 32 bit
unsigned number can be accessed; a Java client encountering
conversion problems could then request the 32 bit field as two 16
bit fields with complete safety and no loss of information.
> For an IOC perhaps we should just not support conversions that
> can cause problems. This is what I proposed in "EPICS
> V4 epicsTypes".
I must admit that I am perplexed by the line of reasoning here.
Data access holds a compromise position allowing conversion to
occur only when the source magnitude is in range for the
destination. You seem to first say that we should allow unsafe
conversion between unsigned and signed types, and then you say
that it should never be allowed. My perplexity occurs because I
don't see how both arguments together (which are opposite) refute
the compromise position? ;-)
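A minimal sketch of the range-checked conversion policy described above (illustrative only, not the actual Data Access code): the assignment happens only when the source magnitude fits the destination type, otherwise an error is reported and the destination is left untouched.

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Sketch of the compromise policy: convert only when the source
// magnitude is in range for the destination; otherwise refuse and
// do not modify the destination. (Not the actual DA implementation.)
bool safeConvert(uint32_t src, int32_t &dest)
{
    if (src > static_cast<uint32_t>(std::numeric_limits<int32_t>::max()))
        return false;                    // out of range: not performed
    dest = static_cast<int32_t>(src);
    return true;
}
```

A value with the high bit set, such as 0x80000000, exceeds the signed 32 bit range and is refused rather than silently reinterpreted as a negative number.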
> Also supporting UTF-8 is a big issue.
The putChar / getChar interfaces in DA's stringSegment interface
respectively receive and return type "int". This was my solution
to the UNICODE issue. It isn't incompatible with a UTF-8
encoding, which is certainly viable and should be considered.
Nevertheless, I'm not certain at this point how or if this should
impact the generality of the daString interface (which supports
multiple encodings and allows for mappings between them). This
needs further thought.
> We do NOT want to create our own complete string library
> including the equivalent of the printf and scanf family.
> But what do we need?
Well, no, I'm not really fond of writing string conversions either.
I postponed this issue with data access by placing the string
conversion functionality in the string storage implementation.
Nevertheless, the issue can't be postponed forever, and the power
of two free list array for a contiguous block of string storage
may be somewhat storage inefficient compared to a scheme based on
fixed sized non contiguous string storage blocks. Perhaps these
worries are unfounded, but nevertheless I am concerned. I am
generally paranoid about any interface that requires dynamic
allocation of contiguous blocks that are not of a fixed size. It
would also be nice to just add a fixed sized block to a list when
extending a string. In contrast, the power of two free list array
approach will require increased overhead for copying the entire
string to the new block, and freeing the old block.
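The contrast described above can be sketched as follows: a string stored as a list of fixed-size, non-contiguous blocks, where extending the string just appends another block to the list, with no copying of the existing contents. This is an illustrative sketch of the idea, not the Data Access string storage implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <list>
#include <string>

// Fixed block size; real code would tune this to the allocator.
const size_t blockSize = 8;

struct stringBlock {
    char data[blockSize];
    size_t used;
    stringBlock() : used(0) {}
};

// A string held as a list of fixed-size blocks. Growing the string
// never copies existing characters; it only appends blocks.
class segmentedString {
    std::list<stringBlock> blocks;
public:
    void append(const char *p, size_t n) {
        while (n) {
            if (blocks.empty() || blocks.back().used == blockSize)
                blocks.push_back(stringBlock());
            stringBlock &b = blocks.back();
            size_t take = std::min(n, blockSize - b.used);
            std::memcpy(b.data + b.used, p, take);
            b.used += take;
            p += take;
            n -= take;
        }
    }
    // Flatten into one contiguous string, for inspection only.
    std::string str() const {
        std::string s;
        for (std::list<stringBlock>::const_iterator it = blocks.begin();
             it != blocks.end(); ++it)
            s.append(it->data, it->used);
        return s;
    }
};
```

A power-of-two free list scheme would instead allocate a larger contiguous block on overflow, copy the whole string across, and free the old block.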
I have been weighing this issue of writing our own string to
number conversions for a while - since last August. I notice that
there
are currently several string to number conversion codes roosting
in libCom which are quite small. I also had a look at some of the
GNU C standard library codes, and string to number conversion
isn't all that hard actually. Sure, the full general
functionality of scanf and printf are hard, but if the format
controlling properties are coming from data access subordinate
properties, and if we consider that EPICS currently only
dynamically adjusts formatting with precision, then perhaps this
is something that might be done in a reasonable amount of code.
This is certainly a well bounded chunk that could be put out on
tech-talk for volunteers. Therefore, perhaps the code could be
obtained for free?
For review the benefit would be freedom to store strings in
non-contiguous fixed sized blocks. Note that codes such as CA
have more runtime string setup and tear down activities compared
to the current state of the database, but even the database may
run into this issue more often once online add and delete are
supported.
> >Struct Lib
> >o This seems to be a subset of the data access interface and
> >associated libraries.
> And perhaps the only subset needed?
Please look at my talks from the Japan EPICS meeting, and at the
EPICS 2010 meeting in Santa Fe, and also my response below.
> >o Indexing structure fields using a field name string may be
> >inefficient for high throughput situations?
> Agreed. The example was only to show that applications that do
> not know
> about the structs can still access elementary fields.
> I am envisioning that structs will be the way to make "atomic"
> access to multiple fields.
> To make the atomic access an application
> will have to work with the struct rather than just
> accessing individual fields.
This is certainly worthy of consideration, but proper decoupling
of sender and receiver data spaces appears to be important for a
tool based approach. Conventional data description compiler based
systems require interfaces of the sender and receiver to be
utterly identical parameter-for-parameter, field-for-field, and
bit-for-bit. The sender and receiver must have the same unique
data structure identifier. If not, no communication is possible.
However, consider for example the requirements for future
implementations of EPICS. In these systems events posted to the
server may have many associated subsystem unique properties.
Clients will rarely need all of them, and there will be many
permutated subsets requested by a range of different clients.
Likewise, clients written in the past should continue to function
if new properties are added to an event. Consider the opposite
situation, where a client is requesting an atomic write of
several properties at once. If the server side agent is modified
to accept a larger number of properties in this interface then
old clients supplying fewer properties should continue to work.
In this situation the server side agent will accept a default
value for any property that isn't supplied.
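That default-supplying behavior can be sketched briefly. The names below (propertyOrDefault, buildRequest, the "value" and "deadband" properties) are hypothetical illustrations, not an actual EPICS interface: the server-side agent reads whatever named properties the client sent and fills in a default for each one that is missing, so an old client that knows about fewer properties keeps working unchanged.

```cpp
#include <cassert>
#include <map>
#include <string>

// A set of named properties as sent by a client.
typedef std::map<std::string, double> propertySet;

// Return the client-supplied value for a property, or the server's
// default when the client did not send it.
double propertyOrDefault(const propertySet &sent,
                         const std::string &name, double dflt)
{
    propertySet::const_iterator it = sent.find(name);
    return it == sent.end() ? dflt : it->second;
}

// The server-side agent assembles an atomic write request from
// whatever was supplied, defaulting the rest.
struct writeRequest {
    double value;
    double deadband;
};

writeRequest buildRequest(const propertySet &sent)
{
    writeRequest req;
    req.value    = propertyOrDefault(sent, "value", 0.0);
    req.deadband = propertyOrDefault(sent, "deadband", 0.1);
    return req;
}
```

An old client that supplies only "value" still produces a complete request; the agent fills in the "deadband" default.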
Furthermore, schemes like XDR require that the data be stored in,
or converted to, a particular contiguous format - a C structure
produced by the XDR compiler. This works fine for a set of simple
scalar properties, but becomes cumbersome for variable length
properties such as strings and arrays. When complex data is
stored in a proprietary format significant overhead can arise.
For example, in limited memory systems it is important to allow
scattered, non-contiguous storage of properties. It must not be
required that random sized dynamically allocated blocks of memory
exist only for the short duration that a complex property is
passed between two different layers in the system. The XDR
approach also tends to be inflexible when it comes to interfacing
with multi-dimensional arrays.
In contrast, Data Access does not enforce a native storage format
and therefore does not suffer from the above limitations, and our
perception is that this is a more flexible, less intrusive, and
better performing approach.
> >o There is no way to introspect the available fields, and
> >values, in an unknown structure (i.e. traverse functionality
> >data access).
> It should be easy to provide introspection for structs. Thus
> generic applications can be written that can access
> arbitrary structs.
Yes, but these ideas seem to propose overlapping functionality
with data access, and to also be influenced by data access which
has been already proposed, design reviewed, and also implemented.
I am getting a vague feeling that anything written in C++ is
presumed, based purely on choice of implementation language, to
be unusable. Hopefully, this is a wrong conclusion.