Experimental Physics and Industrial Control System
> Nevertheless, I can show that interfacing at the lowest level will be
> difficult either.
I meant to say...
Nevertheless, I can show that interfacing at the lowest level will not be
significantly harder. Recall that with data access we interface a
particular class once and then we can reuse this code in many different
situations.
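
For example, here is a toy sketch of what "interface once, reuse
everywhere" means. The names below are illustrative only, not the
actual data access API:

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    // Toy stand-in for a property catalog: property name -> getter.
    // Hypothetical; the real data access interfaces are richer.
    using Catalog = std::map<std::string, std::function<double()>>;

    struct GraphicDouble {
        double value = 0, upperLimit = 10, lowerLimit = 0;
        // Interface the class once ...
        Catalog catalog() const {
            return {{"value",      [this] { return value; }},
                    {"upperLimit", [this] { return upperLimit; }},
                    {"lowerLimit", [this] { return lowerLimit; }}};
        }
    };

    // ... and any catalog-consuming code is reused unchanged.
    void dump(const Catalog &c) {
        for (const auto &p : c)
            std::cout << p.first << " = " << p.second() << '\n';
    }

    int main() {
        GraphicDouble gd;
        dump(gd.catalog());
    }
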
> -----Original Message-----
> From: Jeff Hill [mailto:[email protected]]
> Sent: Thursday, June 23, 2005 4:46 PM
> To: 'Marty Kraimer'; 'EPICS Core Talk'
> Subject: RE: V4 design issue: Should primitive data types have well
> defined precisions?
>
>
> > However, the request to show a hello world example really made me
> > start wondering if this was a good decision.
>
> There will always be higher level layers and built-in data types already
> interfaced via data access. Therefore "caPut(chan,"Hello World")" and
> "caGet(chan,anInstanceOfGraphicDouble)" will be possible.
>
> Nevertheless, I can show that interfacing at the lowest level will be
> difficult either. Of course that will need to be demonstrated, and I will
> need to provide some examples - working on that.
>
> > For primitive data types (int, short, float, etc.) the data types for
> > the two propertyCatalogs do not have to be the same. dataAccess provides
> > conversions between the primitive data types.
>
> And also range incompatibility detection.
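>
> To make that concrete, a toy illustration of conversion with range
> incompatibility detection (illustrative only; not dataAccess's actual
> conversion machinery):
>
>     #include <iostream>
>     #include <limits>
>
>     // Toy check, not the dataAccess code: convert double -> short,
>     // flagging range incompatibility.
>     bool convertWithRangeCheck(double in, short &out) {
>         if (in < std::numeric_limits<short>::min() ||
>             in > std::numeric_limits<short>::max())
>             return false;              // range incompatibility detected
>         out = static_cast<short>(in);
>         return true;
>     }
>
>     int main() {
>         short s = 0;
>         std::cout << convertWithRangeCheck(1234.0, s)  // 1: fits
>                   << convertWithRangeCheck(1e9, s)     // 0: out of range
>                   << '\n';
>     }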
>
> > dataAccess does not define the primitive types. This is left to the
> > implementation. For the existing C++ implementation the primitive types
> > are: char, signed char, unsigned char, short, unsigned short, long,
> > unsigned long, float, and double. The precision of each of these types
> > is not specified by dataAccess.
>
> Interfaces for more complex types such as time stamps, multi state
> (enumerated), string, and subordinate catalogs are also supported.
> Multidimensional arrays of the above are also supported.
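>
> Roughly the data shapes involved (the type names here are hypothetical,
> not the actual interfaces):
>
>     #include <cstdint>
>     #include <map>
>     #include <string>
>     #include <vector>
>
>     // Illustrative type shapes only.
>     struct TimeStamp { std::uint32_t sec = 0, nsec = 0; };
>
>     // Multi state (enumerated): an index plus its choice strings.
>     struct Enumerated {
>         std::uint16_t index = 0;
>         std::vector<std::string> states;
>     };
>
>     struct ExamplePv {
>         TimeStamp stamp;
>         Enumerated severity;
>         std::string units;                       // string property
>         std::vector<std::vector<double>> image;  // multidimensional array
>         std::map<std::string, double> display;   // subordinate catalog
>     };
>
>     int main() { ExamplePv pv; (void)pv; }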
>
> > It is my belief that users will not want to create property catalogs
> > for everything they want to transfer.
> > What I think will happen is that "convenience" layers will be built on
> > top of dataAccess.
>
> No doubt interfaces for all of the commonly used property sets will be
> provided in libraries. That's really easy for users. Nevertheless, it
> won't be difficult to interface new data types with new permuted sets
> of properties.
>
> > Unless standard convenience layers are created, many incompatible
> > layers will be created.
>
> Recall that incompatibility will occur *only* if we don't have agreement
> between client and server on the following three issues (a sketch of what
> such agreement might boil down to follows the list). These issues are
> IMHO orthogonal to whatever convenience layers might exist in the system.
>
> 1) The client and server must agree on the minimum set of supported
> properties for a particular class of process variable.
>
> 2) The client and server must agree on the purpose of each property
> supported by the server for a particular class of process variable.
>
> 3) The client and server must agree on the maximum dynamic range allowed
> for each property supported by the server for a particular class of
> process variable.
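>
> For one PV class the agreement might look something like this (names
> and limits invented purely to illustrate points 1-3):
>
>     #include <cstddef>
>
>     // Hypothetical standard for an "analog" process variable class.
>     namespace analogPvStandard {
>         // 1) the minimum set of supported properties
>         enum class Property { value, units, displayLow, displayHigh };
>
>         // 2) the purpose of each property is fixed by the standard
>         //    (value = current reading, units = engineering units, ...)
>
>         // 3) the maximum dynamic range allowed for each property
>         constexpr double valueMin = -1e30, valueMax = 1e30;
>         constexpr std::size_t unitsMaxChars = 16;
>     }
>
>     int main() {}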
>
> As I recall, we decided at the last meeting that Bob would draft these
> standards, based on his exposure to a range of different projects.
>
> >
> > Since dataAccess does not define primitive data types, application code
> > has no way to guarantee precision for data without some conventions on
> > top of dataAccess. Thus if the application uses the type long it does
> > not know if this is 32 bits or if it is 64 bits. For network
> > applications it certainly seems desirable to have a way to guarantee
> > precisions.
>
> We have to define standard maximum dynamic ranges for each property for a
> particular class of process variable. This is how we ensure compatibility.
>
> >
> > At each step in this transfer data conversions may be performed. For
> > example the data might start as a double, be converted to an integer,
> > and then back to a double. dataAccess itself does not provide any way to
> > know.
>
> Not true in a CA context. CA will transfer data between the primitive
> types of the server and the client. There will be no third intermediate
> type.
>
> >
> > Thus we could look at the transfer as A sending well structured data
> > into a cloud and B receiving well structured data from the cloud.
> > Neither side knows what data transformations were made inside the cloud.
> >
>
> Not true. The client knows that CA will not transform away information
> if it chooses a primitive type large enough to hold the standard dynamic
> range of a standard property for a particular class of PV.
>
> HTH
>
> PS: Will dataAccess result in some increase in overhead? When
> PropertyCatalog::find() is used the answer is, most definitely, yes.
> However, consider that when we started doing this we were running on
> 20 MHz computers; nowadays, new processors are hitting 3 GHz. I have
> always been critical of new OSes that run slower on new processors.
> Nevertheless, one has to evolve if one doesn't want to be an
> evolutionary dead end. Considering the two-orders-of-magnitude increase
> in processing power, I think some well-bounded additional overhead is
> warranted given the far-reaching increases in flexibility (IMHO).
>
> Jeff
>
> > -----Original Message-----
> > From: Marty Kraimer [mailto:[email protected]]
> > Sent: Thursday, June 23, 2005 12:54 PM
> > To: EPICS Core Talk
> > Subject: Re: V4 design issue: Should primitive data types have well
> > defined precisions?
> >
> >
> >
> > Ralph Lange wrote:
> >
> > > Dalesio, Leo `Bob` wrote:
> > >
> > >>> From a functional point of view - the DA approach gives you total
> > >>> flexibility.
> > >>
> > >> What does it do to the complexity of the implementation and the
> > >> performance? Does it have an impact on the server? Or just the client
> > >> side?
> > >>
> > >>
> > > I'm amused to see that we start discussing basic requirements and
> > > properties of an introspective data interface again. I thought this
> > > discussion was held and ended five years ago. Well....
> >
> >
> > Sounds like we should have had more design reviews about dataAccess over
> > the last five years.
> >
> > As far as I know, before the meeting at SLAC (April 2005), no decision
> > was made that dataAccess would be an integral part of EPICS. I think we
> > did agree that:
> > 1) performance tests would be done.
> > 2) then we would decide if dataAccess would be used in the portable CA
> > server
> > 3) then we would decide if iocCore should use the portable server
> > instead of rsrv.
> >
> > If either 2) or 3) was decided, I am not aware of it.
> >
> > At the April meeting at SLAC it did seem to be decided that we would use
> > dataAccess as an integral part of epicsV4. However, the request to show
> > a hello world example really made me start wondering if this was a good
> > decision.
> >
> > I think the following is a correct description of the main features of
> > dataAccess and the V4 CA client interface.
> >
> > VERY BRIEF DESCRIPTION
> >
> > dataAccess is a way to transfer data between two data sources.
> > Each source implements a propertyCatalog for accessing its data.
> > propertyIds identify data that the two sources have in common.
> > Only data with common propertyIds is transferred.
> >
> > END VERY BRIEF DESCRIPTION
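> >
> > As a toy model (names are hypothetical; a real propertyCatalog is an
> > interface, not a map):
> >
> >     #include <iostream>
> >     #include <map>
> >     #include <string>
> >
> >     using Catalog = std::map<std::string, double>;
> >
> >     // Only properties the two sides have in common are transferred.
> >     void transfer(const Catalog &from, Catalog &to) {
> >         for (auto &p : to) {
> >             auto it = from.find(p.first);
> >             if (it != from.end()) p.second = it->second;
> >         }
> >     }
> >
> >     int main() {
> >         Catalog source{{"value", 5.0}, {"hihi", 10.0}};
> >         Catalog sink{{"value", 0.0}, {"egu", 0.0}};  // "egu" not common
> >         transfer(source, sink);
> >         std::cout << sink["value"] << '\n';  // 5
> >     }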
> >
> >
> > BRIEF DESCRIPTION OF dataAccess
> >
> > Any kind of data can have a propertyId associated with it, e.g.
> > primitive data, a string, an array, or a set of other propertyIds.
> >
> > A data repository implements a propertyCatalog via which its data can
> > be read or written. A propertyCatalog provides access to data for a set
> > of propertyIds.
> >
> > Code that wants to write data to the data repository implements a
> > propertyCatalog for the data it provides. The writer then calls
> > something, e.g. CA client code, that can transfer data to the data
> > repository. The writer's propertyCatalog is used to get data from the
> > writer. The data repository's propertyCatalog is used to modify the data
> > in the repository.
> >
> > Code that wants to read data from the data repository implements a
> > propertyCatalog for a place to put data received from the data
> > repository and then calls code to get the data. The data repository's
> > propertyCatalog is used to get data from the repository and the reader's
> > propertyCatalog is used to give the data to the reader.
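> >
> > In the same toy terms (hypothetical names again), the write and read
> > directions are symmetric:
> >
> >     #include <iostream>
> >     #include <map>
> >     #include <string>
> >
> >     using Catalog = std::map<std::string, double>;
> >
> >     void transfer(const Catalog &from, Catalog &to) {
> >         for (auto &p : to) {
> >             auto it = from.find(p.first);
> >             if (it != from.end()) p.second = it->second;
> >         }
> >     }
> >
> >     int main() {
> >         Catalog repository{{"value", 0.0}};
> >         Catalog writer{{"value", 7.0}};
> >         transfer(writer, repository);  // writer -> data repository
> >         Catalog reader{{"value", 0.0}};
> >         transfer(repository, reader);  // data repository -> reader
> >         std::cout << reader["value"] << '\n';  // 7
> >     }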
> >
> > If the reader/writer and the data repository are not in the same address
> > space, network communications are used to transmit the data. Thus
> > intermediate data repositories, which may just be network buffers, are
> > involved. This is transparent to the reader/writer and the data
> > repository.
> >
> > The propertyCatalog provided by the reader/writer and the
> > propertyCatalog supplied by the data repository do not have to match.
> > Only data with identical propertyIds is transferred between the
> > propertyCatalogs.
> >
> > For primitive data types (int, short, float, etc.) the data types for
> > the two propertyCatalogs do not have to be the same. dataAccess provides
> > conversions between the primitive data types.
> >
> > dataAccess does not define the primitive types. This is left to the
> > implementation. For the existing C++ implementation the primitive types
> > are: char, signed char, unsigned char, short, unsigned short, long,
> > unsigned long, float, and double. The precision of each of these types
> > is not specified by dataAccess.
> >
> > In order to transmit data over the network, a set of primitive data
> > types must be defined precisely. For example, the number of bits in each
> > supported integer type must be defined. From the viewpoint of dataAccess,
> > only the network layer needs to know this representation. The code that
> > interfaces with the network layer does not need to know this detail.
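> >
> > For example, the network layer might pin exact widths the way the C99
> > fixed-width types do (a sketch only; no wire format is being proposed
> > here):
> >
> >     #include <cstdint>
> >
> >     // Illustrative: exact widths a wire representation could
> >     // standardize on.
> >     struct WireScalars {
> >         std::int16_t i16;  // exactly 16 bits
> >         std::int32_t i32;  // exactly 32 bits
> >         std::int64_t i64;  // exactly 64 bits
> >         float f32;         // assumed IEEE 754 single
> >         double f64;        // assumed IEEE 754 double
> >     };
> >
> >     static_assert(sizeof(float) == 4, "32-bit float assumed");
> >
> >     int main() {}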
> >
> > END BRIEF DESCRIPTION.
> >
> > The rest of this message is comments.
> >
> > In theory a client could implement any propertyCatalog it wants and a
> > data source could implement any propertyCatalog it wants. dataAccess can
> > be used to pass data between the two data stores. Only data with
> > matching propertyIds is transferred.
> >
> > It is my belief that users will not want to create property catalogs
> > for everything they want to transfer.
> > What I think will happen is that "convenience" layers will be built on
> > top of dataAccess.
> > Unless standard convenience layers are created, many incompatible
> > layers will be created.
> >
> > dataAccess does not define basic primitive data types such as int16,
> > int32, int64. This means that unless something else besides dataAccess
> > defines such types, two data sources have no way to guarantee that their
> > primitive data types are compatible. In fact for the exact same
> > propertyId one source may store the data as an int32 and the other side
> > as a float64. With dataAccess alone they have no way of knowing, except
> > by some other means such as conventions about propertyIds.
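> >
> > Concretely, the hazard looks like this (toy code, nothing
> > dataAccess-specific assumed):
> >
> >     #include <cstdint>
> >     #include <iostream>
> >
> >     int main() {
> >         // Both sides agreed on propertyId "value", nothing more.
> >         double serverValue = 2.75;  // one side stores float64
> >         std::int32_t clientValue =
> >             static_cast<std::int32_t>(serverValue);  // other side: int32
> >         std::cout << clientValue << '\n';  // prints 2, silently
> >     }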
> >
> > Since dataAccess does not define primitive data types, application code
> > has no way to guarantee precision for data without some conventions on
> > top of dataAccess. Thus if the application uses the type long it does
> > not know if this is 32 bits or if it is 64 bits. For network
> > applications it certainly seems desirable to have a way to guarantee
> > precisions.
> >
> > Let me give another way of looking at dataAccess.
> >
> > Data is transferred from A to B via the following:
> >
> > A has the data in some structured form it understands. It creates a
> > propertyCatalog for accessing the data.
> > B wants the data in some structured form it understands. It creates a
> > propertyCatalog that can access the data.
> >
> > Some code uses the propertyCatalog supplied by A to get the data,
> > possibly passes it through intermediate data repositories such as
> > network buffers and gateways, and finally some code uses the
> > propertyCatalog supplied by B to give the data to B.
> >
> > At each step in this transfer data conversions may be performed. For
> > example the data might start as a double, be converted to an integer,
> > and then back to a double. dataAccess itself does not provide any way to
> > know.
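> >
> > A worked example of that loss (plain C++; nothing dataAccess-specific
> > is assumed here):
> >
> >     #include <iostream>
> >
> >     int main() {
> >         double a = 1.5;
> >         int i = static_cast<int>(a);  // converted to an integer: 1
> >         double b = i;                 // back to a double: 1.0, not 1.5
> >         std::cout << a << " -> " << b << '\n';
> >         // dataAccess alone gives neither end a way to detect this.
> >     }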
> >
> > Thus we could look at the transfer as A sending well structured data
> > into a cloud and B receiving well structured data from the cloud.
> > Neither side knows what data transformations were made inside the cloud.