Experimental Physics and Industrial Control System
> I would argue that this is also mostly valid in C/C++, where
> arrays are always indexed starting with zero.
There are many integer variables/parameters, other than array indexes, that
should never be negative, and that will cause unexpected behavior if they
are allowed to become negative.
> Furthermore, with regard to the efficiency question (only one
> range-check for upper bound instead of two for upper and lower),
> in C/C++ you are always free to apply the zero-cost type cast
> from signed to unsigned, thereby mapping negative numbers to
> large positive ones, and then range-test only for the upper bound.
Such type casts might not be portable. Also, this isn't so much a CPU
efficiency issue as a programming efficiency issue. If it takes time to
write the code to do the check, it's less likely to be written, and strange
and mysterious bugs are more likely to find a place to roost.
> CA overhead is (currently) 16 bytes per channel, IIRC, not counting
> lower-level protocol overhead (TCP/IP).
16 bytes per request and 16 bytes per response, but dropping precipitously
in V4...
> If Java was the only language that didn't use unsigned I would be
> less willing to adapt to it. But my impression is that the use of
> unsigned is coming from C/C++ and getting less popular in more recent
> languages. What will be fashionable in 5 years?
I expect to still be using C/C++ 5 years from now, not because I am enamored
with the New Jersey school of design, but primarily because most widely
distributed system code, most OSes, and the best programming tools are written
in (for) these languages. Yes, Java is making a big splash, but so are Perl
and Python. I understand that most GUI and COBOL programmers are switching
to Java, but that isn't the type of code that I am writing. Programming
language selection is in most shops primarily based on economies of scale, I
think. You have to answer this question: what is your industry sector using?
I don't personally adopt the one-language-is-always-best "pure SNOBOL only"
mindset.
> I wonder if we should think of supporting /unbounded/ integers as
> a native type
This certainly could be considered, and ought to be an interesting read when
I get a chance. However, I am not personally finding the range of a 32-bit
integer to be a constraint, and the 64-bit train is whistling into the
station. Should we need something bigger, we could also hide it in a class,
I suppose?
Jeff
> -----Original Message-----
> From: Benjamin Franksen [mailto:[email protected]]
> Sent: Friday, June 17, 2005 3:21 PM
> To: [email protected]
> Subject: Re: Fundamental Types document / unsigned integers
>
> On Friday 17 June 2005 16:18, Ralph Lange wrote:
> > Marty Kraimer wrote:
> > > The only place we see that V3 epics records need unsigned is for
> > > bit masks.
> > > However there is a big problem with the mask fields in the
> > > mbbXXX records.
> > > The mask field is 32 bits. How do we handle a 64 bit I/O module?
> > > OK with V4 we can make the mask fields be 64 bits.
> > > But how do we handle a 128-bit digital I/O module?
> > > Also a 16 bit digital I/O module has many unused bits.
> > >
> > > A way to handle this is to make the mask fields an array of octets.
> > > The byte and bit order must still be decided but at least any
> > > multiple of 8 bits can be handled.
> >
> > I see your line of argumentation and I do like the idea of handling
> > all bitfield data as arrays of octets. I guess with that trick we can
> > ship anything around that doesn't need to be used in calculations.
> >
> > But what about real unsigned numbers? Like results of an ADC
> > conversion that maps 0-10V to 0-65535 (not that uncommon)? Do we want
> > to use 32bit integers in all these cases wasting ~50% of the
> > bandwidth on CA?
>
> That's what we call a Milchmädchenrechnung (a naive back-of-the-envelope
> miscalculation) in German.
>
> CA overhead is (currently) 16 bytes per channel, IIRC, not counting
> lower-level protocol overhead (TCP/IP).
>
> For scalar values, this dominates your mentioned 2 bytes increase for
> the payload by a factor of 8. Thus the bandwidth increase is rather ~10%
> than your claimed ~50%. If additional properties like timestamp and
> status/severity are requested routinely (as they are, for instance, by
> display managers) then the increase drops to a mere ~5%.
>
> For arrays things are different. However, I suspect the vast majority of
> large arrays have floating point elements anyway. Why? Because large
> arrays are typically the result of some IOC-level calculation, rather
> than raw hardware values. This is more a suspicion than a hard fact,
> though, and I stand to be corrected.
>
> The (presumably) few applications that really require large arrays of 16
> or 8 bit unsigned integers can use octets and perform conversion to any
> appropriate number type on the client side. This may be inconvenient
> but I doubt it would be a show-stopper.
>
> > And wonder why writing a high number through a 32bit
> > int to a 16bit DAC yields unpredictable results without the client
> > getting an out-of-range exception? Transport it as an array of octets
> > and reassemble it into an integer (unsigned or not) on both ends?
> >
> > If Java was the only language that didn't use unsigned I would be
> > less willing to adapt to it. But my impression is that the use of
> > unsigned is coming from C/C++ and getting less popular in more recent
> > languages. What will be fashionable in 5 years?
>
> A good point. In fact I know of /no/ advanced high-level programming
> language that supports machine-level unsigned integers, /except/ for
> the sole purpose of interfacing with C/C++ libraries.
>
> And thinking of really high-level languages, I wonder if we should think
> of supporting /unbounded/ integers as a native type. This would solve
> the what-comes-after-the-64-bit question nicely. A good implementation
> is readily available, see GNU MP library at http://www.swox.com/gmp/,
> which is used internally by many advanced languages to implement their
> native bignums. The licence is LGPL, which hopefully isn't going to be
> a problem.
>
> Regarding Jeff's argument as to the advantages of programming with
> numbers that are guaranteed to be non-negative: I would argue that this
> is also mostly valid in C/C++, where arrays are always indexed starting
> with zero. Many languages allow upper /and/ lower index bounds to be
> arbitrary (signed) integers, or even any other data type, provided the
> programmer can specify a one-to-one mapping onto a bounded interval of
> integers. Thus, non-negativity seems to be a somewhat arbitrary
> guarantee (why not, for instance, strict positivity?). Furthermore,
> with regard to the efficiency question (only one range-check for upper
> bound instead of two for upper and lower), in C/C++ you are always
> free to apply the zero-cost type cast from signed to unsigned, thereby
> mapping negative numbers to large positive ones, and then range-test
> only for the upper bound. This will fail for /exactly/ the cases where
> the original check failed, as long as you don't rely on the upper half
> of the possible range, something you (Jeff) suggested is to be avoided
> anyway.
>
> Ben