EPICS Home

Experimental Physics and Industrial Control System


 

Subject: Re: exporting module versions
From: Andrew Johnson <[email protected]>
To: <[email protected]>
Date: Thu, 2 Nov 2017 12:34:50 -0500
I am with Michael in that I don't want to make promises that are hard to
detect when I've broken them. My assumption has always been that any new
version of an EPICS module will *always* require that it and all
downstream software that relies on it be rebuilt. Doing anything else
results in a non-zero chance that The Machine will go down (and probably
at 2am), so my approach is that it is always cheaper to do unnecessary
rebuilds than it is to cause an outage.

Unless/until we have tooling that can tell us categorically that this
version is 100% binary compatible with the previous version in *all*
respects, I'm not willing to take a chance with someone else's code at
least. Such tooling evidently does not exist (especially for C++), so
Michael's plan to bump the SO version on every release makes the most
sense to me.

To respond to Dirk's comment:
> To opt against compatibility sounds to me like: "I don't care if other
> people have problems with my changes." But we are in a community,
> where other people have to live with our changes.

Everyone in this community has other responsibilities. We share because
it helps us all to do so in the long run, but we can't write rules for
others to follow (in most cases they don't work for us). The issue I see
is that making a promise that "this release is ABI-compatible with the
previous one" is very hard to do in practice, whereas "this release is
API-compatible" is easier, and to be honest I suspect most people would
prefer that we spend our time working on new functionality than on the
minutiae of guaranteed compatibility. I don't always have the time
needed to make the harder promise, and I personally don't expect that of
others either.

Even if someone comes up with tools that can compare library symbols,
that wouldn't guarantee semantic compatibility, and changes to C++
inline methods might not affect symbols at all but would still require a
rebuild of anything that calls those methods.

- Andrew


On 11/02/2017 09:45 AM, Torsten Bögershausen wrote:
> 
> On 02/11/17 15:26, Michael Davidsaver wrote:
>> Can we agree on the distinction between *documenting*, *detecting*, and
>> *avoiding* ABI changes?
>>
>> The recipe I give is a simple way to *document* ABI changes.  This is
>> something which I think every module with an API should be doing.
>>
>> As for *avoiding* ABI changes.  IMO this is really only practical when
>> the language, compiler, and underlying libraries are designed with this
>> in mind.  The Linux world has never focused on this (though glibc is
>> getting better).  And C++ makes maintaining a stable ABI quite difficult.
> 
> That depends on the definition of "Linux world".
> These are my experiences:
> 
> - The kernel internals are refactored all the time
> - The kernel API/ABI is very strict about binary compatibility
>   (an int stays an int, and is not changed to size_t or ssize_t)
> - The user-space library libc is very strict (not to be confused with glibc)
> - The C++ libraries make life harder (but it should be possible to
>   compile under RHEL 6 and run it under CentOS 7, for example)
> 
> In the EPICS world, it is not always clear what is a pure bug fix and
> what is a feature or improvement. I have seen too many bug fixes that
> fix one thing for one application and (unintentionally) break another -
> so I would agree with Mark here:
> What does it cost and what do we gain?
> 
>>
>> In between these two is the question of *detecting* ABI changes.  I
>> haven't had sufficient motivation to do this.  This is why my recipe
>> defaults to using the module version as SONAME.  This encodes the
>> assumption that every release contains ABI changes (and developers
>> working with VCS checkouts are on their own).
>>
>>
>>
>> On 11/02/2017 08:32 AM, Mark Rivers wrote:
>>> Hi Dirk,
>>>
>>>
>>> I agree that ABI compatibility is desirable in theory.  It is
>>> essential for things like operating system libraries.  But for EPICS
>>> applications a cost/benefit analysis is needed.  I would argue that
>>> for the limited number of libraries needed for CA clients (libCom and
>>> ca), having ABI compatibility is good: we can replace those libraries
>>> without having to relink.
>>>
>>>
>>> But for IOCs is it worth it?  The main benefit for having ABI
>>> compatibility is eliminating the time needed to rebuild the IOC
>>> application.  However, on my Linux system rebuilding everything
>>> except EPICS base and V4 (i.e. synApps, seq, asyn, stream,
>>> areaDetector and over 100 IOCs) takes 113 seconds which is pretty
>>> quick.  Not having ABI compatibility does mean that I need to
>>> maintain the source code trees so that this rebuild is possible.  But
>>> I had better do that anyway in case I need to change the configuration of
>>> an IOC or add a new one.  The cost of ABI compatibility is to
>>> constrain the developer from making the types of internal changes we
>>> currently do.  We would also need to make major releases much more
>>> frequently which may intimidate or confuse sites who only care about
>>> API compatibility.
>>>
>>>
>>> I would be curious to know how many sites currently try to simply
>>> replace libraries (e.g. libCom, ca, asyn, seq) for their IOCs, or
>>> would do so if it was better supported, compared to rebuilding from
>>> scratch?
>>>
>>>
>>> In my experience the larger problems are in the lack of ability to
>>> deploy binaries across various OS versions.  I can't build an
>>> application on my CentOS 7 development system and deploy it on RHEL
>>> 6.  (I can build an application on Windows 10 and deploy it on XP;
>>> Microsoft does a better job here.)  The C++ compiler runtime ABIs are
>>> typically not backward compatible from one release of a compiler to
>>> the next.  This is certainly true of Visual Studio, and it is true
>>> for certain versions of g++ as well.
>>>
>>>
>>> Mark
>>>
>>>
>>>
>>> ________________________________
>>> From: [email protected] <[email protected]>
>>> on behalf of Dirk Zimoch <[email protected]>
>>> Sent: Thursday, November 2, 2017 7:42 AM
>>> To: [email protected]
>>> Subject: Re: exporting module versions
>>>
>>>
>>> On 02.11.2017 11:18, Ralph Lange wrote:
>>>> Hi Dirk,
>>>>
>>>> On Thu, Nov 2, 2017 at 10:23 AM, Dirk Zimoch <[email protected]
>>>> <mailto:[email protected]>> wrote:
>>>>
>>>>      [...]  I also opt for binary backward compatibility, so that it is
>>>>      always possible to replace a dynamic library with a newer version
>>>>      without needing to re-build all programs. Forcing a program to
>>>> link
>>>>      only with a very specific library version is, in my opinion, not
>>>>      very maintenance friendly.
>>>>
>>>>
>>>> As you seem to have experience with that: which tools / methodology do
>>>> you suggest to detect and track ABI changes in libraries, especially
>>>> libraries created from C++ sources?
>>>>
>>>> Thanks,
>>>> ~Ralph
>>>>
>>>
>>> I don't know any tools to support compatibility checks, but here is what
>>> I try to do:
>>>
>>> * Never remove API functions (declaring them deprecated is OK)
>>> * Never change the signature of an existing (extern C) function (...in
>>> an incompatible way. Signedness change is often OK. Adding const or
>>> volatile where appropriate is also often OK. Changing 32 bit args (like
>>> int) to potentially 64 bit args (e.g. size_t) is only OK if no 64 bit
>>> was supported previously, but that is already ancient history.)
>>> * Never change the semantics of an existing function (e.g. swap src and
>>> dest parameters in some copy function)
>>> * Never remove, re-order, or change size of the fields of a structure
>>> that is used in an API.
>>> * Only ever add new fields at the end of structures passed to API
>>> functions by reference (and then handle the cases gracefully where the
>>> fields don't exist).
>>> * Never remove or re-order virtual methods (the same for non-C++
>>> function tables like in asyn).
>>> * Expose as little as possible in the API. Not all functions are API
>>> functions, not all structures/classes are used in the API. Not all
>>> Macros are part of the API. Keep public and private header files
>>> separate. Do not install private headers. This makes it possible to
>>> change any non-API function, class, etc. at any time without breaking
>>> the API.
>>> * Do not put private fields/methods in API classes. If private members
>>> are needed, inherit from an API base class without private members. APIs
>>> are not private.
>>>
>>> As it is often not feasible to be so strictly backward compatible, I
>>> suggest (and use in my software) the following rules:
>>>
>>> * A version consists of 3 numbers: major.minor.patch
>>> * Whenever a change is not binary backward compatible, the major number
>>> increases.
>>> * Whenever there are new features, the minor number increases.
>>> * Whenever a bug is fixed without a new feature, the patch number
>>> increases. (A bug fix may be incompatible insofar as the
>>> incompatibility was the actual bug that has been fixed.)
>>> * When linking, use the major number in the file name so that no
>>> incompatible version can be used.
>>> * Do not use the minor number or patch number in linking in order to
>>> allow upgrading the library.
>>>
>>>
>>> See also how Linux (or GNU) does it: /bin/bash on my computer is linked
>>> to libtinfo.so.5, which is a symbolic link to libtinfo.so.5.7. Note that
>>> it is not linked to version 5.7 but only to version 5. This allows
>>> upgrading the library to 5.8, but not to 6.0, without having to rebuild
>>> the executable.
>>>
>>> Or: softIoc on my computer is linked to libstdc++.so.6 which is a
>>> symbolic link to libstdc++.so.6.0.13.
>>>
>>>
>>> I have no idea how to automatically check for backward compatibility
>>> when releasing a new version. I can imagine checking function and
>>> structure/class signatures automatically. But how to check for semantic
>>> changes?
>>>
>>>
>>> Dirk
>>>
>>

-- 
Arguing for surveillance because you have nothing to hide is no
different than making the claim, "I don't care about freedom of
speech because I have nothing to say." -- Edward Snowden

References:
exporting module versions Michael Davidsaver
Re: exporting module versions Andrew Johnson
Re: exporting module versions Dirk Zimoch
Re: exporting module versions Ralph Lange
Re: exporting module versions Dirk Zimoch
Re: exporting module versions Mark Rivers
Re: exporting module versions Michael Davidsaver
Re: exporting module versions Torsten Bögershausen
