Hi Michael,
On Friday 13 November 2009 07:40:50 Davidsaver, Michael wrote:
>
> While I would agree that run-time is too late, I think that link-time is
> also too late. Missing symbols are not a very friendly way to find out.
> Compile time is better since more informative error messages can be given
> with the #error directive.
>
> I would make 2 changes. First, osi/os/default/devLib*.c would contain a
> stub implementation to give something to link against. Second, create a
> header called osi/os/*/devLibConfig.h which would #define HAVE_DEVLIBVME
> and HAVE_DEVLIBPCI. Trying to include 'devLibPCI.h' when HAVE_DEVLIBPCI is
> not defined would be an #error.
I agree that generating errors at compile-time is more friendly, but I don't
know how to ask a compiler to predict the future. I don't believe the
compiler can ever solve my main issue: I don't want to have two different
installations of Base whose only difference is whether HAVE_DEVLIBVME is
defined for any particular target/OS, nor do I want to have two targets
that differ only by that setting.
Currently none of the APS Accelerator IOCs use the PCI to VME interface, so we
would have no reason to define HAVE_DEVLIBVME in our build of Base for the
linux-x86 target. However, if one of our customers asks us to create a system
where it makes sense to use that board on a linux-x86 box, I want our
applications engineers to be able to download the relevant driver module,
build it, and link the result into a new IOC image that is built against our
standard installation of Base. Link time is the earliest moment when we have
all of the information about what a specific IOC's configuration is going to
be, so the only way I can see to get the flexibility I need is for the linker
to be the tool that discovers whether the requested configuration is valid or
not. If you have an alternative that will meet the above requirements, though,
I'll be happy to hear about it.
> I think what you mean is that you don't want a single global configuration
> switch. You are correct that this wouldn't work.
But isn't your osi/os/*/devLibConfig.h header just such a global config switch,
one that happens to be OS-specific? There is currently no PCI to VME interface
for solaris-x86 or solaris-sparc, so the osi/os/Solaris/devLibConfig.h file
would not #define HAVE_DEVLIBVME based on current knowledge. There's nothing
that fundamentally prevents someone from creating such a driver, but in order
to use it they would have to recompile Base after editing that file.
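Just so we're comparing the same thing, here is how I read your proposal. This
is only a minimal sketch: the header and macro names come from your message,
but the exact shape of the guard is my guess:

    /* osi/os/Solaris/devLibConfig.h -- sketch of the per-OS switch */
    #ifndef INC_devLibConfig_H
    #define INC_devLibConfig_H
    /* No PCI-to-VME bridge support exists today, so based on current
     * knowledge this file would not define HAVE_DEVLIBVME: */
    /* #define HAVE_DEVLIBVME */
    #endif

    /* devLibVME.h -- the compile-time guard you describe */
    #include "devLibConfig.h"
    #ifndef HAVE_DEVLIBVME
    #  error devLibVME is not supported on this target (see devLibConfig.h)
    #endif

Everything in that header is frozen when Base is compiled for the target, which
is exactly the decision I don't want to have to make that early.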
> > I don't follow your statement that "there is only one plugin for it."
> > Both RTEMS and vxWorks provide plug-ins for the interface. The issue
> > that I have with it is that a number of routines were added to devLib
> > that don't go through the function table, duplicating existing
> > functionality in some cases.
>
> Yes, they each provide _A_ plugin. There is _A_ plugin for Linux. The
> plugin functionality is not being used.
It's not being used dynamically (i.e. being set at runtime), but it *is* used
statically by the linker to ensure that IOCs that link to the Linux plugin can
use devLib's VME functionality, while IOCs that don't can't.
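To spell out what I mean by "used statically": the core devLib code references
a function table that only the plugin defines, so the linker either resolves it
or refuses to when the IOC is built. A rough sketch with invented names; the
real devLibVirtualOS table in Base has a different and larger set of entries:

    /* In core devLib (always part of Base): */
    #include <stddef.h>

    typedef struct devLibVME_vtable {
        long (*mapAddr)(size_t vmeAddr, size_t size, volatile void **pCpuAddr);
        /* ... more operations ... */
    } devLibVME_vtable;

    /* Referenced here but *not* defined here: */
    extern const devLibVME_vtable *pdevLibVME;

    long devVMEMapExample(size_t vmeAddr, size_t size, volatile void **pCpuAddr)
    {
        /* If no plugin was linked into the IOC, pdevLibVME is an undefined
         * symbol and the IOC build fails at link time, not at runtime. */
        return pdevLibVME->mapAddr(vmeAddr, size, pCpuAddr);
    }

    /* In the Linux plugin (a separately-built module the IOC may link): */
    /* static const devLibVME_vtable linuxVME = { linuxMapAddr };
     * const devLibVME_vtable *pdevLibVME = &linuxVME;            */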
BTW, I would want to apply this same argument to the devLib PCI functionality
too: we don't need to (or want to have to) link with a user-mode PCI bus driver
to build any of our existing Linux IOCs, but I would like an IOC that needs to
talk directly to a PCI module to be able to pull in the appropriate driver
without having to rebuild Base.
> > I like Ralph's idea, which also gives you more flexibility to change the
> > API if you need to.
>
> The question was: Should devLibVME handle multiple VME bridges? This
> sounds like a yes.
I have to admit I don't see it being used very much, so I'm probably on the
fence about whether to include the capability or not. I guess the problem
with Ralph's idea is that it wouldn't be possible to use an old-style driver
for any VME board that is found on the secondary bus, so maybe it doesn't
really help that much in practice and the answer should be no?
> If so then the above only applies to the proposed new PCI part of devLib.
> The VME part would still use a pointer table, or rather tables.
The PCI standard already handles multiple buses, but as I said above I don't
think you can implement the IOC build flexibility that I'm looking for without
going through something like a pointer table.
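If we did decide to support multiple bridges, I imagine the "or rather tables"
part would look something like an array of such tables indexed by bridge
number. Again this is just a sketch with made-up names and limits:

    #include <stddef.h>

    #define MAX_VME_BRIDGES 4   /* invented limit, purely for illustration */

    typedef struct vmeBridgeOps {
        long (*mapAddr)(size_t vmeAddr, size_t size, volatile void **pCpuAddr);
        long (*connectInt)(unsigned vector, void (*isr)(void *), void *arg);
    } vmeBridgeOps;

    /* Each bridge plugin the IOC links fills in one slot when it registers;
     * devLib calls would take a bridge number, defaulting to 0 so that
     * existing single-bridge device support keeps working unchanged. */
    extern const vmeBridgeOps *pVmeBridge[MAX_VME_BRIDGES];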
BTW, the way I would recommend controlling the plugin would be through a
registrar() entry in a DBD file. I don't have a problem with requiring that
IOCs using the VME interface should have to explicitly include a devLibVme.dbd
file, say, even on vxWorks and RTEMS. It would normally be included by the
device support for the VME devices it's using rather than directly by the IOC
itself, but it's a valid build configuration parameter and we normally specify
those using DBD files.
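For what it's worth, the registrar mechanism I have in mind is the standard
one; here is a sketch of what the plugin's hook might look like (the function
and file names below are invented):

    /* devLibVme.dbd would contain the single line:
     *     registrar(devLibVMERegistrar)
     */

    /* devLibVMERegistrar.c */
    #include <registryFunction.h>
    #include <epicsExport.h>

    static void devLibVMERegistrar(void)
    {
        /* Select and initialize the VME bridge plugin here, e.g. install
         * its function table into devLib (illustrative only). */
    }
    epicsExportRegistrar(devLibVMERegistrar);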
- Andrew
--
The best FOSS code is written to be read by other humans -- Harald Welte