Folks,
Larry wrote:
> I think it would be nice to have the ability to transfer images through
> epics, but I don't think it is a good idea to make this the sole means to
> save images. For a really fast system you probably want to write directly
> to a local RAID array using a dedicated controller. I think this could be
> at least 10 x's faster than the 20 Mbytes/s you list.
This is the approach I have taken with the Pilatus 100K detector. The
Pilatus camserver software writes the data to disk as quickly as
possible (over 200 frames/sec). The EPICS software posts a subset of
the images to EPICS so that the operator can see what is going on. It
is also the approach I plan to take with the new model I proposed.
I've made a lot of progress on the proposal I made last week. Thanks to
everyone who has provided ideas.
Here is an outline of the current architecture from the bottom up,
slightly revised from last week, with a progress report on what I've
done:
1. Vendor-supplied library or interface to the vendor's application. This
includes the Prosilica API for their Gigabit Ethernet cameras, the OLE/COM
interface to WinView for Roper Scientific cameras, and the socket
interfaces to the MAR-CCD, MAR-345, and Pilatus detectors.
2. A device-dependent C API on top of the vendor library that implements
methods like setIntegerProperty, setDoubleProperty, getImageData, etc.
If the image data can be accessed via the API then that will be done;
otherwise the image data will be accessed by reading the disk files that
the vendor software creates. The functions in this API are all that
need to be written when supporting a new camera.
This layer uses EPICS only for the libCom functions that make it
platform-independent.
I now have a working definition of the interface for layer 2, and a
working driver for the Prosilica GigE cameras.
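To make the shape of this layer concrete, here is a minimal sketch of what
the layer 2 header could look like. Apart from setIntegerProperty,
setDoubleProperty, and getImageData, which are mentioned above, the handle
type and the callback-registration function are illustrative assumptions,
not the final definition:

/* Hypothetical sketch of the layer 2 device-dependent C API.  Only the
 * three property/image functions come from the outline above; the handle
 * type and callback registration are assumptions. */

#include <stddef.h>

typedef struct detectorDriver detectorDriver;  /* opaque per-camera handle */

/* Called by the layer 2 driver for every new frame; layer 3a registers it. */
typedef void (*imageCallback)(void *userPvt,
                              void *pData, size_t nBytes,
                              int width, int height, int bitsPerPixel);

int setIntegerProperty(detectorDriver *pDrv, const char *name, int value);
int setDoubleProperty (detectorDriver *pDrv, const char *name, double value);
int getImageData      (detectorDriver *pDrv, void *pBuffer, size_t maxBytes,
                       size_t *nBytesOut);
int registerImageCallback(detectorDriver *pDrv, imageCallback cb, void *userPvt);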
3a. A basic asyn driver that implements the standard asyn interfaces
(asynInt32, asynFloat64, etc.) and that calls the API at level 2. This
layer is device-independent. It implements the control functions and
processes callbacks when there is new image data; the idea is that the
layer 2 driver calls this layer for every new image. It then uses the
standard asyn callbacks to serve data to other EPICS asyn clients.
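As a rough illustration (not real asyn code) of the new-image path through
this layer, something like the following fan-out is what I have in mind;
the registration scheme and every name here are placeholders:

/* Sketch only: layer 3a receives each frame from layer 2 and fans it out
 * to whichever optional servers (3b image server, 3c ROI server) have
 * registered themselves.  All names are assumptions. */

#include <stddef.h>

#define MAX_CLIENTS 8

typedef void (*frameHandler)(void *clientPvt, void *pData, size_t nBytes,
                             int width, int height, int bitsPerPixel);

typedef struct {
    frameHandler handler;
    void        *clientPvt;
} clientEntry;

static clientEntry clients[MAX_CLIENTS];
static int nClients;

/* Called once by 3b or 3c at startup if (and only if) they are loaded. */
int addFrameClient(frameHandler handler, void *clientPvt)
{
    if (nClients >= MAX_CLIENTS) return -1;
    clients[nClients].handler   = handler;
    clients[nClients].clientPvt = clientPvt;
    nClients++;
    return 0;
}

/* This is the routine the layer 2 driver calls for every new image. */
void newImageCallback(void *userPvt, void *pData, size_t nBytes,
                      int width, int height, int bitsPerPixel)
{
    int i;
    for (i = 0; i < nClients; i++)
        clients[i].handler(clients[i].clientPvt, pData, nBytes,
                           width, height, bitsPerPixel);
}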
3b. An asyn driver to serve image data to EPICS. This driver does not
need to be started; if it is not, the image data is simply not served to
EPICS at all. The driver also implements a PV to throttle the image
update rate and a PV to disable image updates entirely. It also supports
an optional region of interest (ROI) and binning, so that EPICS does not
get the full dataset but still gets enough for quality assurance.
I have not written this driver yet.
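To give an idea of the throttling and binning this driver would do before
posting an image, here is a rough sketch; the variables standing in for
PVs and the overall structure are placeholders, not the actual driver:

/* Sketch of the frame handler in layer 3b: drop frames to respect an
 * update-rate limit set by a PV, and bin before posting.  Only the
 * libCom epicsTime calls are real; everything else is an assumption. */

#include <stdint.h>
#include <stddef.h>
#include <epicsTime.h>

static double minUpdatePeriod = 0.5;   /* seconds; set from the throttle PV */
static int    updatesEnabled  = 1;     /* set from the enable/disable PV */
static epicsTimeStamp lastPost;

/* Bin a 16-bit image by an integer factor in each direction. */
static void binImage16(const uint16_t *in, int width, int height,
                       int bin, uint16_t *out)
{
    int x, y, bx, by;
    for (y = 0; y < height / bin; y++) {
        for (x = 0; x < width / bin; x++) {
            uint32_t sum = 0;
            for (by = 0; by < bin; by++)
                for (bx = 0; bx < bin; bx++)
                    sum += in[(y*bin + by)*width + (x*bin + bx)];
            out[y*(width/bin) + x] = (uint16_t)(sum / (bin*bin));
        }
    }
}

void imageServerHandler(void *clientPvt, void *pData, size_t nBytes,
                        int width, int height, int bitsPerPixel)
{
    epicsTimeStamp now;
    if (!updatesEnabled) return;                 /* image updates disabled */
    epicsTimeGetCurrent(&now);
    if (epicsTimeDiffInSeconds(&now, &lastPost) < minUpdatePeriod)
        return;                                  /* throttle: skip this frame */
    lastPost = now;
    /* ... extract the ROI, bin it with binImage16(), then post the
     * (much smaller) array to EPICS via the array interfaces ... */
}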
3c. An asyn driver to compute and serve region-of-interest statistics to
EPICS. Thanks to Brian for suggesting this idea of an ROI server. This
driver does not need to be started, in which case these calculations are
simply not done. The initial ROI driver will handle simple rectangular
ROIs with statistics like max, min, FWHM in each direction, centroid,
etc. It is easy to write new ROI drivers for more complicated geometries
or statistics.
I have not written this driver yet.
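For the simple rectangular case the statistics are straightforward to
compute; here is an illustrative sketch (the structure and function names
are assumptions, and the FWHM, which would come from the projections in
each direction, is omitted for brevity):

/* Sketch of the statistics the rectangular ROI driver might compute for
 * a 16-bit image.  Names and layout are illustrative, not the driver's
 * actual interface. */

#include <stdint.h>

typedef struct {
    double min, max, total;
    double centroidX, centroidY;   /* intensity-weighted centroid */
} roiStats;

void computeRoiStats(const uint16_t *image, int imageWidth,
                     int x0, int y0, int roiWidth, int roiHeight,
                     roiStats *stats)
{
    int x, y;
    double sumX = 0.0, sumY = 0.0;
    stats->min = stats->max = image[y0*imageWidth + x0];
    stats->total = 0.0;
    for (y = y0; y < y0 + roiHeight; y++) {
        for (x = x0; x < x0 + roiWidth; x++) {
            double v = image[y*imageWidth + x];
            if (v < stats->min) stats->min = v;
            if (v > stats->max) stats->max = v;
            stats->total += v;
            sumX += v * x;
            sumY += v * y;
        }
    }
    stats->centroidX = (stats->total > 0.0) ? sumX / stats->total : 0.0;
    stats->centroidY = (stats->total > 0.0) ? sumY / stats->total : 0.0;
}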
4. asyn device support for standard EPICS records. This is already
included in asyn. However, asyn currently only provides an
asynInt32Array interface for image-type data. I will add asynInt8Array
and asynInt16Array interfaces for efficient transmission of 8-bit and
16-bit image data.
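By analogy with the existing asynInt32Array interface, the new interfaces
would presumably look something like the following; treat this as my
working assumption about the eventual definition rather than the final one:

/* Sketch of a possible asynInt8Array interface, modeled on asynInt32Array.
 * This is an assumption about the definition, not a released header. */

#include <stddef.h>
#include <epicsTypes.h>
#include <asynDriver.h>

typedef void (*interruptCallbackInt8Array)(void *userPvt, asynUser *pasynUser,
                                           epicsInt8 *value, size_t nelements);

typedef struct asynInt8Array {
    asynStatus (*write)(void *drvPvt, asynUser *pasynUser,
                        epicsInt8 *value, size_t nelements);
    asynStatus (*read)(void *drvPvt, asynUser *pasynUser,
                       epicsInt8 *value, size_t nelements, size_t *nIn);
    asynStatus (*registerInterruptUser)(void *drvPvt, asynUser *pasynUser,
                                        interruptCallbackInt8Array callback,
                                        void *userPvt, void **registrarPvt);
    asynStatus (*cancelInterruptUser)(void *drvPvt, asynUser *pasynUser,
                                      void *registrarPvt);
} asynInt8Array;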
5. Separate EPICS databases for layers 3a, 3b, and 3c. The database for
layer 3a is working; I still need to write the databases for 3b and 3c.
6. Optional EPICS databases that implement features unique to a specific
detector.
With this model it is easy to add new parameters and new functions to
support the specific features of new detectors.
7. EPICS SNL programs for coordinating complex tasks like synchronizing
shutters.
Cheers,
Mark
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of l.lurio
> Sent: Tuesday, March 11, 2008 1:17 PM
> To: 'Kate Feng'; 'tieman'
> Cc: [email protected]; [email protected]
> Subject: RE: [APS Beamline_controls] EPICS support for 2-D detectors/cameras
>
> Hi,
>
> I think it would be nice to have the ability to transfer images through
> epics, but I don't think it is a good idea to make this the sole means to
> save images. For a really fast system you probably want to write directly
> to a local RAID array using a dedicated controller. I think this could be
> at least 10 x's faster than the 20 Mbytes/s you list.
>
> Larry
>
> Laurence Lurio
> Associate Professor
> Department of Physics
> Northern Illinois University
> Phone: 815 753-6492
> Cell: 815 260-4900
> Email: [email protected]
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of Kate Feng
> Sent: Tuesday, March 11, 2008 1:56 PM
> To: tieman
> Cc: [email protected]; [email protected]
> Subject: Re: [APS Beamline_controls] EPICS support for 2-D detectors/cameras
>
> tieman wrote:
>
> >
> > I've thought about the standard EPICS IOC route. The Image Server was
> > always an application first and a PV server second. I still believe
> > there is value in that philosophy but can see the value in going
> > standard EPICS.
> > I don't see a danger in sticking with the Portable Channel Access
> > Server. There's enough code reliant on it now that I don't see it
> > going away. It does not suffer from the 16K barrier if built to avoid
> > it but I generally disagree with EPICS as a means to move image data.
> > I've already done a small amount of preliminary work in developing a
> > new code base using the PCAS but it's nothing that can't be thrown away.
> >
> It is always a good idea to constantly seek improvement on EPICS.
> However, I am not unhappy about its CA performance so far. Sometimes
> its performance is coupled with the hardware, software, and PC that
> are used in your system.
>
> In Nov. 2007, I was able to "display real-time image" at an average
> of 20 Mbytes/sec (a max. of 22 Mbytes/sec) while the camera was
> transferring the image data at 28 Mbytes/sec to the system memory
> via the 1 Gbit network. The performance is up by 33% since the paper
> was published at the ICALEPCS07 conference, which was held in Oct. 2007.
> See http://accelconf.web.cern.ch/accelconf/ica07/PAPERS/WPPB12.PDF
> The system configuration is mvme5500/firewire/Linux PC.
> The better news is that I do not think I have hit the bottleneck yet.
> 1) I could still improve the mvme5500 BSP that I wrote to get higher
> performance.
> 2) The performance is estimated to be at least twice as high with the
> mvme6100 support in the "disco" BSP, which I derived from the mvme5500
> BSP that I wrote. The "disco" BSP supports Discovery-based boards such
> as the mvme5500 and mvme6100.
>
> However, the analysis of 1) and 2) is not a high-priority task for me
> at present.
> Only the sky is the limit.
>
> Cheers,
> Kate