Argonne National Laboratory

Experimental Physics and
Industrial Control System


Subject: RE: hdf5 (h5py) anyone?
From: Mark Rivers <rivers@cars.uchicago.edu>
To: "'Emmanuel Mayssat'" <emayssat@yahoo.com>, epics <tech-talk@aps.anl.gov>
Date: Thu, 22 Mar 2012 18:15:17 +0000
> Just out of curiosity, why would you store images in hdf5 files?
> How do you store them? 1 image = 1 huge table with pixel intensity?

Because HDF5 is designed for storing datasets of any dimensions, along with the associated metadata.  More popular image formats (JPEG, TIFF) have very limited capacity for self-describing metadata.  One can add tags to TIFF files, but they are not self-describing: one needs to know the tag number and what it contains.

In an HDF5 file one can store a single image, or an entire video stream.  For the latter one can use the HDF5 "Unlimited" dimension.  Here is an example snippet produced using the "h5dump" utility on an HDF5 file produced with areaDetector:

            DATASET "data" {
               DATATYPE  H5T_STD_U8LE
               DATASPACE  SIMPLE { ( 100, 512, 512 ) / ( H5S_UNLIMITED, 512, 512 ) }
               DATA {
               (0,0,0): 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190,
               (0,0,11): 200, 210, 220, 230, 240, 250, 4, 14, 24, 34, 44, 54,
               (0,0,23): 64, 74, 84, 94, 104, 114, 124, 134, 144, 154, 164,
               (0,0,34): 174, 184, 194, 204, 214, 224, 234, 244, 254, 8, 18,
               (0,0,45): 28, 38, 48, 58, 68, 78, 88, 98, 108, 118, 128, 138,
               (0,0,57): 148, 158, 168, 178, 188, 198, 208, 218, 228, 238,
               (0,0,67): 248, 2, 12, 22, 32, 42, 52, 62, 72, 82, 92, 102,
               (0,0,79): 112, 122, 132, 142, 152, 162, 172, 182, 192, 202,
               (0,0,89): 212, 222, 232, 242, 252, 6, 16, 26, 36, 46, 56, 66,
               (0,0,101): 76, 86, 96, 106, 116, 126, 136, 146, 156, 166, 176,
               (0,0,112): 186, 196, 206, 216, 226, 236, 246, 0, 10, 20, 30,
               (0,0,123): 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140,
               (0,0,134): 150, 160, 170, 180, 190, 200, 210, 220, 230, 240,
               (0,0,144): 250, 4, 14, 24, 34, 44, 54, 64, 74, 84, 94, 104,
               (0,0,156): 114, 124, 134, 144, 154, 164, 174, 184, 194, 204,
               (0,0,166): 214, 224, 234, 244, 254, 8, 18, 28, 38, 48, 58, 68,
               (0,0,178): 78, 88, 98, 108, 118, 128, 138, 148, 158, 168, 178,
               (0,0,189): 188, 198, 208, 218, 228, 238, 248, 2, 12, 22, 32,
               (0,0,200): 42, 52, 62, 72, 82, 92, 102, 112, 122, 132, 142,
               (0,0,211): 152, 162, 172, 182, 192, 202, 212, 222, 232, 242,
               (0,0,221): 252, 6, 16, 26, 36, 46, 56, 66, 76, 86, 96, 106,

That says that this is an HDF5 dataset of type U8LE (unsigned 8-bit, little endian) with dimensions (H5S_UNLIMITED, 512, 512).  The unlimited dimension is the dimension in which the array grows as additional images are added to the file.  The dataset currently contains 100 images, but more can be added efficiently.  We are streaming data into such files at rates above 100 MB/sec.
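The growing-dataset technique described above can be sketched in a few lines of h5py.  This is a minimal illustration, not areaDetector's actual file-writer code; the file name and frame contents are made up, but the mechanism (maxshape with an unlimited first axis, resize, then write) is the standard h5py idiom:

```python
import numpy as np
import h5py

with h5py.File("stream.h5", "w") as f:
    # maxshape=(None, 512, 512) makes the first axis H5S_UNLIMITED;
    # resizable datasets must be chunked.
    dset = f.create_dataset("data", shape=(0, 512, 512),
                            maxshape=(None, 512, 512),
                            chunks=(1, 512, 512), dtype="u1")
    for i in range(100):
        frame = np.full((512, 512), i % 256, dtype=np.uint8)  # stand-in image
        dset.resize(dset.shape[0] + 1, axis=0)  # grow along the unlimited axis
        dset[-1] = frame  # append the new frame

with h5py.File("stream.h5", "r") as f:
    print(f["data"].shape)  # (100, 512, 512)
```

Reading the file back with h5dump would show a DATASPACE like the one above: ( 100, 512, 512 ) / ( H5S_UNLIMITED, 512, 512 ).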

In this same file we are recording metadata from other EPICS PVs when each image is collected (storage ring current, undulator gap, motor positions, etc.).
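A sketch of that per-image metadata recording, again in h5py: each scalar item gets its own growable 1-D dataset kept in step with the image dataset.  The dataset name and values here are invented for illustration; in practice the value would come from a Channel Access read of the PV:

```python
import h5py

with h5py.File("scan.h5", "w") as f:
    # One growable 1-D dataset per metadata item (name is hypothetical);
    # h5py auto-chunks when maxshape is given.
    ring = f.create_dataset("ring_current", shape=(0,), maxshape=(None,),
                            dtype="f8")
    for i in range(5):
        value = 102.0 - 0.01 * i  # stand-in for a caget of the PV
        ring.resize(ring.shape[0] + 1, axis=0)
        ring[-1] = value  # one entry per image acquired

with h5py.File("scan.h5", "r") as f:
    print(f["ring_current"][:])
```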

> For a project I worked on some time ago (but dropped due to lack of funding), we were looking at CT-scan type applications with 
> 8 exposures per rotation angle, 180 deg rotation with 0.5 deg steps, i.e. a decent-size data set. 
> I am wondering if HDF5 could have helped in the reconstruction. Then, and still today, we were/are using MAR/Rayonix software.

Some APS beamlines currently store both the raw image data and the reconstructions in HDF5 files.  It does not "help in the reconstruction" per se, but it helps to keep all of the metadata, including what processing has been done on the images, in a single location that is relatively easy to parse and browse.
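Part of what makes a single HDF5 file easy to parse and browse is that the hierarchy can be walked generically.  A small self-contained sketch (the group and dataset names are illustrative, loosely modeled on tomography-file conventions, not any particular beamline's layout):

```python
import h5py

# Build a tiny hierarchical file: raw data plus a processing-history note.
with h5py.File("browse.h5", "w") as f:
    f.create_dataset("exchange/data", data=[[1, 2], [3, 4]])
    f["exchange/data"].attrs["units"] = "counts"
    f.create_dataset("process/recon_algorithm", data="gridrec")

# visititems walks every group and dataset in the file.
names = []
with h5py.File("browse.h5", "r") as f:
    f.visititems(lambda name, obj: names.append(name))
print(sorted(names))
# ['exchange', 'exchange/data', 'process', 'process/recon_algorithm']
```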

I run a tomography beamline, and I use netCDF rather than HDF5.  netCDF is similar in that it is a portable, self-describing binary format, but it is simpler and more restrictive because it is not hierarchical.

> MAR doesn't store images in HDF5 format, but can the areaDetector package convert those files on the fly?

Yes, it can.  MAR does not provide an API to read the image data from the MAR server, so my areaDetector driver reads back the files that the MAR software writes (typically TIFF files) and then passes those images to the areaDetector plugins.  Those plugins compute statistics, write files (including HDF5 and netCDF), and export the images to EPICS waveform records so that Channel Access clients can display them in real time.

Mark


From: Emmanuel Mayssat [mailto:emayssat@yahoo.com] 
Sent: Thursday, March 22, 2012 12:28 PM
To: Mark Rivers; epics
Subject: Re: hdf5 (h5py) anyone?

Just out of curiosity, why would you store images in hdf5 files?
How do you store them? 1 image = 1 huge table with pixel intensity?

It must be because it is easier to get statistics, extract regions of interest, work with several images (datasets), etc.
Is that correct?

For a project I worked on some time ago (but dropped due to lack of funding), we were looking at CT-scan type applications with 8 exposures per rotation angle, 180 deg rotation with 0.5 deg steps, i.e. a decent-size data set. I am wondering if HDF5 could have helped in the reconstruction. Then, and still today, we were/are using MAR/Rayonix software.

MAR doesn't store images in HDF5 format, but can the areaDetector package convert those files on the fly?

--
E


________________________________________
From: Mark Rivers <rivers@cars.uchicago.edu>
To: 'Emmanuel Mayssat' <emayssat@yahoo.com>; epics <tech-talk@aps.anl.gov> 
Sent: Thursday, March 22, 2012 7:32 AM
Subject: RE: hdf5 (h5py) anyone?


The EPICS areaDetector package currently has two file writers that produce HDF5 files:
 
NDFileNexus creates NeXus-compliant HDF5 files using the NeXus API.  It was written by John Hammonds from the APS.
NDFileHDF5 creates HDF5 files using the native HDF5 API.  It was written by Ulrik Pedersen from Diamond Light Source.
 
Mark

