Hi Phil,
Now, I find that the ImageJ viewer plugin is showing "2000x1500 pixels;
32-bit; 12MB". Also, the display is now permanently black.
I can think of two issues that could cause the display to be black:

- You have not set EPICS_CA_MAX_ARRAY_BYTES large enough for 12000000 32-bit values, i.e. greater than 48000000 (I'll explain 32-bit below). In this case you should be seeing error messages in the ImageJ EPICS_AD_Viewer plugin window and in the ImageJ Log window.

- You just need to adjust the brightness and contrast in ImageJ. Use Image/Adjust/Brightness/Contrast or the Control+Shift+C keyboard shortcut.
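The arithmetic behind the first point can be sketched as follows (a minimal illustration using the NELEMENTS value from the startup file below; the variable names are mine, not part of any EPICS API):

```python
# Minimum EPICS_CA_MAX_ARRAY_BYTES for a waveform of NELEMENTS values
# that Channel Access transports as 32-bit LONGs.
nelements = 12_000_000      # NELEMENTS in NDStdArrays.template
bytes_per_value = 4         # USHORT data promoted to LONG = 4 bytes each
required = nelements * bytes_per_value
print(required)             # 48000000, so the variable must be set larger than this
```

You would then set, e.g., `EPICS_CA_MAX_ARRAY_BYTES=50000000` in the environment of both the IOC and the CA client.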
The fact that ImageJ says the data is 32-bit is due to a limitation of EPICS Channel Access. You set FTVL=USHORT in your waveform record. However, Channel Access does not directly support unsigned 16-bit or 32-bit integers; it promotes them to the next larger data type (LONG and DOUBLE respectively). That is why ImageJ is receiving 32-bit (LONG) data.
It works fine to use FTVL=USHORT, except that it is inefficient: you are passing twice as much data over Channel Access as is necessary.
You can fix this by changing it to FTVL=SHORT. This seems incorrect, because the data are really unsigned, but you are "casting" them to signed. That works in this case because ImageJ always treats 16-bit data as unsigned. This in turn is counterintuitive, because ImageJ is written in Java, which does not even support unsigned integers! But trust me, it works!
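The cast is harmless because it only reinterprets the same 16 bits. A small sketch of the round trip (my own illustration, using Python's struct module to mimic the signed/unsigned reinterpretation):

```python
import struct

# A 16-bit pixel value above 32767, e.g. 40000
pixel = 40000

# Reinterpret the same 16 bits as a signed short (what FTVL=SHORT carries)
signed, = struct.unpack('<h', struct.pack('<H', pixel))
print(signed)        # -25536: looks wrong as a signed number...

# ...but reinterpreting those bits as unsigned recovers the original value,
# which is effectively what ImageJ does with 16-bit data
unsigned, = struct.unpack('<H', struct.pack('<h', signed))
print(unsigned)      # 40000
```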
I just tested this with the simDetector set to collect UInt16 data. I set NDStdArrays.template to use either FTVL=USHORT or FTVL=SHORT. With USHORT the ImageJ data is 32-bit, and the values are correct (in the range 0 to 64K). With SHORT the ImageJ data is 16-bit, but still has the correct range of 0 to 64K.
Mark
Apologies - I made a mistake in my earlier post (below).
Instead of scaling the 16-bit data I generate, the 'something' is simply doing a C-style cast to an 8-bit form on the way to the display. So 255 displays as peak white; 256 displays as black.
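That truncating cast keeps only the low byte of each 16-bit value, which explains the 255/256 behaviour. A quick sketch (my own illustration; the mask mimics a C cast from uint16_t to uint8_t):

```python
# C-style narrowing from 16 bits to 8 bits discards the high byte
for value in (255, 256, 511, 65535):
    print(value, value & 0xFF)
# 255 -> 255 (peak white), 256 -> 0 (black), 511 -> 255, 65535 -> 255
```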
Phil
On 07/01/2016 16:45, Phil Atkin wrote:
Hi,
My camera only generates uint16 pixels, so I have removed the configuration setting and constructor argument. In the constructor, I set the NDDataType parameter to NDUInt16.
When I debug my code, it seems as though the buffer is being allocated as expected as an unsigned 16-bit buffer and all is well. However, I then notice to my surprise that the ImageJ viewer plugin is showing "2000x1500 pixels; 8-bit; 2.9MB". What's more, something is clearly scaling my data from the 16 bits I generate (0..65535) to the 8-bit range of the display (0..255). Also, I discover that the startup file contains:
dbLoadRecords("NDStdArrays.template", "P=$(PREFIX),R=image1:,PORT=Image1,ADDR=0,TIMEOUT=1,NDARRAY_PORT=$(PORT),TYPE=Int8,FTVL=UCHAR,NELEMENTS=12000000")
That looks wrong, so I change it to the alternative given for 16-bit data in the ADExample script I'm working from:
dbLoadRecords("NDStdArrays.template", "P=$(PREFIX),R=image1:,PORT=Image1,ADDR=0,TIMEOUT=1,NDARRAY_PORT=$(PORT),TYPE=Int16,FTVL=USHORT,NELEMENTS=12000000")
Now, I find that the ImageJ viewer plugin is showing "2000x1500 pixels;
32-bit; 12MB". Also, the display is now permanently black.
I'm confused; can anyone explain, please? Thanks,
Phil
--
Pixel Analytics is a limited company registered in England. Company number: 7747526; Registered office: 93A New Road, Haslingfield, Cambridge CB23 1LP