Hi Rod,
If everything works correctly in asyn, it sees the EOI and tells
StreamDevice that this is the end of the message via the (eomReason &
ASYN_EOM_END) bit (as opposed to the ASYN_EOM_EOS bit, which is set when
a terminator is found). So it should work with an empty terminator.
I guess you know that you can set the terminators individually for each
protocol, so ASCII protocols can use LF and binary protocols an empty
terminator (""). To get rid of the LF at the end of the message, you
have to read it in the protocol after reading the array. The waveform
record reads data until either NELM elements have been read or the
message ends. If the array has variable size, you need to set NELM large
enough. When trying to read the next 4-byte word, StreamDevice will at
some point find that only one byte is left and stop reading. That byte
(the LF) will stay in the input, and you have to read it explicitly.
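A rough protocol sketch of what I mean (the protocol name, the CURVE?
query, and the exact converter formats -- %4r for 4-byte raw words, %*c
to read and discard the trailing LF -- are assumptions on my part; please
check them against the format-converter section of the StreamDevice
documentation, in particular how to handle little-endian data in your
version):

Terminator = "\n";       # default terminator, used by the ASCII protocols

readCurve {
    Terminator = "";     # binary protocol: end of message comes from EOI only
    out "CURVE?";        # placeholder query, use whatever your scope needs
    in  "%4r%*c";        # 4-byte raw words into the waveform record, then
                         # read and discard the trailing LF
}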
However, StreamDevice has one problem with GPIB, though I don't know if
it affects you: the asynGpib implementation (at least for vxi11, the only
one I can test) does not allow reading the message in chunks. One has to
know in advance how long the message is and provide a sufficiently large
buffer. Therefore, StreamDevice starts with a 64-byte buffer for GPIB
devices. This is enough for most messages but not for arrays. If the
buffer is too small and the message cannot be read in chunks, asyn should
return an asynOverflow error. In that case, StreamDevice doubles the
buffer size, but the current transmission still fails. One of the next
transmissions may succeed once the buffer is sufficiently large.
This problem may be fixed in a future version of asyn or may even have
been fixed without me noticing it. One workaround would be to add a
variable to StreamDevice to specify the initial buffer size.
I have no idea how linux-gpib behaves. I can only test vxi11.
Regards
Dirk
Rod Nussbaumer wrote:
Hi all.
I've been trying to read waveforms from a Tektronix TDS8000 scope
using EPICS R3.14.11 + StreamDevice (2.4) + asyn (4.13) + linux-gpib,
with an NI PCI GPIB interface on an x86 PC platform. I cannot seem to
find the right message-termination settings to satisfy all of the
various components.
The scope documentation says it does not use end-of-line terminator
characters, but rather asserts EOI on the GPIB bus to signal the end of
messages. Nevertheless, there are LF characters appended to each
message, even to binary-formatted waveform data. I'm trying to use
RIB-encoded (Intel-endian binary) waveform data, in 4-byte words.
I'm hoping to read 10 waveforms per second on a periodic scan of an
EPICS waveform record (500 integer data points).
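Roughly, the record I have in mind looks like the following (the record
name and protocol file are placeholders; FTVL LONG matches the 4-byte
integers, and SCAN ".1 second" gives the 10 per second rate):

record(waveform, "SCOPE:CURVE") {
    field(DTYP, "stream")
    field(INP,  "@tds8000.proto readCurve GPIB0 10")
    field(SCAN, ".1 second")
    field(NELM, "500")
    field(FTVL, "LONG")
}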
A simple bit of C code is able to read about 80 waveforms per second in
a tight loop. There, I used the 'ibeos()' function to keep my C code
from terminating reads on any character delimiter, and this seems to
have the desired effect. In the gpib.conf file, I've included the
setting:
set-reos = no
There is also the setting...
eos = ___
... which I've set to 0x0, commented out completely, and set to 0x0a;
none of these seems to change anything, which is in line with my
expectation.
Some, perhaps most, of the settings in gpib.conf seem to correspond to
the capabilities of the asynSetOption() function, which I've also used
in the EPICS startup script. There, it does seem to make a difference.
If I set...
asynSetOption("GPIB0",10,0x0c,0)
...then asynTrace shows me that it does two things:
1. terminates its sequential reads on each binary zero byte
2. times out at the end of the read (and sees/reads the trailing LF)
I seem to be stuck between a rock and a hard place: either I use an
end-of-line terminator, which will eventually show up in the binary
data and prematurely terminate the read, or I disable end-of-message
characters, in which case the read waits until it times out, which
seems to invalidate the record and slows things down a lot.
Any hints on how to proceed, tests to try, or requests for further
information are welcome.
Thanks.
Rod Nussbaumer
ISAC Controls, TRIUMF
Vancouver, Canada.