Experimental Physics and Industrial Control System
And what is more important is that in the good old days beer was only
one pound a pint, and my hair was not grey!
On Tue, 2012-06-26 at 21:04 +0000, Hill, Jeff wrote:
> > i dont remember this being necessary in the good old days :)
>
> So it's true that, in the good old days, after returning
> from flush we had confidence that the message was at least
> in the clutches of the IP kernel, whereas now one knows only
> that the client library has accepted the message and is in
> the process of delivering it. That's slightly different
> behavior, but the new behavior is arguably backwards compatible
> considering that, in the past, many additional hops still
> remained: delivery to the server's IP kernel, delivery to
> the server, delivery of the put request to the database,
> and completion of asynchronous record processing in the database.
>
> The bottom line is that the only way to know for certain
> exactly when a put request has completed is to receive a
> callback resulting from using a put callback request.
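>
> For example, a minimal sketch (the putDone handler name is mine;
> the rest is the standard cadef.h API):
>
>     #include <stdio.h>
>     #include <cadef.h>
>
>     /* called from a CA auxiliary thread once the server reports that
>        the put, including record processing, has finished */
>     static void putDone(struct event_handler_args args)
>     {
>         if (args.status == ECA_NORMAL)
>             printf("put completed on %s\n", ca_name(args.chid));
>         else
>             printf("put failed: %s\n", ca_message(args.status));
>     }
>
>     int main(int argc, char *argv[])
>     {
>         double data = 86;
>         chid mychid;
>         SEVCHK(ca_create_channel(argv[1], NULL, NULL, 10, &mychid), "failure");
>         SEVCHK(ca_pend_io(10.0), "failure");
>         SEVCHK(ca_array_put_callback(DBR_DOUBLE, 1, mychid, &data,
>                                      putDone, NULL), "failure");
>         ca_pend_event(10.0);   /* give the callback a chance to run */
>         ca_context_destroy();
>         return 0;
>     }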
>
> Jeff
>
>
> > -----Original Message-----
> > From: [email protected] [mailto:[email protected]]
> > On Behalf Of Hill, Jeff
> > Sent: Tuesday, June 26, 2012 12:23 PM
> > To: steve hunt
> > Cc: [email protected]
> > Subject: RE: pend_io
> >
> > > i dont remember this being necessary in the good old days :)
> >
> > o Even in the good old days it was very OS dependent what
> > might happen if the ca client code exited without giving the
> > library a chance to close the sockets, and there was more
> > variability back then. As I recall, the SSC even held the
> > record for the largest number of UNIX variations running :-)
> >
> > o In the good old days we didn’t have opportunities for
> > multiple cores to work concurrently sending and receiving
> > messages to multiple sockets. This is a particularly important
> > point because, unlike in the past, we can now have N threads
> > all blocking for delivery in socket send concurrently. In the
> > past, inside the ca_flush_io call, we had a for loop around
> > the N calls to socket send, which were forced to complete
> > sequentially.
> >
> > o In the good old days we didn’t have opportunities
> > for the application to do its own work at the same time
> > that the library is sending and receiving messages to multiple
> > sockets.
> >
> > o In the good old days we didn’t have opportunities
> > for preemptive callback (from an auxiliary thread) to a
> > multithreaded application; a sketch of what that looks
> > like today appears below.
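> >
> > As a sketch (the eventCallBack handler name is mine; the rest is
> > the standard cadef.h API):
> >
> >     #include <stdio.h>
> >     #include <cadef.h>
> >     #include <epicsThread.h>
> >
> >     /* runs in a CA auxiliary thread, possibly while main() is busy */
> >     static void eventCallBack(struct event_handler_args args)
> >     {
> >         if (args.status == ECA_NORMAL)
> >             printf("%s = %f\n", ca_name(args.chid),
> >                    *(const double *) args.dbr);
> >     }
> >
> >     int main(int argc, char *argv[])
> >     {
> >         chid mychid;
> >         SEVCHK(ca_context_create(ca_enable_preemptive_callback), "failure");
> >         SEVCHK(ca_create_channel(argv[1], NULL, NULL, 10, &mychid), "failure");
> >         SEVCHK(ca_pend_io(10.0), "failure");
> >         SEVCHK(ca_create_subscription(DBR_DOUBLE, 1, mychid, DBE_VALUE,
> >                                       eventCallBack, NULL, NULL), "failure");
> >         SEVCHK(ca_flush_io(), "failure");
> >         epicsThreadSleep(10.0);   /* callbacks arrive while we sleep */
> >         ca_context_destroy();
> >         return 0;
> >     }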
> >
> > Jeff
> >
> > > -----Original Message-----
> > > From: steve hunt [mailto:[email protected]]
> > > Sent: Tuesday, June 26, 2012 11:22 AM
> > > To: Hill, Jeff
> > > Cc: [email protected]
> > > Subject: Re: pend_io
> > >
> > > thanks Jeff,
> > > perhaps the next release could have the caClient example from
> > > makeBaseApp follow this advice?
> > >
> > > by the way, i dont remember this being necessary in the good old days :)
> > >
> > >
> > > cheers
> > > Steve
> > >
> > >
> > > Sent from my iPhone
> > >
> > > On 26 Jun 2012, at 19:12, "Hill, Jeff" <[email protected]> wrote:
> > >
> > > > Hi Steve,
> > > >
> > > >> I changed caget to caput :)
> > > >>
> > > >> The value was not written to my ioc
> > > >
> > > >> int main (int argc, char *argv[])
> > > >> {
> > > >>     double data = 86;
> > > >>     chid mychid;
> > > >>     SEVCHK(ca_create_channel(argv[1], NULL, NULL, 10, &mychid), "failure");
> > > >>     SEVCHK(ca_pend_io(10.0), "failure");
> > > >>     SEVCHK(ca_array_put(DBR_DOUBLE, 1, mychid, &data), "failure");
> > > >>     SEVCHK(ca_pend_io(10.0), "failure");
> > > >> }
> > > >
> > > > It's true that ca_pend_io causes an implicit flush to occur (i.e. the
> > > > equivalent of a call to ca_flush_io).
> > > >
> > > > A call to ca_flush_io is analogous to placing a letter in the
> > > > mailbox. We now know that the problem of delivery is out of our
> > > > hands and has been entrusted to the postal service. So far so
> > > > good, but the problem in the above program is that a bomb has
> > > > been dropped on our local postal service before the letter can
> > > > be delivered.
> > > >
> > > > When your program exits from main without calling ca_context_destroy,
> > > > the ca client library's send threads are very ungracefully
> > > > terminated before they have been given a chance to deliver any
> > > > messages that they might have in their work queues to TCP.
> > > > Furthermore, it is very OS dependent what might happen to any
> > > > undelivered TCP messages lingering in the local IP kernel if you
> > > > exit from main without giving the CA client library an opportunity
> > > > to close its TCP sockets; we end up with undefined behavior.
> > > >
> > > > In contrast, if the program calls ca_context_destroy before exiting
> > > > from main then the CA client library will gracefully shut down its
> > > > worker threads, delivering all outstanding messages to the IP kernel
> > > > before they exit, and will also gracefully close all of its sockets;
> > > > now we expect defined behavior.
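> > > >
> > > > For example, a minimal corrected sketch of your program (your
> > > > code, plus an explicit flush and a graceful shutdown):
> > > >
> > > >     #include <cadef.h>
> > > >
> > > >     int main(int argc, char *argv[])
> > > >     {
> > > >         double data = 86;
> > > >         chid mychid;
> > > >         SEVCHK(ca_create_channel(argv[1], NULL, NULL, 10, &mychid), "failure");
> > > >         SEVCHK(ca_pend_io(10.0), "failure");
> > > >         SEVCHK(ca_array_put(DBR_DOUBLE, 1, mychid, &data), "failure");
> > > >         SEVCHK(ca_flush_io(), "failure");
> > > >         /* deliver outstanding messages to TCP, close sockets */
> > > >         ca_context_destroy();
> > > >         return 0;
> > > >     }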
> > > >
> > > > There is a (very brief) discussion of this issue in the
> > > > troubleshooting section of the CA reference manual, which
> > > > perhaps could be expanded.
> > > >
> > > > The bottom line is you shouldn’t terminate the local postal
> > > > service unless you also provide some additional funding for a
> > > > close-out :-)
> > > >
> > > > Jeff
> > > >
> > > >> -----Original Message-----
> > > >> From: [email protected] [mailto:core-talk-
> > > >> [email protected]]
> > > >> On Behalf Of Steve Hunt
> > > >> Sent: Tuesday, June 26, 2012 7:49 AM
> > > >> To: [email protected]
> > > >> Subject: pend_io
> > > >>
> > > >> I have a problem running a trivial (too trivial it seems) ca client
> > > >> program.
> > > >>
> > > >> In fact, I just generated the ca example using makeBaseApp.
> > > >>
> > > >> I changed caget to caput :)
> > > >>
> > > >> The value was not written to my ioc
> > > >>
> > > >> I am running 3.14.12.2 on linux-x86_64, but the behavior seems the
> > > >> same in older versions (at least 3.14.9).
> > > >>
> > > >> I am sending this to core-talk rather than tech-talk as this seems
> > > >> to be a real ca problem.
> > > >>
> > > >>
> > > >> ###########################################################################
> > > >> #include <stddef.h>
> > > >> #include <stdlib.h>
> > > >> #include <stdio.h>
> > > >> #include <string.h>
> > > >>
> > > >> #include <cadef.h>
> > > >>
> > > >> int main (int argc, char *argv[])
> > > >> {
> > > >>     double data = 86;
> > > >>     chid mychid;
> > > >>
> > > >>     /* connect to the channel named on the command line */
> > > >>     SEVCHK(ca_create_channel(argv[1], NULL, NULL, 10, &mychid), "failure");
> > > >>     SEVCHK(ca_pend_io(10.0), "failure");
> > > >>
> > > >>     /* queue the put request */
> > > >>     SEVCHK(ca_array_put(DBR_DOUBLE, 1, mychid, &data), "failure");
> > > >>     SEVCHK(ca_pend_io(10.0), "failure");
> > > >>     // SEVCHK(ca_array_get(DBR_DOUBLE, 1, mychid, &data), "failure");
> > > >>     // SEVCHK(ca_pend_io(10.0), "failure");
> > > >>     // SEVCHK(ca_flush_io(), "failure");
> > > >>     // SEVCHK(ca_poll(), "failure");
> > > >>
> > > >>     // sleep(10);
> > > >>     // ca_context_destroy();
> > > >>
> > > >>     // return result;
> > > >> }
> > > >>
> > > >>
> > > >> When I uncomment ca_poll, ca_context_destroy, ca_get (plus its
> > > >> pend_io), or sleep, it works!
> > > >>
> > > >> So a ca_poll, a ca_context_destroy, or a ca_pend_io AFTER a ca_get
> > > >> seems to flush the send buffer - but a ca_pend_io or a ca_flush_io
> > > >> alone does not.
> > > >>
> > > >> At the least, the client example generated by makeBaseApp should
> > > >> have a ca_put added and be made to work without the ca_get - but
> > > >> perhaps this could be considered a bug.
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>
> > > >
>