
Subject: RE: Possible deadlock issue with asyn layer?
From: Mark Rivers <[email protected]>
To: "Davis, Mark" <[email protected]>, "Pearson, Matthew R." <[email protected]>
Cc: "[email protected]" <[email protected]>
Date: Wed, 18 Apr 2018 19:22:40 +0000
Hi Mark,

> At what points in the described sequences are locks acquired and released (the scan lock, the asyn port lock, and any other locks)?

When you say "asyn port lock" are you referring to the lock that is done with asynPortDriver::lock()?  I assume so.  If so, that lock is part of asynPortDriver and is not known to the asynManager software.

      > Are changes to a port's Work Queue protected by the asyn port lock, or does that use a different lock?

That is done by asynManager, and so uses a lock that is part of asynManager, not the lock in asynPortDriver.

      > When processing a new setting, does the asyn port thread still have to call dbScanLock()?

I'm not sure I understand the question.  
  - What code are you referring to when you say "processing a new setting"?  
  - When you say "asyn port thread" are you now talking about a driver created with ASYN_CANBLOCK, so that asynManager created a port thread?  I'll assume so.

Let's take a specific example of an output record with asynInt32 device support.  The asyn port thread is the code that executes the processCallbackOutput() function in devAsynInt32.c.  It executes when the port thread's queue processing reaches the element added by pasynManager->queueRequest().

static void processCallbackOutput(asynUser *pasynUser)
{
    devPvt *pPvt = (devPvt *)pasynUser->userPvt;
    dbCommon *pr = pPvt->pr;
    static const char *functionName="processCallbackOutput";

    pPvt->result.status = pPvt->pint32->write(pPvt->int32Pvt, pPvt->pasynUser,pPvt->result.value);
    pPvt->result.time = pPvt->pasynUser->timestamp;
    pPvt->result.alarmStatus = pPvt->pasynUser->alarmStatus;
    pPvt->result.alarmSeverity = pPvt->pasynUser->alarmSeverity;
    if(pPvt->result.status == asynSuccess) {
        asynPrint(pasynUser, ASYN_TRACEIO_DEVICE,
            "%s %s::%s process value %d\n",pr->name, driverName, functionName,pPvt->result.value);
    } else {
       asynPrint(pasynUser, ASYN_TRACE_ERROR,
           "%s %s::%s process error %s\n",
           pr->name, driverName, functionName, pasynUser->errorMessage);
    }
    if(pr->pact) callbackRequestProcessCallback(&pPvt->processCallback,pr->prio,pr);
}

The first thing it does is call the asynInt32::write function in your driver.  When that returns it sets some variables in devPvt.  Then, if the driver is asynchronous (i.e. ASYN_CANBLOCK was set), it calls callbackRequestProcessCallback() to finish record processing.  Note that it does not call dbScanLock because it is not modifying the record.

I think that if you set ASYN_CANBLOCK then you will not have the deadlock.  As I said, that is what most drivers use, and I have not had previous reports of deadlocks in that case.
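For reference, here is a minimal sketch of where that flag is passed, assuming a hypothetical driver class myDriver derived from asynPortDriver (the exact base-class constructor argument list depends on your asyn version, so treat this purely as an illustration):

#include <asynPortDriver.h>

/* Hypothetical driver, shown only to illustrate passing ASYN_CANBLOCK.
 * With this flag set, asynManager creates a port thread for the driver and
 * queues requests to it instead of calling the driver synchronously from
 * the thread that is processing the record. */
class myDriver : public asynPortDriver {
public:
    myDriver(const char *portName)
        : asynPortDriver(portName,
                         1,                                /* maxAddr */
                         asynInt32Mask | asynDrvUserMask,  /* interfaceMask */
                         asynInt32Mask,                    /* interruptMask */
                         ASYN_CANBLOCK,                    /* asynFlags */
                         1,                                /* autoConnect */
                         0,                                /* priority (0 = default) */
                         0)                                /* stackSize (0 = default) */
    {
        createParam("MY_VALUE", asynParamInt32, &myValueIndex_);  /* hypothetical parameter */
    }
private:
    int myValueIndex_;
};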

Mark





-----Original Message-----
From: Davis, Mark [mailto:[email protected]] 
Sent: Wednesday, April 18, 2018 12:22 PM
To: Mark Rivers <[email protected]>; Pearson, Matthew R. <[email protected]>
Cc: [email protected]
Subject: Re: Possible deadlock issue with asyn layer?


I had not considered just how ASYN_CANBLOCK might affect when the various lock/unlock functions are called.  But now that I am looking at that, it is not clear what the effects are.

Could you fill in the blanks a bit?  The easiest thing might be to use Figure 1 in the asynDriver.html as a reference for the following:

At what points in the described sequences are locks acquired and released (the scan lock, the asyn port lock, and any other locks)?

      Are changes to a port's Work Queue protected by the asyn port lock, or does that use a different lock?

      When processing a new setting, does the asyn port thread still have to call dbScanLock()?

Unless the dbScanLock is NOT needed by the asyn port thread, the only scenario I can think of that would eliminate the problem and would not conflict with the sequence shown in Figure 1 in the asynDriver.html document is this:  The lock that protects the Work Queue is NOT the same as the asyn port lock AND it does NOT have to be held at the same time as the dbScanLock.  This would mean that the asyn port thread did NOT need to simultaneously acquire the same two locks as either the application/client thread or the one launched by the driver to get and process new readings from the device.

However, if the asyn port lock IS used to protect the Work Queue, then the same race condition still exists between any thread that writes to one of the output records with the READBACK tag and the driver-launched thread that periodically gets and processes new readings.
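Reduced to its simplest form, that is the classic opposite-lock-order deadlock.  Here is a minimal sketch of what I mean (hypothetical names standing in for the real dbScanLock and asyn port lock; this is NOT the actual EPICS/asyn code):

#include <mutex>

std::mutex scanLock;   /* stands in for the record's dbScanLock mutex        */
std::mutex portLock;   /* stands in for the asyn lock on the driver instance */

void clientWriteToOutputRecord()   /* CA-client thread processing the record */
{
    std::lock_guard<std::mutex> first(scanLock);   /* 1st: record scan lock */
    std::lock_guard<std::mutex> second(portLock);  /* 2nd: asyn port lock, to call writeXxx() */
    /* ... driver writeXxx() runs here ... */
}

void driverReadbackUpdate()        /* thread launched by the driver */
{
    std::lock_guard<std::mutex> first(portLock);   /* 1st: asyn port lock (driver lock()) */
    std::lock_guard<std::mutex> second(scanLock);  /* 2nd: scan lock, taken inside callParamCallbacks() */
    /* ... post the new reading to the output record ... */
}

/* If each thread has acquired its first lock, both block forever on the second. */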


NOTE:  If we specify ASYN_CANBLOCK and it still locks up, that will of course verify that using the blocking-mode setting doesn't help. But if it DOESN'T lock up, that might simply mean that the timing was changed enough that it is much more difficult to reproduce.  At the moment I am just trying to make sure I understand the normal processing sequences so if/when I am able to experiment a bit I will know what experiments to make and what details to look at if/when it locks up.


Mark Davis
NSCL/FRIB


On 4/16/2018 4:26 PM, Mark Rivers wrote:
> I was about to say the same thing as Matt.  If you set ASYN_CANBLOCK 
> then your flow for writes would be changed
>
>> For writes, I am assuming the following sequence (all done by a CAC-client thread in our case):
>>     - write to the output record
>>     - call dbScanLock(pRec)
>>     - call pRec->process(pRec)
> pRec->process sets PACT and returns
> dbScanUnlock(pRec)
>>     - call to the write func specified in the Device Support Entry Table (which, in this case, calls an asyn layer function to process the new setting)
>>     - call function in the asyn layer (which calls its own lock() func to get the asyn-layer lock for a specific instance of the driver)
>>     - call our driver's writeXxx() function
> See what happens if you leave the READBACK tag in the database but change your driver to ASYN_CANBLOCK.  I suspect the problem may go away.  Most drivers I and others have written use ASYN_CANBLOCK, which could explain why the deadlock has not been observed before.
>
> Mark
>
>
>
> -----Original Message-----
> From: Pearson, Matthew R. [mailto:[email protected]]
> Sent: Monday, April 16, 2018 3:19 PM
> To: [email protected]
> Cc: [email protected]; Mark Rivers <[email protected]>
> Subject: Re: Possible deadlock issue with asyn layer?
>
> Hi Mark,
>
> It’s been several years since I looked into this so I’m rusty, but I think if ASYN_CANBLOCK is set then the dbScanLock() should be released after the command to the driver has been placed on the port thread queue, so that database processing can continue while the asynManager port thread is handling the call to the driver and the device IO.
>
> If ASYN_CANBLOCK is not set then the dbScanLock() is held while the driver is called. Then the read and the write could be interleaved so that this sequence of events happens:
>
> Driver Thread:
> 1) Driver reads new value from device
> 2) Driver calls lock()
> Database Thread:
> 3) The output record is processed and dbScanLock() is called
> Driver Thread:
> 4) Driver calls callParamCallbacks(), which results in a call to dbScanLock(), which blocks waiting for 3) to finish
> Database Thread:
> 5) The output driver function calls lock(), which blocks waiting for 2) to finish.
>
> I’ve seen similar deadlocks before when ASYN_CANBLOCK is not set. However, I’m not familiar with how asyn:READBACK is implemented so I could be missing something there. Did removing the use of asyn:READBACK change the behavior?
>
> Cheers,
> Matt
>
>
>> On Apr 16, 2018, at 2:15 PM, Mark Davis <[email protected]> wrote:
>>
>> NOTE:  Mark Rivers seems to have just about confirmed my assumptions, but since I wrote all this, I am sending it anyway in case it is of use to someone else in the future who stumbles across this discussion and is looking for a bit more detail on the issue that was discussed (and why it can happen).
>>
>> Mark Davis
>> ----------------------------------------------------------------------------
>>
>> Hi Matt,
>>
>> The ASYN_CANBLOCK flag is definitely NOT set (on purpose).
>>
>> As I understand it, the purpose of that flag is to tell the asyn layer that calls to driver functions from the asyn layer (e.g. the writeXxx() functions) might block.  In general, this is used when the writeXxx() functions attempt to communicate with one or more devices directly, using some medium/interface that may cause the calling thread to block waiting for some I/O operations to complete.
>>
>> When set, this causes the asyn layer to create a queue for write operations and a thread to service that queue (i.e. remove a value from the queue and call the appropriate writeXxx() function in the driver), so if one of the driver's writeXxx() functions does block, the only thread that is affected is the one servicing the queue for that driver.
>>
>> Our driver's writeXxx() functions never do any I/O so they never block:  They simply update the value to be processed by the thread spawned by the driver's constructor.  And as with most (all?) calls to the driver functions, our driver's writeXxx() functions will never be called until the asyn layer has acquired the asyn lock for the driver instance (so as long as our driver always obtains the same lock before calling asyn layer functions, there are no concurrency issues).
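>> A minimal sketch of the shape of such a writeInt32() (hypothetical member names, not our actual driver code), assuming a class derived from asynPortDriver:
>>
>> asynStatus myDriver::writeInt32(asynUser *pasynUser, epicsInt32 value)
>> {
>>     /* No device I/O here: the asyn layer already holds the driver's asyn
>>      * lock when this is called, so we just cache the value (and update the
>>      * parameter library) for the driver-spawned thread to send later. */
>>     int function = pasynUser->reason;
>>     pendingValue_ = value;       /* hypothetical members read later by */
>>     pendingWrite_ = true;        /* the driver thread                  */
>>     setIntegerParam(function, value);
>>     callParamCallbacks();
>>     return asynSuccess;
>> }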
>>
>> The key assumption in my theory is when the calls to dbScanLock() occur.
>>
>> For writes, I am assuming the following sequence (all done by a CAC-client thread in our case):
>>     - write to the output record
>>     - call dbScanLock(pRec)
>>     - call pRec->process(pRec)
>>     - call to the write func specified in the Device Support Entry Table (which, in this case, calls an asyn layer function to process the new setting)
>>     - call function in the asyn layer (which calls its own lock() func to get the asyn-layer lock for a specific instance of the driver)
>>     - call our driver's writeXxx() function
>>
>> For reads:
>>     - driver reads new values from device
>>     - driver calls lock() to get the asyn-layer lock for itself
>>     - driver calls setXxxParam() and/or setParamStatus() for each new parameter value
>>     - driver calls callParamCallbacks(), which calls functions in asyn's devEpics interface, which call dbScanLock(pRec) and post the new value to the record
>>
>> If all this is correct, then writes require getting the scan lock 1st and the asyn lock 2nd, while reads require getting the asyn lock 1st and the scan lock 2nd.  And telling the asyn layer that our driver might block wouldn't change the order the locks are acquired:  It would just create a queue and another thread to manage the calls to our driver's writeXxx() functions.
>>
>> So far, I have found nothing that contradicts these assumptions, but I have not dug deep enough in to all the code to be sure all my assumptions are correct.
>>
>> Mark
>>
>>
>>
>> On 4/16/2018 11:57 AM, Pearson, Matthew R. wrote:
>>> Hi,
>>>
>>> Is it possible that the ASYN_CANBLOCK flag is not set when the asynPortDriver object is created?
>>>
>>> Cheers,
>>> Matt
>>>
>>>> On Apr 16, 2018, at 11:32 AM, Mark Davis <[email protected]> wrote:
>>>>
>>>> We have been experiencing a deadlock problem with one of our custom drivers based on the asynPortDriver class, and after reviewing the code and information gained via the gdb debugger, I have a theory regarding the cause that I would like others to review and tell me if there are any flaws they can spot (either in my logic or my assumptions).
>>>>
>>>>
>>>> First, a summary of the driver that is exhibiting the problem:
>>>>
>>>>     - Non-blocking driver based on asynPortDriver class
>>>>     - One device per driver instance
>>>>     - Communication with device handled by a thread launched by each instance of the driver
>>>>     - All asyn calls by driver thread surrounded by lock()/unlock() calls
>>>>     - info(asyn:READBACK, "1") tag used in some of the output records
>>>>     - writeXxx() calls don't do any I/O:  They save the new value to be processed later
>>>>         (and sometimes call setXxxParam(), setParamStatus(), and
>>>> callParamCallbacks() functions)
>>>>
>>>>     The driver thread runs periodically to update readings and process new settings as follows:
>>>>        - reads values from the controller
>>>>        - calls lock()
>>>>        - calls setXxxParam() and setParamStatus() as needed
>>>>        - calls callParamCallbacks() to update the records affected by the changes (acquiring the scan lock for each record while it is processed)
>>>>        - calls unlock()
>>>>        - sends new settings to the controller (using lock()/unlock() when calling any asyn functions or accessing driver values that are touched by the writeXxx() functions)
>>>>
>>>>     Concurrently with the driver thread's usual processing, writes to output records occur.
>>>>     The processing of each write (I assume) includes the following sequence of events:
>>>>        - The thread that did the write (a CAC-client thread, in this case) gets the scan lock for the record and begins processing the record
>>>>        - Processing of the record includes calls to asyn layer functions, which requires acquiring the asyn lock for the driver instance before calling one of its writeXxx() functions
>>>>        - The asyn lock for the driver is released
>>>>        - The scan lock for the record is released
>>>>
>>>> What is happening:
>>>>
>>>>     The driver thread sometimes blocks indefinitely waiting to 
>>>> acquire the scan lock for an output record it needs to update
>>>>
>>>>
>>>> My theory as to what can cause this:
>>>>
>>>>     What we know:
>>>>        - The driver thread blocks indefinitely waiting for the scan lock for an output record that includes the info(asyn:READBACK, "1") tag
>>>>          (see extract from gdb output below and the template for the record)
>>>>        - The lockup of the driver thread is always preceded by a CA client write to an output record for one of the driver's parameters
>>>>           (haven't yet confirmed it is the same record that the 
>>>> driver is waiting for when it hangs)
>>>>
>>>>     What I believe is happening:
>>>>        - Sometime after the driver calls lock() and when it blocks waiting for the scan lock for an output record, a CA client thread writes to the same record.  It successfully acquires the scan lock for the record, but then blocks waiting to acquire the asyn lock for the driver instance (which the driver thread already has).
>>>>
>>>>
>>>> Obviously, if I am correct, the easiest way to avoid the problem is to eliminate the use of the info(asyn:READBACK, "1") tag (at least for any case where this could happen).  We don't actually need those tags for this database anymore, so that is something we will be trying shortly.
>>>>
>>>> But can anyone point out a mistake in my reasoning or my assumptions?  Is this something we need to be aware of when using the info tag?
>>>>
>>>> Mark Davis
>>>> NSCL/FRIB Control Systems Software Engineer [email protected]
>>>>
>>>> ========================== from gdb ===================================
>>>>
>>>> #0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
>>>> #1  0x00007f294a554494 in _L_lock_952 () from /lib/x86_64-linux-gnu/libpthread.so.0
>>>> #2  0x00007f294a5542f6 in __GI___pthread_mutex_lock (mutex=0x18db030) at ../nptl/pthread_mutex_lock.c:114
>>>> #3  0x00007f294adc17a3 in mutexLock (id=0x18db030) at ../../../src/libCom/osi/os/posix/osdMutex.c:46
>>>> #4  0x00007f294adc197f in epicsMutexOsdLock (pmutex=0x18db030) at ../../../src/libCom/osi/os/posix/osdMutex.c:130
>>>> #5  0x00007f294adb85fb in epicsMutexLock (pmutexNode=0x18db070) at ../../../src/libCom/osi/epicsMutex.cpp:143
>>>> #6  0x00007f294b019733 in dbScanLock (precord=0xe9d7d0) at ../../../src/ioc/db/dbLock.c:265
>>>> #7  0x00007f294bf8e18d in interruptCallbackOutput (drvPvt=0x1d08640, pasynUser=0x1d08f98, value=0) at ../../asyn/devEpics/devAsynUInt32Digital.c:500
>>>> #8  0x00007f294bf74540 in paramList::uint32Callback (this=0xb3cd50, command=358, addr=0, interruptMask=4294967295) at ../../asyn/asynPortDriver/asynPortDriver.cpp:628
>>>> #9  0x00007f294bf74adc in paramList::callCallbacks (this=0xb3cd50, addr=0) at ../../asyn/asynPortDriver/asynPortDriver.cpp:750
>>>> #10 0x00007f294bf76417 in asynPortDriver::callParamCallbacks (this=0xb3ce60, list=0, addr=0) at ../../asyn/asynPortDriver/asynPortDriver.cpp:1510
>>>> #11 0x00007f294bf763a7 in asynPortDriver::callParamCallbacks (this=0xb3ce60) at ../../asyn/asynPortDriver/asynPortDriver.cpp:1496
>>>>
>>>> ...
>>>> ____________________________________________________________________________________
>>>> Jump to any frame in the callstack to examine arguments (frame 6, here):
>>>>
>>>> (gdb) f 6
>>>> #6  0x00007f294b019733 in dbScanLock (precord=0xe9d7d0) at ../../../src/ioc/db/dbLock.c:265
>>>> 265     ../../../src/ioc/db/dbLock.c: No such file or directory.
>>>> ____________________________________________________________________________________
>>>> (gdb) p precord
>>>>
>>>> $1 = (dbCommon *) 0xe9d7d0
>>>> (gdb) p *(dbCommon *)precord
>>>>
>>>> $2 = {name = "MPS_FPS:MSTR_N0001:Slave11_MaskBit42-Sel", '\000' <repeats 20 times>,
>>>>    desc = '\000' <repeats 40 times>, asg = '\000' <repeats 28 times>, scan = 0, pini = 0, phas = 0, <snipped>
>>>>
>>>> ____________________________________________________________________________________
>>>>
>>>> record(bo, "$(SYS)_$(SUB):$(DEV)_$(INST):Slave$(SLAVE)_MaskBit$(BIT)-Sel")
>>>> {
>>>>    field(DTYP, "asynUInt32Digital")
>>>>    field(OUT,  "@asynMask($(ASYN_PORT),0,$(ASYN_MASK),$(TIMEOUT)) slave$(SLAVE)Mask$(MASK_REG)_Set")
>>>>    field(ZNAM,"Mask")
>>>>    field(ONAM,"Unmask")
>>>>    info(asyn:READBACK, "1")
>>>>    info(autosaveFields, "VAL")
>>>> }
>>>>
>>>>
>>

