Argonne National Laboratory

Experimental Physics and
Industrial Control System


Subject: New API for batched dbGet/dbPut
From: Michael Davidsaver <>
Date: Tue, 26 Feb 2013 14:57:10 -0500
For some time now I've had the idea that it would be useful to have an API for executing a list of get/put DB operations. What would make this most useful, and most difficult to implement, would be an option to make such batch operations atomic (wrt. record locking). I've done some investigation into how this might be implemented, and I think I have a clear strategy. However, there are two questions I need answered before I can proceed.

1) Can the three functions from dbLock.h mentioned below be removed (replaced)?

2) Is the proposed API for batch operations acceptable?

The primary impediment I see to implementing atomic locking of an arbitrary list of records is the current lockset API. In particular the following function definitions:

void dbLockSetGblLock(void);
void dbLockSetGblUnlock(void);
void dbLockSetRecordLock(struct dbCommon *precord);

The way this API is defined requires that dbLockSetRecordLock() be called for each record sequentially. This is helpful when the set of records to be locked is not known ahead of time. Unfortunately it uses a global lock to avoid deadlock conditions, which also serializes all such operations.

The only way I can think of to avoid the global lock would be to allow the operation to fail if it would cause a deadlock condition. This is less than desirable since these failures would not be predictable.

So I am left wanting to replace these functions with something like:

struct lockNode {
  ELLNODE node;
  dbCommon *precord;
};

void dbScanLockMany(ELLLIST *lockNodes);
void dbScanUnlockMany(ELLLIST *lockNodes);

So far I have seen the dbLockSet* functions used only within Base, specifically dbPutFieldLink() in dbAccess.c during modification of a link field.

Enabling dbPutFieldLink() to use this new API would require parsing the new link field value before locking any records. The goal is to determine in advance whether a new DB link will (or might) be created. I think all the necessary context information can be read w/o locking the record. This would give a list of all the records which need to be locked to complete the entire operation, allowing the new locking API to be used.

Now the implementation issue is avoiding deadlocks. The best strategy I've come up with so far is to assign all locksets a global ordering. The simplest ordering is by pointer address. Each operation would start by sorting the list of records, and always lock from lowest to highest.

Of course lock set re-computations could change the ordering. My plan here is to reference count the lockSets. So a dbFieldOperations() call will first take a reference to all the necessary locksets while holding a global lock. Then it will lock all of the locksets. Finally it will check to see that the association between records and locksets hasn't changed, and retry if it has.

The next question is how to define the DB batch operation API. My current thoughts are:

struct DBOp {
  ELLNODE node;
  unsigned int op; // 0=get, 1=put
  void *userbuf;
  long *options;
  epicsUInt32 nRequest;
  void *pflin; // get only
};

DBOp* allocDBOp(void);
void freeDBOp(DBOp*);

#define DBOP_FAST   0  // meaning low latency
#define DBOP_ATOMIC 1

int dbFieldOperations(ELLLIST *dbops, unsigned long flags);

The alloc and free functions would serve to enable a free-list, and also to allow parts of the structure to be hidden (including the lockNode struct). The DBOP_* macros would be used to give the 'flags' argument.

I've done some prototypes which worked (didn't deadlock); however, I have some concerns about the time needed to acquire several locks, and the contention this would cause. I think the worst-case multi-lock latency for a single dbScanLockMany() call would be the sum of the worst-case latencies of each individual lock from other sources (plain dbScanLock). This would then be multiplied by the number of concurrent, overlapping dbScanLockMany() calls.

In cases involving a few locksets this would likely not be significant, but in complex situations it could be quite large. So perhaps some requests could be restricted?

Does anyone have thoughts on better (or other) ways to accomplish this?

Replies: Re: New API for batched dbGet/dbPut (Michael Davidsaver)