Subject: Re: epicsMessageQueue Linux PREEMPT_RT
From: Till Straumann <[email protected]>
To: Andrew Johnson <[email protected]>
Cc: Eric Norum <[email protected]>, [email protected]
Date: Fri, 28 Jan 2011 19:38:27 -0600
On 01/28/2011 03:34 PM, Andrew Johnson wrote:
> On Friday 28 January 2011 12:30:59 Eric Norum wrote:
>> From the indenting style I infer that Marty Kraimer wrote the code and
>> had me commit it...
>
> ...Marty *never* uses GNU-style function definitions (putting the routine
> name at the beginning of a new line after the return type and any
> decorations), so I still think this had to be your code. He might have
> written the vxWorks implementation, but I doubt that you would have listed
> yourself as the Author if that was the case.
>
> The vxWorks implementation doesn't appear to need anything like an
> eventNode, and I'm wondering if the free-list is even necessary -- couldn't
> the eventNode just be allocated on the stack of the thread that's waiting?
> Every call to getEventNode() is followed after the epicsEventWait*() by a
> line that puts it back on the eventFreeList. I guess the important question
> is how much work it would be to create a new epicsEventId every time.
>
> - Andrew
Not sure an eventNode is necessary at all (see my earlier message).

The following example is incomplete (only the receiver blocks, the queue
length must be a power of two, etc.):
#define MOD_Q_LEN(x) ((x) & ((1<<LD_MQ_LEN)-1))

struct MQ {
    epicsMutexId mtx;
    void        *buf[ (1<<LD_MQ_LEN) ];
    unsigned     hd, tl, n_waiting;
    epicsEventId sync;
};

int mqSend( struct MQ *q, void *msg )
{
    epicsMutexLock( q->mtx );
    if ( (q->tl - q->hd) >= (1<<LD_MQ_LEN) ) {
        /* full */
        epicsMutexUnlock( q->mtx );
        return -1;
    }
    /* store message */
    q->buf[ MOD_Q_LEN( q->tl ) ] = msg;
    q->tl++;
    if ( q->n_waiting ) {
        /* blocked threads */
        q->n_waiting--;
        epicsEventSignal( q->sync );
    }
    epicsMutexUnlock( q->mtx );
    return 0;
}

void * mqRecv( struct MQ *q )
{
    void *msg;

    epicsMutexLock( q->mtx );
    /* Current implementation of epicsEventWait() using
     * pthread_cond_wait() may spuriously unblock multiple
     * threads as a result of a single epicsEventSignal().
     * See pthread_cond_signal(3)
     */
    while ( q->hd == q->tl ) {
        q->n_waiting++; /* increment count of blocked threads */
        epicsMutexUnlock( q->mtx );
        epicsEventWait( q->sync );
        epicsMutexLock( q->mtx ); /* re-acquire before re-testing */
    }
    msg = q->buf[ MOD_Q_LEN( q->hd ) ];
    q->hd++;
    epicsMutexUnlock( q->mtx );
    return msg;
}

FWIW
-- Till