Experimental Physics and Industrial Control System
With the win32 implementation of epicsMutex I solved this problem as
follows. This solution uses the win32 InterlockedIncrement and
InterlockedDecrement system calls, which probably have no direct
equivalent in the versions of POSIX commonly available at this time,
but there may be other similar solutions using a global mutex on
POSIX. The epicsMutexLockWithTimeout call is almost never used in
base. It seems that if you avoid any high-overhead calls prior to the
first pthread_mutex_trylock(), then you may be able to get away with
some additional global locking overhead in the perhaps less common
situation where a thread doesn't get the lock immediately when it asks
for it. This solution also has the drawback that an event semaphore,
in addition to the mutex semaphore, must be allocated for each
epicsMutex object.
With this type of code we of course need to be very aware of which
system calls cause an SMP cache flush to occur. For example, this
solution assumes that the LeaveCriticalSection() call causes an SMP
cache flush to occur.
Interested in your comments on this approach.
Jeff
epicsShareFunc epicsMutexLockStatus epicsShareAPI
epicsMutexLockWithTimeout ( epicsMutexId pSem, double timeOut )
{
    static const unsigned mSecPerSec = 1000u;
    if ( ! TryEnterCriticalSection ( &pSem->os.cs.mutex ) ) {
        DWORD begin = GetTickCount ();
        DWORD delay = 0;
        DWORD tmo;
        if ( timeOut <= 0.0 ) {
            tmo = 1;
        }
        else if ( timeOut >= INFINITE / mSecPerSec ) {
            tmo = INFINITE - 1;
        }
        else {
            tmo = ( DWORD ) ( ( timeOut * mSecPerSec ) + 0.5 );
            if ( tmo == 0 ) {
                tmo = 1;
            }
        }
        assert ( pSem->os.cs.waitingCount < 0x7FFFFFFF );
        // this causes a cache flush on MP systems
        InterlockedIncrement ( &pSem->os.cs.waitingCount );
        while ( ! TryEnterCriticalSection ( &pSem->os.cs.mutex ) ) {
            DWORD current;
            WaitForSingleObject ( pSem->os.cs.unlockSignal, tmo - delay );
            current = GetTickCount ();
            if ( current >= begin ) {
                delay = current - begin;
            }
            else {
                // GetTickCount() wrapped around
                delay = ( 0xffffffff - begin ) + current + 1;
            }
            if ( delay >= tmo ) {
                assert ( pSem->os.cs.waitingCount > 0 );
                // this causes a cache flush on MP systems
                InterlockedDecrement ( &pSem->os.cs.waitingCount );
                return epicsMutexLockTimeout;
            }
        }
        assert ( pSem->os.cs.waitingCount > 0 );
        // this causes a cache flush on MP systems
        InterlockedDecrement ( &pSem->os.cs.waitingCount );
    }
    return epicsMutexLockOK;
}
epicsShareFunc void epicsShareAPI
epicsMutexUnlock ( epicsMutexId pSem )
{
    LeaveCriticalSection ( &pSem->os.cs.mutex );
    if ( pSem->os.cs.waitingCount ) {
        DWORD status = SetEvent ( pSem->os.cs.unlockSignal );
        assert ( status );
    }
}
> -----Original Message-----
> From: Marty Kraimer [mailto:[email protected]]
> Sent: Monday, December 02, 2002 2:27 PM
> To: Jeff Hill; Johnson, Andrew N.; [email protected]
> Subject: POSIX recursive mutex
>
> A problem using pthread_mutex to implement epicsMutex is that
> pthreads does not
> provide a pthread_mutex_lock with a timeout. Thus the only way
> I see to
> implement epicsMutexLockWithTimeout is to do something like
>
> timeleft = timeout;
> while ( timeleft > 0.0 ) {
>     if ( pthread_mutex_trylock(...) == success ) break;
>     epicsThreadSleep(shortTime);
>     timeleft -= shortTime;
> }
>
> I am not sure this is a good idea. What do you think?
>
> Marty
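Marty's polling loop above, filled out into compilable form for reference. The 10 ms shortTime and the use of nanosleep() in place of epicsThreadSleep() are illustrative choices of mine, not anything from base:

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <time.h>

/* A compilable version of the polling loop quoted above. The 10 ms
 * poll interval is an arbitrary choice; nanosleep stands in for
 * epicsThreadSleep. */
static int pollingTimedLock ( pthread_mutex_t *pMutex, double timeout )
{
    static const double shortTime = 0.01;   /* 10 ms poll period */
    double timeleft = timeout;
    while ( timeleft > 0.0 ) {
        if ( pthread_mutex_trylock ( pMutex ) == 0 )
            return 0;                       /* got the lock */
        struct timespec ts = { 0, ( long ) ( shortTime * 1e9 ) };
        nanosleep ( &ts, NULL );
        timeleft -= shortTime;
    }
    return ETIMEDOUT;
}
```

The obvious costs are the ones Marty is worried about: a contended lock is acquired up to shortTime late, and every blocked thread burns a wakeup per poll period.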