Subject: RE: Does EPICS Base support multi-thread on vxWorks 6.3?
From: "Jeff Hill" <[email protected]>
To: <[email protected]>, <[email protected]>
Date: Mon, 18 Apr 2011 16:02:58 -0600
Hi Lorna,

Sorry about the delay responding; I was away on a business trip the latter part of last week.

> We developed a client based on libca (EPICS Base 3.14.9), which creates its
> own worker thread on construction. But we found it working strangely on vxWorks
> (ppc-603). Whenever a second EPICS client is created (meaning there are now
> 2 EPICS worker threads on vxWorks), the CPU usage became 100%.
>
> From the profile, we found that ca_pend_event is consuming lots of the CPU.
> Meanwhile, it seems epicsMutex is failing to do the lock for the second EPICS client??

With the ca client library, a ca client context is created either explicitly by calling a library function, or implicitly the first time that a function in the library is called. When a new thread is created, that thread chooses either to attach itself to some preexisting ca client context, or to explicitly or implicitly create a new context. Furthermore, each ca client context can choose to receive callbacks only when the thread that created the context is executing in the library, or to receive callbacks asynchronously from auxiliary library threads at any time. In single-threaded applications ca_pend_event is often called periodically to process ca background activities, but ca_pend_event is typically not called by applications executing in preemptive callback mode. Additional details are in the manual at this URL:

http://www.aps.anl.gov/epics/base/R3-14/12-docs/CAref.html

The ca_pend_event function will always block for the interval that you specify, so I would not expect 100% of the CPU to be consumed unless the delay argument passed in was close to zero. Furthermore, without stack traces it's difficult to make any judgments either for or against the assertion that the mutex isn't working correctly.
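To make the two modes concrete, here is a minimal sketch of the two common setups (untested, and assuming EPICS Base's cadef.h is on the include path; ca_context_create, ca_pend_event, ca_current_context, and ca_attach_context are the real libca entry points, but the surrounding loop structure is purely illustrative):

```c
/* Sketch only: two common ways a CA client context is set up.
 * Requires EPICS Base (cadef.h / libca); error handling abbreviated
 * with SEVCHK. */
#include <cadef.h>

/* Pattern 1: non-preemptive context. Callbacks are delivered only
 * while this thread is executing inside the library, so the
 * application must call ca_pend_event() periodically. A generous
 * delay (here 0.1 s) keeps the loop from spinning the CPU. */
void non_preemptive_poll_loop(void)
{
    SEVCHK(ca_context_create(ca_disable_preemptive_callback),
           "ca_context_create");
    for (;;) {
        ca_pend_event(0.1);   /* blocks ~0.1 s, processes CA activity */
        /* ... application work ... */
    }
}

/* Pattern 2: preemptive callback mode. Auxiliary library threads
 * deliver callbacks at any time; periodic ca_pend_event calls are
 * typically not needed. */
void preemptive_setup(void)
{
    SEVCHK(ca_context_create(ca_enable_preemptive_callback),
           "ca_context_create");
    /* A second thread that wants to share this context attaches to
     * it instead of creating its own:
     *   struct ca_client_context *ctx = ca_current_context();
     *   ... in the other thread: ca_attach_context(ctx); ...
     */
}
```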
In the attached image I think I see some indications concerning where the program uses CPU time, but I also see that only 1.20/26.59 = 4.5% is used directly by ca_pend_event; the rest presumably is consumed by the functions it calls. I do see that epicsMutex::lock is using some CPU, and also epicsMutex::tryLock. If the issue is with ca_pend_event calling epicsMutex::lock, then my first suspicion is that ca_pend_event is being called with a small timeout. Otherwise, any CPU consumed by epicsMutex::tryLock is really a mystery, because a quick search reveals that this function is never called in EPICS base. Perhaps NI uses this function?

Setting a breakpoint in epicsMutex::tryLock and within ni::monads::epics::epicsThreadDelegator::proc might be interesting. Once you have a break in the relevant location, move up and down the call stack to see who is calling these functions, how often they are calling them, and with what arguments.

Happy to answer any more detailed questions that you might have,

Jeff

______________________________________________________
With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea. It is hard to be sure where they are going to land, and it could be dangerous sitting under them as they fly overhead. -- RFC 1925