Subject: Re: [EXTERNAL] Re: How to share asyn's queue thread among ports?
From: Klemen Vodopivec via Tech-talk <[email protected]>
To: "Konrad, Martin" <[email protected]>, "[email protected]" <[email protected]>
Date: Mon, 3 Jun 2019 13:09:09 -0400
Hi Martin,

I understand your point and agree that 500 idling threads would generally not be a problem for a Linux OS. It is our specific application that cannot afford a lot of context switching. It is data acquisition software that also handles detector communication, since both share the same physical connection. Data throughput peaks above 100 MB/s, or about 12 million events/s. The software design is similar to areaDetector, but it deals with event packets around 4 kB in size, so the number of interrupts can get quite high, to the point where we coalesce packets to minimize context switching.

This is where it gets interesting. We use asyn's generic pointer interface for data as well as the asynInt32 interface for detector parameters; with 500 detectors that means 500 plugins receiving 100 MB/s. In theory all plugins receive the packets and decide which ones they are interested in, but in practice we filter packet types and also disconnect plugins when they are not needed. There are other CPU-intensive data processing plugins where we cannot afford to lose events.
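The coalescing works roughly along these lines (a simplified sketch, not our actual code; the Coalescer/Packet names are made up for illustration): small packets are accumulated into a batch and handed to subscriber callbacks in one go, so downstream threads wake up once per batch instead of once per ~4 kB packet.

// Illustrative packet coalescing sketch (names invented for the example).
#include <cstdint>
#include <cstdio>
#include <functional>
#include <mutex>
#include <vector>

struct Packet { std::vector<uint8_t> payload; };

class Coalescer {
public:
    using Callback = std::function<void(const std::vector<Packet>&)>;

    explicit Coalescer(size_t batchSize) : batchSize_(batchSize) {}

    void subscribe(Callback cb) { subscribers_.push_back(std::move(cb)); }

    // Called by the network reader for every incoming packet.
    void push(Packet p) {
        std::vector<Packet> batch;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            pending_.push_back(std::move(p));
            if (pending_.size() < batchSize_)
                return;              // keep accumulating
            batch.swap(pending_);    // hand off a full batch
        }
        for (auto& cb : subscribers_)
            cb(batch);               // one callback per batch, not per packet
    }

private:
    const size_t batchSize_;
    std::mutex mutex_;
    std::vector<Packet> pending_;
    std::vector<Callback> subscribers_;
};

int main() {
    Coalescer c(64);
    c.subscribe([](const std::vector<Packet>& batch) {
        std::printf("got a batch of %zu packets\n", batch.size());
    });
    for (int i = 0; i < 256; ++i)
        c.push(Packet{std::vector<uint8_t>(4096)});  // ~4 kB packets
}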
I just threw in a few figures, but I can share our ICALEPCS paper for more details. We have been running this way for several years now, but we are looking into changes and are evaluating our options. A careful redesign with a large number of threads for detector communication is one option.
-- Klemen

On 5/31/19 1:42 PM, Konrad, Martin wrote:
Hi Klemen,

> The obvious solution is to use asyn's blocking mode, but there's up to 500 ports resulting in as many threads.

Could you please elaborate on why you think this is a problem? It might not look pretty, but in practice the CPU overhead might actually be small. Asyn uses threads because it needs to wait for asynchronous operations to complete. The pattern is generally: process some data (which takes a very short time), then go to sleep and wait for the next event/interrupt. Whenever you send a thread to sleep, the scheduler performs a context switch.

Let's say you have 1000 events/s to process with a single thread. You wake up 1000 times/s and go to sleep 1000 times/s, resulting in 1000 context switches/s. If you process the 1000 events with 1000 threads, each of them wakes up once per second and goes to sleep, again 1000 context switches/s in total. I'm simplifying here, but the CPU overhead might not be as bad as you might think at first. Memory overhead, of course, is a different story.

-Martin
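As a rough illustration of that pattern (not asyn's actual implementation; PortWorker is a made-up name), a per-port worker thread could look like this: it sleeps on a condition variable, handles a queued request quickly, and goes back to sleep, so the sleep/wake count tracks the request rate, not the thread count.

// Illustrative per-port worker sketch (names invented for the example).
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class PortWorker {
public:
    PortWorker() : thread_([this] { run(); }) {}

    ~PortWorker() {
        { std::lock_guard<std::mutex> lock(mutex_); stop_ = true; }
        cv_.notify_one();
        thread_.join();
    }

    // Queue a request; this wakes the worker (one context switch per request).
    void queueRequest(std::function<void()> req) {
        { std::lock_guard<std::mutex> lock(mutex_); queue_.push(std::move(req)); }
        cv_.notify_one();
    }

private:
    void run() {
        std::unique_lock<std::mutex> lock(mutex_);
        for (;;) {
            cv_.wait(lock, [this] { return stop_ || !queue_.empty(); });
            if (stop_ && queue_.empty())
                return;
            auto req = std::move(queue_.front());
            queue_.pop();
            lock.unlock();
            req();          // short processing, then back to sleep
            lock.lock();
        }
    }

    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    bool stop_ = false;
    std::thread thread_;    // declared last so the other members exist before run() starts
};

int main() {
    PortWorker worker;      // with 500 ports you would have 500 of these
    for (int i = 0; i < 5; ++i)
        worker.queueRequest([i] { std::printf("handled request %d\n", i); });
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}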