EPICS Controls Argonne National Laboratory

Experimental Physics and
Industrial Control System


Subject: Re: RTEMS stack sizes
From: Eric Norum <[email protected]>
To: Kate Feng <[email protected]>
Cc: tech-talk Techtalk <[email protected]>
Date: Thu, 22 Jan 2009 09:52:52 -0600
Would it make sense to provide an environment variable (EPICS_THREAD_STACK_SIZE_MULTIPLIER?) or several such variables (EPICS_THREAD_STACK_SIZE_SMALL, _MEDIUM, _LARGE) to allow the default values to be overridden?

On Jan 22, 2009, at 9:48 AM, Kate Feng wrote:

Eric Norum wrote:
Right -- a huge FLNK chain will eat up lots of stack. We found this when we tried to FLNK 128 records....

I'm trying to find a size that's "good enough" for reasonable cases. The current RTEMS sizes are about the same as those of vxWorks on non-68k architectures and about twice those of vxWorks on 68k machines.
It is a good idea to find a size that's "good enough" as a default. It would also be nice if those sizes could be custom-defined in configure/os/CONFIG.Common.RTEMS-bsp for the varying needs of applications.

Kate



On Jan 22, 2009, at 9:23 AM, Jeff Hill wrote:

Note that the CAS-client thread calls db_put_field, and as I recall the stack usage by db_put_field is dependent on the longest chain of records
that might be manipulated in the database (due to the recursive
implementation of the database).

-----Original Message-----
From: [email protected] [mailto:[email protected] ]
On Behalf Of Eric Norum
Sent: Wednesday, January 21, 2009 8:01 PM
To: tech-talk Techtalk
Subject: RTEMS stack sizes

I'd like to solicit information from the EPICS community on the stack
allocation for RTEMS IOCs.  Please check the stack usage of some of
your RTEMS IOCs and send me the results. I would like to find out
whether it is possible and desirable to reduce the stack sizes, and
thereby the memory footprint for each client.

Here's the stack usage for one of the IOCs here (a ColdFire uCDIMM
5282):

  PID   PRI STATE   %CPU %STK  NAME
09010001 255 READY   35.8    0  IDLE
0a01004c 180 READY   10.6   18  CAS-event
0a01001c 134 Wmutex   9.7   23  scan0.2
0a010004  10 Wevnt    6.0   19  ntwk
0a010006  10 Wevnt    6.0   22  FECr
0a01005b 180 READY    4.2   18  CAS-event
0a010044 180 READY    3.5   19  CAS-event
0a010071 180 READY    3.5   19  CAS-event
0a010005  10 Wevnt    3.2   22  FECt
0a010031 179 READY    3.1   19  save_restore
0a010020 183 READY    3.0   22  CAS-UDP
0a01001a 136 Wmutex   2.2   23  scan1
0a010019 137 Wmutex   1.5   23  scan2
0a01003f 180 READY    1.3   16  CAS-event
0a010021 180 READY    1.2   18  CAS-event
0a010007  10 Wevnt    0.9   22  RPCd
0a01002e 179 READY    0.7   20  CAS-client
0a01006a 179 READY    0.7   20  CAS-client
0a01002c 179 Wevnt    0.4   20  CAS-client
0a010048 179 Wevnt    0.3   20  CAS-client
0a010018 138 Wmutex   0.3   23  scan5
0a010042 179 Wevnt    0.3   20  CAS-client
0a01004e 179 Wevnt    0.1   20  CAS-client
0a01001d 133 Wmutex   0.1   24  scan0.1
0a010053 180 READY    0.0   19  CAS-event
0a01000d 149 Wmutex   0.0   20  L0
0a01001b 135 Wmutex   0.0   24  scan0.5
0a010011 129 Wmutex   0.0   23  timerQueue
0a010073 180 READY    0.0   19  CAS-event
0a01002b 179 Wevnt    0.0   20  CAS-client
0a010012 140 Wmutex   0.0   23  cbLow
0a010017 139 Wmutex   0.0   20  scan10
0a01005c 108 READY    0.0   16  SPY
0a01000e 109 Wevnt    0.0   17  timeBroadcastMonitor
0a010027 148 DELAY    0.0   23  seqAux
0a010067 180 Wmutex   0.0   24  CAS-event
0a010065 180 Wmutex   0.0   24  CAS-event
0a010066 179 Wevnt    0.0   23  CAS-client
0a01005f 180 Wmutex   0.0   19  CAS-event
0a010062 179 Wevnt    0.0   20  CAS-client
0a010078 179 Wevnt    0.0   23  CAS-client
0a010076 179 Wevnt    0.0   23  CAS-client
0a010074 179 Wevnt    0.0   20  CAS-client
0a01006d 180 Wmutex   0.0   19  CAS-event
0a010057 180 Wmutex   0.0   24  CAS-event
0a010043 179 Wevnt    0.0   23  CAS-client
0a01004b 179 Wevnt    0.0   20  CAS-client
0a010054 179 Wevnt    0.0   20  CAS-client
0a010052 179 Wevnt    0.0   23  CAS-client
0a010051 180 Wmutex   0.0   19  CAS-event

As you can see, no thread is using more than one quarter of its
allocated stack space.

The current stack allocations are:
   epicsThreadStackSmall:  stackSize =  8000
   epicsThreadStackMedium: stackSize = 12000
   epicsThreadStackBig:    stackSize = 16000

Perhaps these could be reduced by 30 to 50 percent???


Each client results in the allocation of a 'Big' stack for the
CAS-client thread and a 'Medium' stack for the CAS-event thread.

--
Eric Norum <[email protected]>
Advanced Photon Source
Argonne National Laboratory
(630) 252-4793











Replies:
RE: RTEMS stack sizes Dalesio, Leo
Re: RTEMS stack sizes Kate Feng
References:
RTEMS stack sizes Eric Norum
Re: RTEMS stack sizes Eric Norum
Re: RTEMS stack sizes Kate Feng
