
Subject: Re: Channel Access performance with RTEMS on MVME5500
From: Heinz Junkes via Tech-talk <tech-talk at aps.anl.gov>
To: "Wells, Alex (DLSLtd,RAL,LSCI)" <alex.wells at diamond.ac.uk>
Cc: "tech-talk at aps.anl.gov" <tech-talk at aps.anl.gov>
Date: Thu, 20 Mar 2025 12:11:46 +0100
Hello Alex,
This reminds me a little of a priority problem I noticed when we switched to the POSIX API.
There is still some confusion in RTEMS about how POSIX priorities map to RTEMS priorities.
POSIX_Init is started with priority 2. In RTEMS terms that is the second-highest priority, but interpreted as a POSIX priority it is a very low one.
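
To make the two scales concrete, here is a rough illustration of my own (not anything from rtems_init.c; the exact formula is an assumption based only on the "prio 2 -> RTEMS pthread prio 253" comment in the code below):

```
/* Rough illustration only -- my own sketch, not part of rtems_init.c.
 * Assumption: POSIX prio ~= 255 - classic prio, as implied by the
 * "prio 2 ... RTEMS pthread prio 253" comment in the code below.
 *
 *   RTEMS classic:   1 = most urgent ... 255 = least urgent
 *   POSIX/pthread: 254 = most urgent ...   1 = least urgent
 */
static int classicToPosixPrio(int classicPrio)
{
    return 255 - classicPrio;   /* classic 2 -> POSIX 253, near the bottom */
}
```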

That's why it looks like this in rtems_init.c at the moment:

```
void *
POSIX_Init ( void *argument __attribute__((unused)))
{
    int                 result;
    char                *argv[3]         = { NULL, NULL, NULL };
    rtems_status_code   sc;
    struct timespec     now;
    char timeBuff[100];

    initConsole ();

    /*
     * Explain why we're here
     */
    logReset();

    /*
     * If RTEMS is used with the POSIX API, the init task
     * 'POSIX_Init()' is unfortunately given the priority '2'. This
     * corresponds to the second lowest POSIX prio (RTEMS pthread
     * prio 253). This task should have the IOCsh priority.
     */
    pthread_attr_t attr;
    struct sched_param  param;
    int policy;
    sc  = pthread_attr_init(&attr);
    assert(sc == RTEMS_SUCCESSFUL);
    sc = pthread_attr_getschedpolicy(&attr, &policy);
    assert(sc == RTEMS_SUCCESSFUL);

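    /* Scale the EPICS iocsh priority (0..99) into the scheduler's POSIX
     * priority range, so this thread continues at roughly the priority
     * an epicsThreadPriorityIocsh thread would get. */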
    param.sched_priority = (sched_get_priority_max(policy)
                            - sched_get_priority_min(policy))
                         * epicsThreadPriorityIocsh / 100;

    sc = pthread_setschedparam(pthread_self(), policy, &param);
    assert(sc == RTEMS_SUCCESSFUL);
…
```
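
As a quick sanity check that the pthread_setschedparam() call above actually took effect, one could add something like this right after it (a minimal sketch of mine, not part of the original rtems_init.c; the helper name is made up):

```
#include <stdio.h>
#include <pthread.h>
#include <sched.h>

/* Print the POSIX scheduling policy and priority of the calling thread. */
static void showMyPosixPriority(const char *label)
{
    struct sched_param param;
    int policy;

    if (pthread_getschedparam(pthread_self(), &policy, &param) == 0)
        printf("%s: policy=%d posix-priority=%d\n",
               label, policy, param.sched_priority);
}
```

Calling it at the end of POSIX_Init() and from a few EPICS threads should agree with what the shell commands below report.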

Can you check which priorities the threads are actually running at?

On the RTEMS shell, this is shown by the commands 'pthread', 'task' (and 'irq'):

I use an RTEMS shell wrapper from Michael within EPICS.

MVME6100, RTEMS 6 with the legacy stack:


IOC1> rt pthread
ID       NAME                 SHED PRI STATE  MODES    EVENTS WAITINFO
------------------------------------------------------------------------------
0b010001  _main_              UPD   24 READY  P:T:nA   NONE
0b010002  scan-0.5            UPD   93 SEM    P:T:nA   NONE
0b010003  osdNTP_Runner       UPD   27 SYSEV  P:T:nA   NONE
0b010004  taskwd              UPD  229 SEM    P:T:nA   NONE
0b010005  osdNTP_Monitor      UPD  128 SEM    P:T:nA   NONE
0b010006  NTPTimeSync         UPD   27 SEM    P:T:nA   NONE
0b010007  errlog              UPD  229 SEM    P:T:nA   NONE
0b010008                      UPD   27 SEM    P:T:nA   NONE
0b010009  timerQueue          UPD  103 SEM    P:T:nA   NONE
0b01000a  timerQueue          UPD  103 SEM    P:T:nA   NONE
0b01000b  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b01000c  awgport1 update     UPD  100 SEM    P:T:nA   NONE
0b01000d  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b01000e  awgport2 update     UPD  100 SEM    P:T:nA   NONE
0b01000f  timerQueue          UPD   77 SEM    P:T:nA   NONE
0b010010  cbLow               UPD  105 SEM    P:T:nA   NONE
0b010011  cbMedium            UPD   93 SEM    P:T:nA   NONE
0b010012  cbHigh              UPD   75 SEM    P:T:nA   NONE
0b010013  dbCaLink            UPD  128 SEM    P:T:nA   NONE
0b010014  PVAL                UPD  128 SEM    P:T:nA   NONE
0b010015  timerQueue          UPD  108 SEM    P:T:nA   NONE
0b010016  PDB-event           UPD  206 SEM    P:T:nA   NONE
0b010017  pvAccess-client     UPD  166 SEM    P:T:nA   NONE
0b010018  UDP-rx 0.0.0.0:     UPD  128 SYSEV  P:T:nA   NONE
0b010019  UDP-rx 10.0.0.2     UPD  128 SYSEV  P:T:nA   NONE
0b01001a  UDP-rx 10.0.1.2     UPD  128 SYSEV  P:T:nA   NONE
0b01001b  UDP-rx 224.0.0.     UPD  128 SYSEV  P:T:nA   NONE
0b01001c  scanOnce            UPD   85 SEM    P:T:nA   NONE
0b01001d  scan-10             UPD  103 SEM    P:T:nA   NONE
0b01001e  scan-5              UPD  100 SEM    P:T:nA   NONE
0b01001f  scan-2              UPD   98 SEM    P:T:nA   NONE
0b010020  scan-1              UPD   95 SEM    P:T:nA   NONE
0b010021  scan-0.2            UPD   90 SEM    P:T:nA   NONE
0b010022  scan-0.1            UPD   88 SEM    P:T:nA   NONE
0b010023  CAS-TCP             UPD  209 SYSEV  P:T:nA   NONE
0b010024  CAS-UDP             UPD  214 SYSEV  P:T:nA   NONE
0b010025  CAS-beacon          UPD  211 TIME:IS P:T:nA   NONE   Nanosleep
0b010026  CAC-event           UPD  125 SEM    P:T:nA   NONE
0b010027  reccaster           UPD  128 SYSEV  P:T:nA   NONE
0b010028  PVAS timers         UPD  191 SEM    P:T:nA   NONE
0b010029  TCP-acceptor        UPD  128 SYSEV  P:T:nA   NONE
0b01002a  UDP-rx 0.0.0.0:     UPD  128 SYSEV  P:T:nA   NONE
0b01002b  UDP-rx 10.0.0.2     UPD  128 SYSEV  P:T:nA   NONE
0b01002c  UDP-rx 10.0.1.2     UPD  128 SYSEV  P:T:nA   NONE
0b01002d  UDP-rx 224.0.0.     UPD  128 SYSEV  P:T:nA   NONE
0b01002e  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b01002f  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b010032  save_restore        UPD  204 MSG    P:T:nA   NONE   1b010001
0b010033  CAC-event           UPD  201 SEM    P:T:nA   NONE
0b010034  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b010035  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b010036  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b010037  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b010038  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b010039  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b01003a  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b01003b  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b01003d  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b01003e  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b01003f  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b010040  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b010041  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b010042  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b010043  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b010044  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b010045  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b010046  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b010047  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b010048  CAS-event           UPD  166 SEM    P:T:nA   NONE
0b010049  CAS-client          UPD  163 SYSEV  P:T:nA   NONE
0b01004a  CAS-event           UPD  166 SEM    P:T:nA   NONE
0b01004b  CAS-client          UPD  163 SYSEV  P:T:nA   NONE
0b010050  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b010051  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b010052  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
0b010053  CAS-event           UPD  206 SEM    P:T:nA   NONE
0b010054  CAS-client          UPD  204 SYSEV  P:T:nA   NONE
IOC1>

IOC1> rt task
ID       NAME                 SHED PRI STATE  MODES    EVENTS WAITINFO
------------------------------------------------------------------------------
0a010001 _BSD                 UPD   10 SYSEV  P:T:nA   NONE
0a010002 _BSD                 UPD   10 SYSEV  P:T:nA   NONE
0a010003 RPCd                 UPD   10 EV     P:T:nA   NONE
0a010004 TNTa                 UPD  100 SYSEV  P:T:nA   NONE
0a010005 TNTb                 UPD  100 SYSEV  P:T:nA   NONE
0a010006 TNTc                 UPD  100 SYSEV  P:T:nA   NONE
0a010007 TNTd                 UPD  100 SYSEV  P:T:nA   NONE
0a010008 TNTe                 UPD  100 SYSEV  P:T:nA   NONE
0a010009 TNTD                 UPD  100 SYSEV  P:T:nA   NONE


IOC1> rt irq
-------------------------------------------------------------------------------
                             INTERRUPT INFORMATION
--------+----------------------------------+---------+------------+------------
 VECTOR | INFO                             | OPTIONS | HANDLER    | ARGUMENT
--------+----------------------------------+---------+------------+------------
--------+----------------------------------+---------+------------+------------


(Example on a BeagleBone Black with the libbsd stack)

beaglebone> rt pthread
ID       NAME                 SHED PRI STATE  MODES    EVENTS WAITINFO
------------------------------------------------------------------------------
0b010001  _main_              UPD   25 READY  P:T:nA 00002000
0b010003  errlog              UPD  229 SEM    P:T:nA   NONE
0b010004  osdNTP_Runner       UPD   27 READY  P:T:nA   NONE
0b010005  osdNTP_Monitor      UPD  128 SEM    P:T:nA   NONE
0b010006  NTPTimeSync         UPD   27 SEM    P:T:nA   NONE
0b010007  taskwd              UPD  229 SEM    P:T:nA   NONE
0b010008                      UPD   27 SEM    P:T:nA   NONE
0b010009  timerQueue          UPD   77 SEM    P:T:nA   NONE
0b01000a  cbLow               UPD  105 SEM    P:T:nA   NONE
0b01000b  cbMedium            UPD   93 SEM    P:T:nA   NONE
0b01000c  cbHigh              UPD   75 SEM    P:T:nA   NONE
0b01000d  dbCaLink            UPD  128 SEM    P:T:nA   NONE
0b01000e  scanOnce            UPD   85 SEM    P:T:nA   NONE
0b01000f  scan-10             UPD  103 SEM    P:T:nA   NONE
0b010010  scan-5              UPD  100 SEM    P:T:nA   NONE
0b010011  scan-2              UPD   98 SEM    P:T:nA   NONE
0b010012  scan-1              UPD   95 READY  P:T:nA   NONE
0b010013  scan-0.5            UPD   93 SEM    P:T:nA   NONE
0b010014  scan-0.2            UPD   90 READY  P:T:nA   NONE
0b010015  scan-0.1            UPD   88 READY  P:T:nA   NONE
0b010016  CAS-TCP             UPD  209 WK     P:T:nA   NONE   accept
0b010017  CAS-UDP             UPD  214 WK     P:T:nA   NONE   sbwait
0b010018  CAS-beacon          UPD  211 TIME:IS P:T:nA   NONE   Nanosleep


beaglebone> rt task
ID       NAME                 SHED PRI STATE  MODES    EVENTS WAITINFO
------------------------------------------------------------------------------
0a010001 TIME                 UPD   98 READY  P:T:nA   NONE
0a010002 IRQS                 UPD   96 READY  P:T:nA   NONE
0a010003 _BSD swi6: Giant tas UPD  100 EV     P:T:nA   NONE
0a010004 _BSD config_0        UPD  100 WK     P:T:nA   NONE   -
0a010005 _BSD kqueue_ctx task UPD  100 WK     P:T:nA   NONE   -
0a010006 _BSD swi6: task queu UPD  100 EV     P:T:nA   NONE
0a010007 _BSD swi5: fast task UPD  100 EV     P:T:nA   NONE
0a010008 _BSD thread taskq    UPD  100 WK     P:T:nA   NONE   -
0a010009 _BSD swi1: netisr 0  UPD  100 EV     P:T:nA   NONE
0a01000a _BSD usbus0          UPD  100 WK     P:T:nA   NONE   -
0a01000b _BSD usbus0          UPD  100 WK     P:T:nA   NONE   -
0a01000c _BSD usbus0          UPD  100 WK     P:T:nA   NONE   -
0a01000d _BSD usbus0          UPD  100 WK     P:T:nA   NONE   -
0a01000e _BSD usbus0          UPD  100 WK     P:T:nA   NONE   -
0a01000f _BSD usbus1          UPD  100 WK     P:T:nA   NONE   -
0a010010 _BSD usbus1          UPD  100 WK     P:T:nA   NONE   -
0a010011 _BSD usbus1          UPD  100 WK     P:T:nA   NONE   -
0a010012 _BSD usbus1          UPD  100 WK     P:T:nA   NONE   -
0a010013 _BSD usbus1          UPD  100 WK     P:T:nA   NONE   -
0a010014 _BSD bufdaemon       UPD  100 READY  P:T:nA   NONE
0a010015 _BSD vnlru           UPD  100 WK     P:T:nA   NONE   vlruwt
0a010016 _BSD syncer          UPD  100 WK     P:T:nA   NONE   syncer
0a010017 _BSD softirq_0       UPD  100 WK     P:T:nA   NONE   -
0a010018 DHCP                 UPD  254 WK     P:T:nA   NONE   select
0a010019 _BSD bufspacedaemon- UPD  100 READY  P:T:nA   NONE
0a01001a _BSD nfscl           UPD  100 WK     P:T:nA   NONE   nfscl
0a01001b TNTa                 UPD  100 SYSEV  P:T:nA   NONE
0a01001c TNTb                 UPD  100 SYSEV  P:T:nA   NONE
0a01001d TNTc                 UPD  100 SYSEV  P:T:nA   NONE
0a01001e TNTd                 UPD  100 SYSEV  P:T:nA   NONE
0a01001f TNTe                 UPD  100 SYSEV  P:T:nA   NONE
0a010020 TNTD                 UPD  100 WK     P:T:nA   NONE   accept


beaglebone> rt irq
-------------------------------------------------------------------------------
                             INTERRUPT INFORMATION
--------+----------------------------------+---------+------------+------------
 VECTOR | INFO                             | OPTIONS | HANDLER    | ARGUMENT
--------+----------------------------------+---------+------------+------------
     18 | IRQS                             |  SHARED | 0x803d6100 | 0x8072b4a8
     19 | IRQS                             |  SHARED | 0x803d6100 | 0x80757410
     28 | IRQS                             |  SHARED | 0x803d6100 | 0x807287f8
     30 | BBB_I2C                          |  UNIQUE | 0x803d6b6c | 0x8065e5c8
     40 | IRQS                             |  SHARED | 0x803d6100 | 0x8078a4d0
     41 | IRQS                             |  SHARED | 0x803d6100 | 0x8078a538
     42 | IRQS                             |  SHARED | 0x803d6100 | 0x8078a5a0
     43 | IRQS                             |  SHARED | 0x803d6100 | 0x8078a608
     64 | IRQS                             |  SHARED | 0x803d6100 | 0x80727a90
     67 | Clock                            |  UNIQUE | 0x803d381c | 0x0
     70 | BBB_I2C                          |  UNIQUE | 0x803d6b6c | 0x8065e4f0
     72 | NS16550                          |  SHARED | 0x803d4db8 | 0x0
--------+----------------------------------+---------+------------+------------

Heinz

------------------------------------------------------------------------------
Fritz-Haber-Institut    | Phone:         (+49 30) 8413-4270
Heinz Junkes             | Fax (G3+G4):   (+49 30) 8413-5900
Faradayweg 4-6        | VC: https://zoom.fhi.berlin/junkes
D - 14195 Berlin        | E-Mail:        junkes at fhi-berlin.mpg.de
------------------------------------------------------------------------------
“Sorry I’m a bit late, had a terrible time…
All sort of things cropping up at the last moment. Uh, how are we for time?”
—Zarquon's address to Milliways

> On 18. Mar 2025, at 17:41, Wells, Alex (DLSLtd,RAL,LSCI) via Tech-talk <tech-talk at aps.anl.gov> wrote:
>
> Hi Michael,
>
> I can confirm the CPU use goes to zero when the network cable is unplugged. We do image and file loading over the network, but once all that has completed I can pull the cable out and observe no CPU use by that thread. The system also becomes fully responsive on the console, which correlates with the significant drop in CPU use.
>
> I have only tested RTEMS 5.1, which I believe is using the "legacy" IP stack. In a few months, during the next shutdown, I may be able to test RTEMS6 but we have no particular reason to expect the newer stack to work better, as the MVME5500 is 30 years old now so the new stack is unlikely to be optimized for its use case.
>
> Thanks,
> Alex Wells
>
> From: Michael Davidsaver <mdavidsaver at gmail.com>
> Sent: Friday, March 14, 2025 4:39 PM
> To: Wells, Alex (DLSLtd,RAL,LSCI) <alex.wells at diamond.ac.uk>
> Cc: tech-talk at aps.anl.gov <tech-talk at aps.anl.gov>
> Subject: Re: Channel Access performance with RTEMS on MVME5500
>  Hello Alex,
>
> As a quick test, have you tried unplugging the ethernet cable to ensure that
> the reported CPU usage for CAS-UDP goes to zero?
>
> Also, which RTEMS version(s) have you tested?
>
> Is this the RTEMS "legacy" IP stack, or the newer libbsd stack?
>
>
> On 3/14/25 06:30, Wells, Alex (DLSLtd,RAL,LSCI) via Tech-talk wrote:
> > Hello Tech Talk,
> >
> > I'm in the process of moving some VME crates from VxWorks to RTEMS. This also moves from EPICS 3.14.12.7 to EPICS 7.0.7. I'm seeing significant amounts of CPU activity in the "CAS-UDP" thread, and was hoping someone may be able to explain it.
> >
> > Averaged over time, RTEMS reports that the CAS-UDP thread is using ~40% of the CPU. This persists even when removing all the processing from all records (thus removing all other possible CPU use).  During initialization of the IOC, and for several minutes after initialization finished, I see 80%+ CPU usage on the IOC overall, mostly on this one thread, and it makes the overall IOC very unresponsive. This same behaviour also happens intermittently during operation, where even record processing seems to become delayed due to Channel Access processing - for example the devIocStats Heartbeat record that should just increase by 1 per second sometimes pauses for multiple seconds.
> >
> > The network the IOC is on is requesting a fair number of PVs from this IOC. CaSnooper shows between 20 and 40 requests per second to PV(s) on this IOC. Approximately 500 PVs are being requested from the IOC. These requests are spread across approximately 30 separate clients.
> >
> > When I compare this to the VxWorks version, running on the same hardware and network, I see almost 0% CPU usage on the CAS-UDP thread, and no record processing hitches. The CPU also idles significantly lower than under RTEMS.
> >
> > Our current theory is that the RTEMS network stack for UDP processing is much less efficient, and is struggling with the volume of requests. Is anyone able to confirm/deny this, and does anyone have any potential workarounds?
> >
> > Thanks,
> > Alex Wells
> > Diamond Light Source
> >


