Subject: Re: Linux network tuning
From: Mazanec Tomáš <[email protected]>
To: Mark Rivers <[email protected]>, 'Kyle Lanclos' <[email protected]>, "[email protected]" <[email protected]>
Cc: "[email protected]" <[email protected]>
Date: Thu, 6 Sep 2018 07:43:12 +0000

Hi


On CentOS 7 (RHEL), one can use the "tuned" package. I resolved some issues with the transfer of large waveforms over Channel Access by using it, specifically by setting the tuned profile named "network-throughput".

The profile increases some of the net.core.* buffer sizes.

See "/usr/lib/tuned/network-throughput/tuned.conf" for the details.


To apply it, simply:

sudo tuned-adm profile network-throughput
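
To verify what was applied, you can check the active profile and read the sysctl settings the profile ships with (tuned-adm and the conf path as above):

tuned-adm active
cat /usr/lib/tuned/network-throughput/tuned.conf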


GigE camera communication is different in design, but this might help.


Otherwise, hard-core tuning is overwhelming; these references cover it in depth:

https://cromwell-intl.com/open-source/performance-tuning/ethernet.html

https://cromwell-intl.com/open-source/performance-tuning/tcp.html



The network guys here say that increasing the MTU is the key to both high network throughput and lower utilization of the host's CPU and NIC: simply use jumbo frames with an MTU of 9000.

I can only recommend the use of jumbo frames; it helped a lot with the waveform transfers.
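
A minimal sketch of trying it out (eth0 and peer-host are placeholders; the change below is temporary, and every NIC and switch on the path must support jumbo frames):

sudo ip link set dev eth0 mtu 9000
ping -M do -s 8972 peer-host   # 8972 payload + 28 bytes of IP/ICMP headers = 9000; verifies the whole path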


Tomas



Od: [email protected] <[email protected]> za uživatele Mark Rivers <[email protected]>
Odesláno: čtvrtek 6. září 2018 1:21:25
Komu: 'Kyle Lanclos'; [email protected]
Kopie: [email protected]
Předmět: RE: Linux network tuning
 
I have definitely needed to change some network parameters in order to avoid lost packets when running multiple GigE cameras on Linux.  The recommended values came from Point Grey, in this article on their web site:

https://www.ptgrey.com/KB/10016

They recommend:
net.core.rmem_max=1048576
net.core.rmem_default=1048576

The net.core.rmem_default value is ~8 times larger than the value Eric is using, while the net.core.rmem_max is ~8 times smaller than his value.

I saw a large improvement in performance when I changed both of these parameters from the default of ~128K to ~1M.  That was on Fedora 15.  I am currently running CentOS 7 and have made the same change, but I don't have proof that it made a difference there.
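
To make a change like this persist across reboots, one approach (the file name below is just an example) is to put the values in /etc/sysctl.d and reload:

# /etc/sysctl.d/90-gige-cameras.conf
net.core.rmem_max = 1048576
net.core.rmem_default = 1048576

sudo sysctl --system   # reloads all sysctl configuration files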

Mark


-----Original Message-----
From: [email protected] <[email protected]> On Behalf Of Kyle Lanclos
Sent: Wednesday, September 5, 2018 1:12 PM
To: [email protected]
Cc: [email protected]
Subject: Re: Linux network tuning

It's been a long time since I felt the need to do network tuning on a Linux host. Back when I still felt that need (some 15+ years ago) I attempted to tune my way to better throughput on a gigabit network; everything I tried made the performance worse than the automatic behavior in the stock Linux kernel.

I don't doubt that there are special cases where tuning matters. None of my fairly aggressive cases were special enough to cross that threshold. I wouldn't be surprised if these delays are somewhere else; have you confirmed your name resolution setup is correct?
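
One quick check (the host name below is a placeholder): resolve the names your clients actually use and make sure the lookups come back promptly:

getent hosts $(hostname)
time getent hosts ioc-host.example.org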

--Kyle

On Wed, Sep 5, 2018 at 8:06 AM Eric Norum <[email protected]> wrote:
>
> Are there guidelines for tuning the network parameters for a Linux host running a bunch of EPICS soft IOCs?  We were seeing some issues with CA clients seeing delays of over a second for 'get' requests.  Setting the values shown here:
>
> net.core.netdev_max_backlog = 5000
> net.core.rmem_max = 8388608
> net.core.wmem_max = 8388608
> net.core.rmem_default = 124928
> net.core.wmem_default = 124928
> net.ipv4.tcp_rmem = 4096 87380 8388608
> net.ipv4.tcp_wmem = 4096 65536 8388608
> net.ipv4.tcp_mem = 8388608 8388608 8388608
>
> seems to have improved things.  But I don't know if those numbers, which I got from a Google search, are way too big, not yet big enough, or completely weird.
> Suggestions welcomed.
>
> Machine details:
>
> 8 cores
> 24 GiB RAM
> 1 TB SSD
> Gigabit Ethernet
> EPICS R3.15.4
>
> Load average was in the 3 to 4 range, but CPU utilization as shown by 'top' was at least 60% idle even when clients were experiencing the slow response.
>
>  --
> Eric Norum
> [email protected]
>
>
>

