I have definitely needed to change some network parameters to avoid dropped packets when running multiple GigE cameras on Linux. The recommended values came from Point Grey, in this article on their web site:
https://www.ptgrey.com/KB/10016
They recommend:
net.core.rmem_max=1048576
net.core.rmem_default=1048576
The net.core.rmem_default value is ~8 times larger than the value Eric is using, while the net.core.rmem_max is ~8 times smaller than his value.
I saw a large improvement in performance when I changed both of these parameters from the default of ~128K to ~1M. That was on Fedora 15. I am currently running CentOS 7 and have made the same change, but I don't have proof that it made a difference there.
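For what it's worth, here is a sketch of how those two values could be persisted across reboots on a systemd-based distribution like CentOS 7 (the file name under /etc/sysctl.d is illustrative, not from the Point Grey article):

```
# /etc/sysctl.d/90-gige-cameras.conf   (file name is illustrative)
# 1 MiB socket receive buffers, per the Point Grey recommendation
net.core.rmem_max = 1048576
net.core.rmem_default = 1048576
```

Running "sudo sysctl --system" (or rebooting) then applies it.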
Mark
-----Original Message-----
From: [email protected] <[email protected]> On Behalf Of Kyle Lanclos
Sent: Wednesday, September 5, 2018 1:12 PM
To: [email protected]
Cc: [email protected]
Subject: Re: Linux network tuning
It's been a long time since I felt the need to do network tuning on a Linux host. Back when I still felt that need (some 15+ years ago) I attempted to tune my way to better throughput on a gigabit network; everything I tried made the performance worse than the automatic behavior in the stock Linux kernel.
I don't doubt that there are special cases where tuning matters. None of my fairly aggressive cases were special enough to cross that threshold. I wouldn't be surprised if these delays are somewhere else. Have you confirmed your name resolution setup is correct?
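A quick sanity check along those lines, as a sketch (replace localhost with a host your CA clients actually look up; getent and nsswitch.conf are standard glibc facilities):

```shell
# Lookups should return in milliseconds; multi-second stalls here
# point at the resolver, not the socket buffers.
getent hosts localhost

# Show which sources the resolver consults for host names; an
# unreachable DNS server consulted before "files" can add a
# timeout to every lookup.
grep '^hosts:' /etc/nsswitch.conf
```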
--Kyle
On Wed, Sep 5, 2018 at 8:06 AM Eric Norum <[email protected]> wrote:
>
> Are there guidelines for tuning the network parameters for a Linux host running a bunch of EPICS soft IOCs? We were seeing CA clients experience delays of over a second for 'get' requests. Setting the values shown here:
>
> net.core.netdev_max_backlog = 5000
> net.core.rmem_max = 8388608
> net.core.wmem_max = 8388608
> net.core.rmem_default = 124928
> net.core.wmem_default = 124928
> net.ipv4.tcp_rmem = 4096 87380 8388608
> net.ipv4.tcp_wmem = 4096 65536 8388608
> net.ipv4.tcp_mem = 8388608 8388608 8388608
>
> seems to have improved things. But I don't know if those numbers, which I got from a Google search, are way too big, not yet big enough, or completely weird.
> Suggestions welcomed.
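[Editorially speaking, one low-risk first step is to record the values currently in effect before changing anything, so there is a baseline to compare against and roll back to. A sketch, reading the same net.core keys straight from /proc/sys (no special privileges needed for reads):]

```shell
# Print the net.core values currently in effect.
for key in rmem_max wmem_max rmem_default wmem_default netdev_max_backlog; do
    printf 'net.core.%s = %s\n' "$key" "$(cat /proc/sys/net/core/$key)"
done
```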
>
> Machine details:
>
> 8 cores
> 24 GiB RAM
> 1 TB SSD
> Gigabit Ethernet
> EPICS R3.15.4
>
> Load average was in the 3 to 4 range, but 'top' showed at least 60% CPU idle even when clients were experiencing the slow response.
>
> --
> Eric Norum
> [email protected]
>
>
>
- Replies:
- Re: Linux network tuning Mazanec Tomáš
- References:
- Linux network tuning Eric Norum
- Re: Linux network tuning Kyle Lanclos