Hi
On CentOS 7 (RHEL), one can use the "tuned" package. I resolved some issues with the transfer of large waveforms over Channel Access by using it, specifically by setting the tuned profile named "network-throughput". The profile increases some net.core.* buffers; see "/usr/lib/tuned/network-throughput/tuned.conf".
Simply: sudo tuned-adm profile network-throughput
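Applying and verifying the profile might look like this (a sketch assuming the tuned package is installed; the exact buffer values reported will vary with the tuned version):

    sudo systemctl enable tuned && sudo systemctl start tuned   # make sure the tuned daemon is running
    sudo tuned-adm profile network-throughput
    tuned-adm active                                # confirm the active profile
    sysctl net.ipv4.tcp_rmem net.core.rmem_max      # inspect the enlarged buffers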
GigE camera communication is different by design, but this might help.
Otherwise, hard-core tuning can be overwhelming:
https://cromwell-intl.com/open-source/performance-tuning/ethernet.html
https://cromwell-intl.com/open-source/performance-tuning/tcp.html
The network people here say that increasing the MTU is the key both to high throughput and to lowering the utilization of the computer's CPU and NIC: simply use jumbo frames of 9000 bytes.
I can only recommend the use of jumbo frames; it helped a lot with the waveform transfers. A sketch of the setup follows.
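A minimal sketch of enabling jumbo frames on CentOS 7 (the interface name eth0 is a placeholder, check yours with "ip link"; note that every NIC and switch port on the path must also be configured for a 9000-byte MTU, or packets get dropped or fragmented):

    # one-off change:
    sudo ip link set dev eth0 mtu 9000
    # persistent: add to /etc/sysconfig/network-scripts/ifcfg-eth0
    MTU=9000
    # verify end to end (8972 = 9000 minus 28 bytes of IP/ICMP headers):
    ping -M do -s 8972 other-host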
Tomas
Od: [email protected] <[email protected]> za uživatele Mark Rivers <[email protected]>
Odesláno: čtvrtek 6. září 2018 1:21:25 Komu: 'Kyle Lanclos'; [email protected] Kopie: [email protected] Předmět: RE: Linux network tuning I have definitely needed to change some network parameters in order to avoid lost packets when running multiple GigE cameras on Linux. The recommended values I got were from point Grey in this article on their Web site:
https://www.ptgrey.com/KB/10016 They recommend: net.core.rmem_max=1048576 net.core.rmem_default=1048576 The net.core.rmem_default value is ~8 times larger than the value Eric is using, while the net.core.rmem_max is ~8 times smaller than his value. I saw a large improvement in performance when I changed from the default of ~128K to ~1M for both of these parameters. That was on Fedora 15. I am currently running Centos 7, and I have made the same change, but I don't have proof that it made a difference. Mark -----Original Message----- From: [email protected] <[email protected]> On Behalf Of Kyle Lanclos Sent: Wednesday, September 5, 2018 1:12 PM To: [email protected] Cc: [email protected] Subject: Re: Linux network tuning It's been a long time since I felt the need to do network tuning on a Linux host. Back when I still felt that need (some 15+ years ago) I attempted to tune my way to better throughput on a gigabit network; everything I tried made the performance worse than the automatic behavior in the stock Linux kernel. I don't doubt that there are special cases where tuning matters. None of my fairly aggressive cases were special enough to cross that threshold. I wouldn't be surprised if these delays are somewhere else-- have you confirmed your name resolution setup is correct? --Kyle On Wed, Sep 5, 2018 at 8:06 AM Eric Norum <[email protected]> wrote: > > Are there guidelines for tuning the network parameters for a Linux host running a bunch of EPICS soft IOCs? We were seeing some issues with CA clients seeing delays of over a second for ‘get' requests. Setting the values shown here: > > net.core.netdev_max_backlog = 5000 > net.core.rmem_max = 8388608 > net.core.wmem_max = 8388608 > net.core.rmem_default = 124928 > net.core.wmem_default = 124928 > net.ipv4.tcp_rmem = 4096 87380 8388608 net.ipv4.tcp_wmem = 4096 65536 > 8388608 net.ipv4.tcp_mem = 8388608 8388608 8388608 > > seems to have improved things. But I don’t know if those numbers, which I got from a google search, are way too big, or not yet big enough or completely weird. > Suggestions welcomed. > > Machine details: > > 8 cores > 24 GiB RAM > 1 TB SSD > Gigabit Ethernet > EPICS R3.15.4 > > Load average in the 3 to 4 range but CPU utilization as shown by ‘top’ showed at least 60% idle even when clients were experiencing the slow response. > > -- > Eric Norum > [email protected] > > >
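For reference, a minimal sketch of making such settings persistent across reboots on CentOS 7 (the file name is arbitrary; the values are the Point Grey recommendations quoted above):

    # /etc/sysctl.d/90-epics-network.conf
    net.core.rmem_max = 1048576
    net.core.rmem_default = 1048576

Then load it without rebooting and verify:

    sudo sysctl --system
    sysctl net.core.rmem_max net.core.rmem_default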