Very slow network connection on Intel em0 card (82540EM Gigabit Ethernet Controller)

Hi,

I'm running FreeBSD 11.2-RELEASE-p9 on a dedicated server at Hetzner.

The NIC adapter I have on my server:

Bash:
em0@pci0:0:4:0: class=0x020000 card=0x11001af4 chip=0x100e8086 rev=0x03 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82540EM Gigabit Ethernet Controller'
    class      = network
    subclass   = ethernet
    bar   [10] = type Memory, range 32, base 0xfebc0000, size 131072, enabled
    bar   [14] = type I/O Port, range 32, base 0xc000, size 64, enabled

For some time now I've been experiencing very slow network connections. For example, I'm downloading packages at ~8 kB/s on average.. It's very slow.
I created a ticket with technical support and asked if it would be possible to run network diagnostics on my server. They came back saying that everything works OK on the server itself.

I've upgraded the NIC driver with
Code:
pkg install intel-em-kmod
and rebooted the machine but this didn't help..

Can I ask what's the best way to troubleshoot something like this? How do I find the root cause?

Any help would be greatly appreciated.

--
Best regards
macosxgeek
 
Try turning off TSO and/or LRO and see if that improves things.
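For example (assuming the interface is em0 and those capabilities are currently enabled), something along these lines:
Code:
# disable TCP segmentation offload and large receive offload on em0
ifconfig em0 -tso -lro
To make it persistent, append the flags to your ifconfig_em0 line in /etc/rc.conf (adjust to your existing configuration).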
 
Done.

em0:

options=8023<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL,DEFAULTIF>


I'm afraid it doesn't make any difference.. :(

[1/61] Fetching diaspora-1.1.1_1.txz: 0% 72 KiB 8.2kB/s 46:56:51 ETA
 
Still the same.. It cannot even start the download..


Bash:
fetch -v https://cdn.netbsd.org/pub/NetBSD/NetBSD-8.1/images/NetBSD-8.1-amd64.iso
resolving server address: cdn.netbsd.org:443
SSL options: 83004bff
Peer verification enabled
Using CA cert file: /usr/local/etc/ssl/cert.pem
Verify hostname
TLSv1.2 connection established using ECDHE-RSA-AES128-GCM-SHA256
Certificate subject: /C=US/ST=California/L=San Francisco/O=Fastly, Inc./CN=o.ssl.fastly.net
Certificate issuer: /C=BE/O=GlobalSign nv-sa/CN=GlobalSign CloudSSL CA - SHA256 - G3
requesting https://cdn.netbsd.org/pub/NetBSD/NetBSD-8.1/images/NetBSD-8.1-amd64.iso
fetch: transfer timed out
fetch: NetBSD-8.1-amd64.iso appears to be truncated: 0/757561344 bytes
 
Yeah, that doesn't look good. Is there a firewall running that might be the cause? Just to rule out any misconfiguration. Most of the time the em(4) driver is well behaved and performance is good. I've rarely had problems with it.

Try running tcpdump(1), then start that transfer again. Hopefully that provides some more insights.
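If you want to rule out pf and ipfw quickly, something like this should show whether either is active (just a rough check; the commands only produce output if the respective firewall is loaded):
Code:
# is a packet filter module loaded?
kldstat | egrep 'pf|ipfw'
# pf status line (errors are suppressed if pf isn't there)
pfctl -s info 2>/dev/null | head -1
# current ipfw rules, if any
ipfw list 2>/dev/null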
 
There is no active firewall running.

I did run:

Code:
tcpdump -vv -x -X -s 1500 -i em0 'port 443'
to grab all the traffic on port TCP/443

I'm attaching the result to this thread.
 


Is -p9 the latest update? This may be unrelated, but there were some TCP updates recently, IIRC.
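(A quick way to check which patch level the kernel and userland are actually on, if that helps:)
Code:
# print installed kernel and userland patch levels
freebsd-version -ku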

No, it's not the latest. The problem is that if I wanted to upgrade to anything newer, it would take ages at the network speed I have at the moment..
 
I had a similar performance problem with em when I set up a FreeBSD system this year after a gap of many years. There were many error messages in the logs regarding em0:

Code:
May 23 15:50:37 nuck kernel: em0: <Intel(R) PRO/1000 Network Connection> mem 0xaeb00000-0xaeb1ffff at device 31.6 on pci0
May 23 15:50:37 bsdbox kernel: em0: attach_pre capping queues at 1
May 23 15:50:37 bsdbox kernel: em0: using 1024 tx descriptors and 1024 rx descriptors
May 23 15:50:37 bsdbox kernel: em0: msix_init qsets capped at 1
May 23 15:50:37 bsdbox kernel: em0: Unable to map MSIX table

I found a thread on bugs.freebsd.org that suggested pkg-installing the updated driver, which fixed it for me, FWIW, though mine is not the same device as yours ('Ethernet Connection (6) I219-V').

You don't have a typo in /boot/loader.conf for the updated driver entry, do you?

if_em_updated_load="YES"
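A quick grep shows what is actually in the file (just a sanity check):
Code:
grep -n if_em /boot/loader.conf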
 
I have a lot of Hetzner servers using em0 (mostly EX41 / EX42) and haven't noticed any issues with speed. I use the default driver on FreeBSD 12, and had no problems with FreeBSD 11 either, as these servers were running FreeBSD 11 a year ago. If you download something from https://speed.hetzner.de, is the speed good? Also, if you boot into the FreeBSD rescue mode and download a file, is the speed good?
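For example, something like this tests the path to Hetzner's own network (assuming the 100 MB test file is still published there):
Code:
fetch -o /dev/null https://speed.hetzner.de/100MB.bin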
 
For some time now I've been experiencing very slow network connections.
...seems to be the important part here, meaning it was working properly before that (right?). Did you (re)configure, (re)install, or update anything on the server around that time? The driver couldn't just have broken by itself. If you didn't, then the problem is outside of your system (software-wise).
 
I've upgraded the NIC driver with
pkg install intel-em-kmod
and rebooted the machine but this didn't help..

You also have to add settings to /boot/loader.conf to tell the system to use a different driver.
Code:
if_em_updated_load="YES"
You should see this message in dmesg.
module_register: cannot register pci/em from kernel; already loaded from if_em_updated.ko
Module pci/em failed to register: 17
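To confirm the updated module actually gets picked up, a couple of quick checks after the reboot (a sketch; the module name comes from the intel-em-kmod package):
Code:
# is the loader knob in place?
grep if_em_updated /boot/loader.conf
# is the module loaded?
kldstat | grep if_em_updated
# which driver description is attached to em0?
sysctl dev.em.0.%desc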
 
Have you looked at netstat -nrW to see if anything weird is going on?
How about a route flush, just to ensure something is not amiss in your routing table?
Lastly, I would check resolvconf -lv to see where that leads.
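Roughly, those checks plus the flush would look like this (be careful with the flush on a remote box and be ready to re-add the default route; the gateway below is a hypothetical example address):
Code:
netstat -nrW                 # inspect the routing table
route -n flush               # flush routes (this drops the default route too)
route add default 192.0.2.1  # re-add your gateway (example address)
resolvconf -l                # list the resolv.conf sources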
 
Thank you for all the replies guys.. These are very helpful.

What is really odd is that I was doing some FTP uploads (from the server to my house) over the weekend and the speed for them was OK-ish..

Anyway, I won't waste any more of your time or mine.. I will do a fresh install with the latest STABLE (I should have done it a long time ago anyway..). The sad thing about it is that I won't be able to take part in the 'Uptime Contest' anytime soon.. ;-)
 
Uptimes are overrated. A high uptime also indicates that somebody hasn't applied security updates. That's not something you should brag about.
 
Uptimes are overrated.
Some years ago (circa 1997) a friend placed a small PC running FreeBSD in our server room.
It sat there, humming away, next to some FreeBSD name and time servers, unused.
He was chasing a kernel bug that only appeared after 12 months of continuous uptime.
Another colleague decided, for reasons that escape me, to reboot the fleet.
The uptime on that server went from 51 weeks to one day...
 