10Gb Ethernet NIC PCIe recommendation

olli@

Developer
I'm looking for a hardware recommendation …

I’ve got two machines (located close to each other) that are directly connected via their onboard 1Gb Ethernet NICs. However, now that I’ve upgraded both machines to NVMe SSDs, that 1 Gbps connection has become a serious bottleneck, so I would like to upgrade the bandwidth of that connection. Both machines have a spare PCIe x16 slot.

Now here’s my question: Which 10Gb ethernet PCIe NICs are well supported by FreeBSD? I know there are several manual pages for 10Gb ethernet drivers, but it’s not clear which of them are well supported and work most efficiently, and which ones work “just ok”. Also, the manual pages often list only the supported chips, but not actual hardware, i.e. names / models of PCIe cards that you can buy (on Amazon or elsewhere).

So, any recommendations? I don’t care about the type of connection (copper, fiber, whatever) because this is just for connecting two machines back-to-back. I won’t need a switch or anything else.
 
Chelsio T5 series or the older T4 series if you want to save some money. T6 if money is no object.
These adapters do run hot, so I would say plan on fitting a small fan inside your chassis.
They need a lot of airflow.
The part numbers indicate the number of ports:
T520 = 2 ports
T540 = 4 ports
T422 = 2× 1GbE + 2× 10GbE
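For what it's worth, these should all attach to the cxgbe(4) driver. If it isn't built into your kernel, the man page says it can be loaded at boot with a single loader.conf(5) line; a minimal sketch:

if_cxgbe_load="YES"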

 
Uhm … It didn’t occur to me that the temperature could be a problem. These two machines sit under my desk, so an additional fan is not an option. And I only need one 10Gb port, of course.

Thanks for the link; very interesting reading.
 
I have no personal experience with 10-gbit on FreeBSD at home, but I see several submissions in our new hardware db at the moment:

Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection / Ethernet Server Adapter X520-2
Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection / Ethernet Server Adapter X520-1
Intel Ethernet Controller 10-Gigabit X540-AT2 (Super)
Intel Ethernet Controller 10-Gigabit X540-AT2 / Ethernet 10G 2P X540-t Adapter
 
I have no personal experience with 10-gbit on FreeBSD at home, but I see several submissions in our new hardware db at the moment:
Well, that’s no more helpful than the existence of driver manual pages. I mean, I can also just grep the kernel sources for the chip numbers or PCI IDs. It doesn’t say anything about how well these NICs are supported, and what the potential problems could be.

Phishfry’s hint about the temperature was very valuable. In the meantime I did a little more research. It turns out that the copper modules (10GBase-T) run especially hot, but it’s much less of a problem with optical (fiber) modules. The Intel NICs even switch off the laser when you ifconfig the interface down. I’m currently looking at some 10GbE PCIe cards with an Intel chip and a single SFP+ interface. These start at about €100, and apparently they work well with FreeBSD.
 
As you may have noticed, almost all 10G cards are dual-port.
They are server adapters, so you are expected to have redundant interfaces (my guess).
 
Chelsio gives a figure for the required airflow:
"Requiring maximum 200 LFM airflow at 19W maximum power usage."
That's 200 linear feet per minute.
I have mounted a fan directly on the heatsink, like on a video card. It does hog up a PCIe slot, though.
Well, as I said above, these machines are sitting under my desk, and I don’t want to add any more fans.
Since you are a FreeBSD developer I would be willing to send you two T4 adapters for free.
Send me a PM with your deets if interested.
Thank you very much! But I guess those adapters are not exactly what I’m looking for.

In the meantime I have found several Intel X520 / 82599 based cards with a single SFP+ port. These are rated at 4.5 W, so no extra fan is required. I guess I’ll order two of those.
 
Intel does wonky stuff like requiring specific Intel-coded SFP+ modules.
There is a sysctl to override it:
hw.ix.unsupported_sfp=1

ifconfig ixl -vv will show your SFP+ module's details.

Intel 10G cards have also been problematic in the past. You might need to adjust some settings regarding MSI interrupts.
hw.ix.enable_msix=0
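Both of those are boot-time tunables, so if you end up needing them they would go into /boot/loader.conf. A minimal sketch (only add the lines you actually need; check ix(4) for your release):

hw.ix.unsupported_sfp="1"   # accept SFP+ modules without Intel coding
hw.ix.enable_msix="0"       # disable MSI-X, fall back to plain MSI interrupts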
 
Thanks for the hints, Phishfry. Much appreciated.

Intel does wonky stuff like requiring specific Intel-coded SFP+ modules.
There is a sysctl to override it:
hw.ix.unsupported_sfp=1
Yeah, I noticed that SFP+ modules are available with various “codings”, e.g. Intel-compatible, Cisco-compatible, HP-compatible, etc. It should be no problem to make sure I get Intel-compatible ones. But it’s good to know that there is a sysctl to override the required coding if necessary.
ifconfig ixl -vv will show your SFP+ module's details.
The X520 / 82599 based cards are ix(4) interfaces (a.k.a. ixgbe(4)), not ixl(4). The latter is for X7**-based cards.
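For the record, a minimal sketch of what I mean, assuming the card attaches as ix0 and a module is plugged in:

ifconfig -vv ix0   # verbose output includes the transceiver (SFP+) data

That should print the usual media lines plus the vendor/part/serial information read from the module.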
Intel 10G cards have also been problematic in the past. You might need to adjust some settings regarding MSI interrupts.
hw.ix.enable_msix=0
Those reports are two years old; I would assume that the situation has improved in the meantime. Unfortunately, the link to manuals.ts.fujitsu.com on the forum page that you quoted is broken.

It also appears that the problem only occurred under a high load of small UDP packets. That’s not my typical use case; I’ll be using jumbo TCP frames instead. :)
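For the back-to-back link itself, something like this in /etc/rc.conf should be all that’s needed (just a sketch; the interface name and addresses are made-up examples):

ifconfig_ix0="inet 10.10.10.1/24 mtu 9000"   # machine A
ifconfig_ix0="inet 10.10.10.2/24 mtu 9000"   # machine B, in its own rc.conf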
 
As you may have noticed, almost all 10G cards are dual-port.
They are server adapters, so you are expected to have redundant interfaces (my guess).
I think that’s not true anymore. For 10 GbE (*) there are quite a lot of cards with a single SFP+ slot. There are even some that are clearly targeted at consumer machines rather than servers, for example the ones made by Asus (there don’t appear to be FreeBSD drivers for those, though).

It is my impression that 10 GbE is now at the point where 1 GbE was about 15 years ago: back then, Fast Ethernet (100 Mb/s) was the broad standard, while 1 GbE was on its way from high-end servers to the consumer market.

(*) This might be different for 25 GbE and 40 GbE adapters; I haven’t had a closer look at those.
 
Most man pages have an AUTHORS section with an email address. You could ask the developer of the network driver before buying a network card.
 
So, any recommendations? I don’t care about the type of connection (copper, fiber, whatever) because this is just for connecting two machines back-to-back. I won’t need a switch or anything else.
The least expensive Intel solution would be two X520-DA1 single-port SFP+ cards (around US $40 each on eBay) and a passive DAC cable. The X520 is a PCIe 2.0 x8 card; if you need an x4 card, you're looking at something like the X550. I am using the dual X520 + passive DAC setup to link pairs of systems at several installations. Most of my servers have X540 twisted-pair cards linking them to 10GbE switches. Desktops have a mix of X540 and X550, depending on what sort of PCIe slots are available (the Dell Precision 3630s that arrived last year have no factory 10GbE option for some reason, and with the various other cards needed for my application, only a PCIe 3 x4 slot was left).
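Once a pair is linked up, a quick way to sanity-check the raw throughput is benchmarks/iperf3 (a sketch; the address is just an example for whichever machine runs the server side):

iperf3 -s                   # on one machine
iperf3 -c 10.10.10.1 -P 4   # on the other, four parallel TCP streams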
 
Perhaps I'm resurrecting an old thread, but looking around, this thread seemed to refer to the Intel X550 series, which I'm curious about. I'm looking specifically at this board: https://www.asrockrack.com/general/productdetail.asp?Model=X570D4U-2L2T and trying to determine whether its two 10G NICs would be fully supported. I'm most interested in the netmap use case (and thus Suricata). If anyone happens to know whether the Intel X550-AT2 is supported, particularly in the context of netmap (especially Suricata in inline mode), that would be *greatly* appreciated.
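For what it's worth, one rough way to check on a running box (just a sketch, not a definitive answer for the X550):

sysctl dev.netmap   # the dev.netmap.* knobs exist when netmap is compiled into the kernel
man 4 netmap        # lists the drivers with native netmap support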

Thanks!
 
Chelsio T5 series or the older T4 series if you want to save some money. T6 if money is no object.
These adapters do run hot, so I would say plan on fitting a small fan inside your chassis.
They need a lot of airflow.
The part numbers indicate the number of ports:
T520 = 2 ports
T540 = 4 ports
T422 = 2× 1GbE + 2× 10GbE

The URL has changed: https://www.servethehome.com/buyers...as-servers/top-picks-freenas-nics-networking/
 
Well, since this thread got bumped anyway, I'll add something I just read in the FreeBSD 14.1-RELEASE thread:
Intel E800 Series (ice(4)): A driver is available for the Intel E800 series’ ice(4) Ethernet network controllers, which support 100 Gb/s operation. This driver has been upgraded to version 1.39.13-k.
I've found Intel Ethernet controllers to work great on FreeBSD, and if you're going to run a direct SFP-to-SFP link anyway, you might just as well upgrade big. The adapters are a bit pricey, though. Group buy idea? ;) Imagine trying to saturate 2× 100 Gbps.
 