Solved: Need a quad-port Ethernet adapter?

I am wondering if a quad-port adapter would be a good idea for my whiz-bang new FreeBSD instance. I'm guessing I can parallelize network traffic with it, which would make the instance considerably more responsive? Is that the case? Do all of the NICs share an IP, or what?

I apologize for the vagueness, but this is new territory for me.
 
Typically you'd use lagg(4) configured for LACP to produce a single 4 Gbit/s interface, to which you assign your network parameters.
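A minimal /etc/rc.conf sketch, assuming an Intel quad-port card that attaches as igb0 through igb3 (the interface names and the address are placeholders):

  # Bring up the four member ports and bundle them into one LACP lagg
  ifconfig_igb0="up"
  ifconfig_igb1="up"
  ifconfig_igb2="up"
  ifconfig_igb3="up"
  cloned_interfaces="lagg0"
  ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 laggport igb2 laggport igb3 192.0.2.10/24"

The IP address goes on lagg0; the member ports carry no addresses of their own.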

You'll need the matching capability on your switch, and the 4 switch ports connected to the 4 NIC ports above will also be configured into whatever the vendor calls a lagg (Enterasys: Port Aggregate Group; Cisco: Port Channel).
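On a Cisco switch, for example, the receiving end looks roughly like this (a sketch; the port range and channel-group number are assumptions for illustration):

  ! Bundle the four switch ports facing the server into one LACP port-channel
  interface range GigabitEthernet1/0/1 - 4
   channel-group 1 mode active

"mode active" is what makes it LACP rather than a static bundle or Cisco's own PAgP.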

Note, however, that this doesn't create a 'pure' 4 Gbit/s interface. Each flow is typically hashed by its 5-tuple (protocol, src addr, src port, dst addr, dst port) onto one of the 4 ports.

The net effect is that if you copy a file between two hosts, you'll get 1 Gbit/s. But you should be able to run a 2nd copy to a different host in parallel (depending on how the hash algorithm assigns the flows) and get an aggregate of 2 Gbit/s, and so on.
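Which headers feed the hash is tunable at runtime via ifconfig(8); a quick sketch using the lagg0 from above:

  # Hash on layer-3 and layer-4 headers (IP addresses plus TCP/UDP ports),
  # so multiple flows between the same two hosts can still spread out:
  ifconfig lagg0 lagghash l3,l4
  ifconfig lagg0          # the active laggproto and lagghash show in the output

Any single flow still lands on exactly one member port, whatever the hash settings.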

More responsive? No: the propagation times remain the same, and technically it'll be very slightly slower because there's more network code to run through.

So if only a couple of different hosts are connecting to this system, I doubt you'd see much improvement. If you're talking to 20+, you should see a decent distribution across all the ports.
 
Great answer. Very helpful. So, if it’s sitting in my office and I just do stuff like ssh and rsync between 6-7 devices, it wouldn’t really be a benefit?
 
Heh. Networking. "I have a broadband connection that is 400 Mbit/s down and 50 Mbit/s up. Why can't my internal 1 Gbit/s network upload at 1 Gbit/s?"

Lagg (link aggregation) bundles multiple physical ports into a single "virtual port". In theory a lagg of two 1 Gbit/s ports gives you 2 Gbit/s of bandwidth (in reality a little less), but you need to look at the overall data path. Big/fast at one point usually means "we can get there faster to wait", so there is a lot of hurry-up-and-wait.

A lot also depends on the usage pattern. A server with a 10 Gbit/s interface could in theory talk at 1 Gbit/s to 10 clients at once, assuming the server can saturate that 10 Gbit/s link.

I'm old enough to remember when 10 Mbit/s thicknet was more than fast enough for hundreds of employees.
 
6-7 devices all rsyncing at the same time would see a benefit only if their flows hashed decently over all the links. That's hard to predict without testing.

It really depends, as mer says, on the traffic pattern.
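One way to take the guesswork out, assuming iperf3 is installed on all the machines (the hostnames are placeholders):

  # Blast several hosts at once and compare the aggregate to a single-host run;
  # each flow should pin itself to one member port.
  for h in host1 host2 host3 host4; do
    iperf3 -c "$h" -t 30 &
  done
  wait

Watching systat -ifstat on the lagg host while this runs shows how the traffic actually spreads over the member ports.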
 
Exactly: traffic pattern. Let's assume you have managed to convince the networking code on both ends of the connection to bundle the 4 lines into one virtual one (in and of itself a difficult thing to do). Consider three scenarios, and in each compare 4 parallel links of speed X with one single link of speed 4X.
  • There is very little traffic; all 4 links are idle. You send one packet and expect one response packet. Each of those will have to travel on one of the 4 links, and won't go any faster than it would have on a single one. You have not helped latency at all (perhaps even hurt it slightly), and throughput is an irrelevant question in this situation. Latency on a 4-times-faster link would probably be somewhat better (see the worked numbers after this list).
  • The network is busy enough to be "near" 100% saturation (by near, I mean half or 3/4 saturated). You send one packet and expect one response. In this case, having multiple parallel links will help, since it is probable (indeed statistically likely) that your packets will find a free link to travel on, instead of having to wait for the previous packet to finish. So having 4 links helps with latency, but total throughput is still not limited by the hardware. Having a 4-times-faster link would be even better, since that faster link would be idle some of the time.
  • The network is oversaturated; all links have queues on them. The queue depth is controlled by some sort of balking algorithm (QoS, timeouts, and such) so it doesn't grow infinitely. Having 4 links in parallel helps compared to 1 link, simply because the total queue size is smaller, so there's less waiting; it also increases throughput by a factor of 4. But having a single 4x-faster link would be even better, since then the networking stack doesn't have to play games to put packets on separate links, which can leave one of the links momentarily idle (underutilized).
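To put numbers on the first scenario, take a standard 1500-byte (12,000-bit) Ethernet frame and compare serialization delays:

  $t_{1\,\mathrm{Gbit/s}} = 12000 / 10^{9} = 12\,\mu\mathrm{s}$
  $t_{4\,\mathrm{Gbit/s}} = 12000 / (4 \times 10^{9}) = 3\,\mu\mathrm{s}$

Four parallel 1 Gbit/s links still serialize every frame at 12 µs; only the genuinely faster link shrinks the per-packet delay.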
In summary: Most people care more about latency than bandwidth. Having multiple parallel links without increasing the link speed helps in certain situations (moderately loaded network), but not in general. If costs scale linearly, it makes more sense to just buy a faster link. Having parallel links only makes sense if faster links get a lot more expensive (superlinear).
 
A few random thoughts:
  • LAGG(4) is primarily a protocol for trunking between routers;
  • LAGG also has options for redundancy and hardware fail-over (see the fail-over sketch at the end of this post);
  • LAGG supports multiple aggregation protocols which can deliver very different outcomes;
  • not all 4-port NICs can achieve full bandwidth on each port -- I have Intel ones that do not, so caveat emptor;
  • you can spend a lot of time and effort testing the various LAGG options; and
  • these days 2.5 Gbit Ethernet is cheap and common, and 10 Gbit is affordable.
If I were starting again, I would forget about LAGG (except where I wanted redundant link fail-over), trunk at 10 Gbit, and get 2.5 Gbit NICs for the hosts. That would save a lot of switch ports, cables, and setup time.
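For that fail-over case, a minimal rc.conf sketch (the interface names are assumed; with laggproto failover, traffic uses the first laggport listed and falls back to the next when its link drops):

  cloned_interfaces="lagg0"
  ifconfig_igb0="up"
  ifconfig_igb1="up"
  ifconfig_lagg0="laggproto failover laggport igb0 laggport igb1 DHCP"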
 