Mixed MTUs on different NIC interfaces on the same bare-metal border firewall server

Dear FreeBSD Gurus and Networking Engineers!

SETUP
  1. FreeBSD (14+) on a bare-metal dual-CPU server with several NICs (both 1G copper and SFP/SFP+ interfaces);
  2. separate LANs for internal monitoring and for everything else (DB cluster, backend servers, branch office, etc.), each on physically separate NIC hardware interfaces;
QUESTION
How does a different MTU size impact (or potentially improve) the network load on each of the interfaces:
  1. Jumbo MTU 9000 frames on the database server cluster (so we are able to set MTU 9000 and offloading in pfSense);
  2. MTU 1440 (or even less) for the monitoring LAN, since Syslog and SNMP packets are typically small (100-500 bytes) (so we are able to set MTU 1440 and offloading);
  3. MTU 1500 for all other LANs (so we are able to set MTU 1500 and offloading);
How do mixed MTUs impact FreeBSD's overall performance and throughput, given that this server is a BORDER firewall?
(PCI bus pressure, RAM pressure, etc.)
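For reference, FreeBSD sets the MTU per interface, so a mix like the one described is just a few `ifconfig` arguments. A minimal /etc/rc.conf sketch (the interface names `ix0`, `igb0`, `igb1` and the addresses are hypothetical, not taken from this thread):

```shell
# /etc/rc.conf fragment -- hypothetical interface names and addresses.
# ix0:  SFP+ link to the database cluster, jumbo frames
ifconfig_ix0="inet 10.0.10.1/24 mtu 9000"
# igb0: monitoring LAN, reduced MTU as proposed above
ifconfig_igb0="inet 10.0.20.1/24 mtu 1440"
# igb1: everything else, standard Ethernet MTU
ifconfig_igb1="inet 10.0.30.1/24 mtu 1500"
```

Each interface's MTU is independent; the forwarding path only has to be able to handle the largest one configured.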


Extremely happy to read your opinions and suggestions, especially if you have experience in enterprise / high-load data-center environments!

Thank you so much for your time. Have a nice sunny day, and Merry Christmas to all of you and your families!
 
I really don't see the advantage of running different MTU sizes.

Either use jumbo frames or don't. Differing MTUs sound like too much work, and for what exact reason?
 
I could understand it if the monitoring hardware did not support a 9K MTU, like an Arm board, but beyond hardware limitations I would go all jumbo frames. The cost is small, even for your SNMP example.
 
I really don't see the advantage of running different MTU sizes.
Working with huge DBs is ABSOLUTELY different from SNMP monitoring, for example.
Either use jumbo frames or don't. Differing MTUs sound like too much work, and for what exact reason?
MTU 9000 for SNMP (even SNMPv3), which has an average packet size of 250-400 bytes, or for Prometheus metrics, which are 400-800 bytes? :)
 
I could understand it if the monitoring hardware did not support a 9K MTU, like an Arm board, but beyond hardware limitations I would go all jumbo frames. The cost is small, even for your SNMP example.
I definitely don't agree! ;)

And the reason why is (for example): in the case of SNMP monitoring, you need to receive SNMP packets as fast as possible, without them accumulating in FreeBSD's RX/TX buffers or the NIC's buffers before being sent on.
This is extremely important, especially in a high-load environment.

Where am I wrong?
 
Yes, but what load does the SNMP or monitoring traffic actually put on your network?
So a few unneeded oversized packets hurt nothing.
That is my line of thought. Are your switches maxed out? Do you have to tune for every last ounce?

Also, have you benchmarked with offloading? It sounds dreamy, but results may vary widely depending on the actual load type.
 
MTU 9000 for SNMP (even SNMPv3), which has an average packet size of 250-400 bytes, or for Prometheus metrics, which are 400-800 bytes?
MTU is the maximum size of the data in a packet. Nothing changes for packets that are smaller. Even the standard Ethernet MTU of 1500 would be 2-3 times larger than your SNMP packets.

For large data transfers this would matter, as you would need more packets at a lower MTU to transfer the same amount of data. To transfer 1 MB you need ~700 packets at MTU=1500 and only ~116 at MTU=9000, so there's less overhead. With SNMP the payload is still going to be 250-400 bytes and it will all fit into one packet. That packet will be the size of the data (plus the IP headers etc., of course).
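Those packet counts can be sanity-checked with a quick sketch (assuming a 40-byte TCP/IPv4 header per packet, which is why the exact numbers land slightly under the rough figures above):

```shell
# Packets needed to move 1 MB of payload at different MTUs,
# assuming 40 bytes of TCP/IPv4 header per packet (MSS = MTU - 40).
size=1000000
for mtu in 1500 9000; do
  mss=$((mtu - 40))
  pkts=$(( (size + mss - 1) / mss ))   # ceiling division
  echo "MTU $mtu: $pkts packets"
done
```

Either way, the ratio is about 6x fewer packets (and 6x fewer per-packet interrupts and header bytes) at MTU 9000 for bulk transfers, and exactly zero difference for a 400-byte SNMP datagram.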
 
I have a similar case.
I have two firewalls (CARP) with 16 internal VLANs, MTU 9000 on all of them, where I send all data across two QSFP+ links in LACP (going to two switches in MLAG). Large files, public web, DB, backups, etc. go there on all VLANs. The firewalls have a mgmt port with standard MTU to a switch with 1500 MTU. Before I upgraded the internet connection I had a standard 1 Gbps Ethernet connection (on both firewalls) with standard MTU. I had no problems, and the bottleneck was the CPUs (for internal traffic), not the 2x40 Gbps (pushing data on both connectors) or the 1 Gbps (apart from it being only 1 Gbps of internet).

I upgraded to 10 Gbps internet fiber, so I now run the two (new, bigger-CPU) firewalls behind another FreeBSD box in front, which has MTU 9000 via two SFP+ links to the firewalls and one SFP+ with standard 1500 MTU to the ISP.

I can push a lot of data inside my network with almost no RTS (2xQSFP+ = 8x10 Gbps, depending on the traffic). The "problem" is the ISP's 1500 MTU... but I don't send/connect at over 6.3 Gbps in one stream (there is my bottleneck with 1500 MTU), though I can push 10 Gbps to the internet in many streams.
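As an aside, the per-frame cost that a 1500-byte MTU adds to a bulk stream can be sketched with simple arithmetic (a rough model only: it assumes full-size TCP segments, 40 bytes of TCP/IPv4 headers, and 38 bytes of Ethernet framing per packet, and ignores TCP windowing and per-packet CPU limits):

```shell
# Rough wire efficiency of a full-size TCP segment:
# usable payload = MTU - 40 (TCP/IPv4 headers)
# on-wire cost   = MTU + 38 (14 Ethernet header + 4 FCS
#                            + 8 preamble + 12 inter-frame gap)
for mtu in 1500 9000; do
  awk -v mtu="$mtu" 'BEGIN {
    printf "MTU %d: %.1f%% of line rate usable\n",
           mtu, 100 * (mtu - 40) / (mtu + 38)
  }'
done
```

So framing overhead alone costs roughly 5% at MTU 1500 versus roughly 1% at MTU 9000; the larger part of a single-stream shortfall typically comes from per-packet CPU work and TCP windowing rather than the headers themselves.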

I have run this for years with no problems. So mixing MTUs is no problem, depending on WHERE you do it!
All my internal NICs have an MTU of 9000, as does every VM, LAGG, etc. My MGMT is only 1 Gbps Ethernet.
I only change to a smaller MTU on VPNs if I notice some drops.

So in your case, for nr. 2, I would go with the standard MTU; no need to change it (I run SNMP and other monitoring at 1500 without problems), it's the standard.
1. 9000 MTU
2. 1500 MTU
3. 1500 MTU
 