Edit: link to workaround
I have a (to me) strange issue with either the drivers in FreeBSD or my assumptions about PCIe and the PCI hierarchy.
Using Qemu, I have the following structure:
Code:
qemu-system-x86_64 \
-cpu host \
-enable-kvm \
-machine q35,accel=kvm \
-device intel-iommu \
-m 4096 \
-display none \
-qmp unix:/tmp/pfsense.qmp,server,nowait \
-monitor unix:/tmp/pfsense.monitor,server,nowait \
-drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/OVMF_CODE.fd \
-drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/OVMF_VARS.fd \
-spice port=5930,disable-ticketing \
-device pxb-pcie,id=pcie.1,bus_nr=1 \
\
-device pcie-root-port,bus=pcie.1,id=root_port1,slot=0,chassis=0x1,addr=0x5 \
-device e1000,mac=00:00:00:41:7f:01,id=network0.0,netdev=network0,bus=root_port1,bootindex=5 \
-netdev tap,ifname=tap1,id=network0,script=no,downscript=no \
\
-device pcie-root-port,bus=pcie.1,id=root_port2,slot=1,chassis=0x2,addr=0x10 \
-device e1000,mac=00:00:00:41:7f:02,id=network1.0,netdev=network1,bus=root_port2,bootindex=6 \
-netdev tap,ifname=tap2,id=network1,script=no,downscript=no \
\
-device ide-hd,drive=hdd0,bus=ide.0,id=scsi0,bootindex=1 \
-drive file=./pfsense.qcow2,if=none,format=qcow2,discard=unmap,aio=native,cache=none,id=hdd0 \
-device ide-cd,drive=cdrom0,bus=ide.1,id=scsi1,bootindex=2 \
-drive file=./pfSense-CE-2.5.0-DEVELOPMENT-amd64-20201221-0250.iso,media=cdrom,if=none,format=raw,cache=none,id=cdrom0
Tl;dr would be:
Code:
pcie.0
|-- pcie.1 (pxb-pcie)
|   |-- root_port1
|   |   |-- tap1 (e1000)
|   |-- root_port2
|   |   |-- tap2 (e1000)
|-- AHCI
|   |-- hdd0
|   |-- cdrom0
The issue presents itself with:
Code:
interrupt storm detected on "irq10:"; throttling interrupt source
interrupt storm detected on "irq10:"; throttling interrupt source
interrupt storm detected on "irq10:"; throttling interrupt source
interrupt storm detected on "irq10:"; throttling interrupt source
Running vmstat -i gives me:
Code:
interrupt                    total       rate
irq1: atkbd0                    72          0
irq10: em0: irq0+++        2256461       1826
irq16: ahci                  13151         11
cpu0: timer                  59774         48
Total                      2329458       1885
Looking at dmesg, it shows:
Code:
pcib2: <PCI-PCI bridge> mem 0xc1641000-0xc1641fff irq 10 at device 5.0 on pci1
em0: <Intel(R) PRO/1000 Network Connection> port 0x8000-0x803f mem 0xc1400000-0xc141ffff irq 10 at device 0.0 on pci2
pcib3: <PCI-PCI bridge> mem 0xc1640000-0xc1640fff irq 10 at device 16.0 on pci1
em1: <Intel(R) PRO/1000 Network Connection> port 0x7000-0x703f mem 0xc1200000-0xc121ffff irq 10 at device 0.0 on pci3
And pciconf -lv shows:
Code:
em0@pci0:2:0:0: class=0x020000 card=0x11001af4 chip=0x100e8086 rev=0x03 hdr=0x00
em1@pci0:3:0:0: class=0x020000 card=0x11001af4 chip=0x100e8086 rev=0x03 hdr=0x00
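As far as I understand, pciconf can also dump each device's capability list, which should at least tell me whether the emulated NICs advertise MSI/MSI-X at all (I'm assuming the em0 selector works here):
Code:
# List the PCI capability chain for em0; an MSI or MSI-X capability here
# would mean the NIC could use message-signaled interrupts instead of
# the shared legacy IRQ line.
pciconf -lc em0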
I'm out of my depth here, and no one I know can tell me why or how the NICs end up on the same IRQ.
Does anyone know where to begin debugging this, or what's going on? I would prefer to have multiple PCIe network cards on the same PCIe bus without having to reconfigure the machine manually each time, so I'm guessing there are some Qemu specifics I could pass so that FreeBSD detects and maps the devices correctly.
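For example (purely a guess on my part, not something I've verified): the plain e1000 model seems to be INTx-only, so maybe a NIC model that exposes MSI-X, such as e1000e or virtio-net-pci, would sidestep the shared legacy line entirely, something along these lines:
Code:
# Hypothetical variant of the first NIC above, swapping e1000 for e1000e,
# which (as far as I know) advertises MSI-X and therefore shouldn't need
# the shared legacy INTx line at all.
-device pcie-root-port,bus=pcie.1,id=root_port1,slot=0,chassis=0x1,addr=0x5 \
-device e1000e,mac=00:00:00:41:7f:01,id=network0.0,netdev=network0,bus=root_port1,bootindex=5 \
-netdev tap,ifname=tap1,id=network0,script=no,downscript=no \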
I've also tried x3130-upstream devices to create a switch, which caused the same issue (the attempt looked roughly like the sketch below). Having the cards on separate PCIe buses works, for instance with card 1 on pcie.0 and card 2 on pcie.1, but since I can only add one additional root bus, that's not an ideal solution.
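To be concrete, the switch attempt looked roughly like this (the ids and chassis/slot numbers below are placeholders, not exactly what I ran):
Code:
# Roughly the x3130 layout I tried: one upstream port behind a single
# root port, two downstream ports, and one NIC behind each downstream
# port (ids, chassis and slot numbers are placeholders).
-device pcie-root-port,bus=pcie.1,id=root_port1,slot=0,chassis=0x1,addr=0x5 \
-device x3130-upstream,id=upstream1,bus=root_port1 \
-device xio3130-downstream,id=downstream1,bus=upstream1,chassis=0x3,slot=0 \
-device e1000,mac=00:00:00:41:7f:01,id=network0.0,netdev=network0,bus=downstream1 \
-device xio3130-downstream,id=downstream2,bus=upstream1,chassis=0x4,slot=1 \
-device e1000,mac=00:00:00:41:7f:02,id=network1.0,netdev=network1,bus=downstream2 \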
Here's an overview of an earlier version with just one NIC, to get a visual sense of how everything is tied together (or the goal, at least):
Thanks in advance!