[Solved] bhyve network performance is horrible

Hi,

I have FreeBSD installed on a virtual private server and have been using bhyve with vm-bhyve, but the guest OS always gets terrible internet speed (avg. ~2 Mbps) compared to the host (avg. ~1 Gbps). I have tried Debian, Alpine, and Fedora CoreOS, and they all get terrible network speed. My topology uses NAT (because I only have one public IP address).

Code:
| Host       |           | vm-public       |            | guest OS        |
| a.b.c.d/24 |-----------| 10.100.200.1/24 |------------| 10.100.200.2/24 |

My guest configuration file for vm-bhyve is below:

Code:
loader="uefi-custom"
debug="yes"
cpu=2
memory=1G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="7764cc2f-09c6-11ef-9d5c-00163c9bc5e7"
network0_mac="58:9c:fc:0a:ce:2d"
graphics="yes"
#graphics_port="5999"
#graphics_listen="0.0.0.0"
graphics_res="1600x900"
graphics_wait="auto"
xhci_mouse="yes"
#bhyve_options="-A -s 20,hda,play=/dev/dsp4,rec=/dev/dsp7"

Here is my ifconfig

Code:
vtnet0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
    options=4c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,TXCSUM_IPV6>
    ether 00:16:3c:9b:c5:e7
    inet a.b.c.d netmask 0xffffff00 broadcast a.b.c.e
    inet6 fe80::216:3cff:fe9b:c5e7%vtnet0 prefixlen 64 scopeid 0x1
    inet6 a:b:c::d:e prefixlen 64
    media: Ethernet autoselect (10Gbase-T <full-duplex>)
    status: active
    nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lo0: flags=1008049<UP,LOOPBACK,RUNNING,MULTICAST,LOWER_UP> metric 0 mtu 16384
    options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
    inet 127.0.0.1 netmask 0xff000000
    inet6 ::1 prefixlen 128
    inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
    groups: lo
    nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
bastille0: flags=8008<LOOPBACK,MULTICAST> metric 0 mtu 16384
    options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
    groups: lo
    nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
vm-public: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
    options=0
    ether 22:c8:de:6d:3d:e8
    inet 10.100.200.1 netmask 0xffffff00 broadcast 10.100.200.255
    id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
    maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
    root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
    member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
            ifmaxaddr 0 port 5 priority 128 path cost 2000000
    groups: bridge vm-switch viid-4c918@
    nd6 options=9<PERFORMNUD,IFDISABLED>
tap0: flags=1008943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
    description: vmnet/containeros/0/public
    options=80000<LINKSTATE>
    ether 58:9c:fc:00:28:70
    groups: tap vm-port
    media: Ethernet 1000baseT <full-duplex>
    status: active
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    Opened by PID 89630

and here is my /etc/pf.conf

Code:
ext_if="vtnet0"

set block-policy return
scrub in on $ext_if all fragment reassemble
set skip on lo

table <jails> persist
nat on $ext_if from <jails> to any -> ($ext_if:0)
rdr-anchor "rdr/*"

nat on $ext_if from vm-public:network to any -> ($ext_if:0)
rdr pass on $ext_if proto tcp from any to $ext_if port 9000 -> 10.100.200.2 port 9000
rdr pass on $ext_if proto tcp from any to $ext_if port 40022 -> 10.100.200.2 port 22
#block in all
pass inet6 all
pass out quick keep state
antispoof for $ext_if inet
pass in inet proto tcp from any to any port ssh flags S/SA keep state
pass in inet proto tcp from any to any port 5900 flags S/SA keep state

I think the culprit in my case is that my host network interface is itself a virtio network device. Here is my pciconf -lv output:

Code:
virtio_pci0@pci0:0:3:0:    class=0x020000 rev=0x00 hdr=0x00 vendor=0x1af4 device=0x1000 subvendor=0x1af4 subdevice=0x0001
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio network device'
    class      = network
    subclass   = ethernet
virtio_pci1@pci0:0:4:0:    class=0x010000 rev=0x00 hdr=0x00 vendor=0x1af4 device=0x1001 subvendor=0x1af4 subdevice=0x0002
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio block device'
    class      = mass storage
    subclass   = SCSI
virtio_pci2@pci0:0:5:0:    class=0x00ff00 rev=0x00 hdr=0x00 vendor=0x1af4 device=0x1002 subvendor=0x1af4 subdevice=0x0005
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio memory balloon'
    class      = old

Any solution for my case? Thanks
 
This makes me think of these sorts of posts:


But it might be a complete red herring!
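
One quick way to narrow it down might be an iperf3 throughput test straight between the host and the guest, bypassing NAT entirely; if that leg is already slow, the problem is in the virtio/tap path rather than in pf. (This assumes iperf3 is installed on both sides and uses the addresses from your diagram.)

Code:
# on the guest (10.100.200.2)
iperf3 -s

# on the host, test towards the guest over the bridge
iperf3 -c 10.100.200.2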
 
Thanks.

Solved by disabling TCP segmentation offload and checksum offload:

Code:
ifconfig xxx -txcsum -rxcsum -tso -lro
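
To keep this across reboots, the same flags can be appended to the interface line in /etc/rc.conf. A minimal sketch, assuming vtnet0 is the host NIC; keep whatever address configuration you already have on that line:

Code:
# /etc/rc.conf
# append the offload flags to the existing ifconfig_vtnet0 entry
ifconfig_vtnet0="inet a.b.c.d/24 -rxcsum -txcsum -tso -lro"

Putting the flags in rc.conf keeps them with the rest of the interface configuration instead of relying on a separate startup script.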
 