bhyve: How can I improve Windows Server 2019 performance?

I have got my hands on a dual-CPU (Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz + 128GB RAM) Dell Precision 7280 workstation and decided to try bhyve on it. After much struggling and trial and error, I managed to install Windows Server 2019.

This is my configuration for the VM:

Code:
uuid="..."

loader="uefi"

cpu="32"
cpu_sockets="1"
cpu_cores="32"

memory="64GB"

passthru0="24/0/0=3:0" # Intel Ethernet Card
passthru1="158/0/0=3:1" # NVME Disk

graphics="yes"
graphics_port="5902"
graphics_res="1280x720p"
graphics_wait="yes"
xhci_mouse="yes"

As you can see, the NIC and the NVMe disk have been passed through successfully to the VM.

My problem is that I am not happy with the performance. What can be done about that?
 
My problem is that I am not happy with the performance.
Define performance. What? CPU power? I/O? Network? And how did you measure it? Also note that the graphics run on an emulated "VGA" controller, the EFI framebuffer isn't super fast either, and neither is the VNC connection. So it may 'feel' sluggish, but that's only the GUI.
 
Define performance. What? CPU power? I/O? Network? And how did you measure it? Also note that the graphics run on an emulated "VGA" controller, the EFI framebuffer isn't super fast either, and neither is the VNC connection. So it may 'feel' sluggish, but that's only the GUI.
I'm sorry, I forgot to mention that I am connected to the said VM via RDP; it feels much more sluggish over VNC, but that does not bother me. It just is not responsive. I was hoping that, with such resources at its command, it would give me great performance. I am planning to create two such VMs on this hardware. I have set up Windows 10 machines on VMware 7.0.3U with fewer resources and they work great.

Thinking about it again, I remember those Windows 10 VMs have VMware display adapters, and neither CPU power, nor I/O, nor the network seemed to be the issue with the bhyve VM. Can I create a virtual display adapter of some kind? I was planning to pass through a real graphics card anyway.
 
I have set up Windows 10 machines on VMware 7.0.3U with fewer resources and they work great.
On the same machine? What type of disks are you using? With RAID? Maybe Windows 10 just has better NVMe drivers. Try using a disk image file instead of passing through the disk, maybe.
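For illustration, a minimal sketch of what that could look like, assuming the config above is vm-bhyve (the syntax suggests it); the path, size and disk type are placeholders, not something from this thread:

Code:
# create a plain image file in the VM's directory (path/size are examples)
truncate -s 200G /vm/winserver2019/disk0.img

# then in the VM config, instead of the NVMe passthrough:
disk0_type="ahci-hd"   # or "virtio-blk" / "nvme"
disk0_dev="file"
disk0_name="disk0.img"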
 
Instead of handing the whole NVMe disk over to Windows, why not use a ZFS pool on NVMe (mirrored) and then use file-based disk images via the NVMe driver for the VM?
This is *by far* the fastest variant for storage on bhyve, and the VM gains the full advantage of ZFS caching (and of course snapshots and everything else that maintains data integrity, which NTFS doesn't give a damn about...).
Using zvols trades some I/O performance for more flexibility and for actually telling ZFS that this is virtual block storage (which it doesn't know when using files). Klara Systems did some benchmarks [1] about that; I don't know if this still holds for later ZFS versions though.
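A rough sketch of that layout, assuming two spare NVMe devices and vm-bhyve; the pool, dataset and device names are made up for the example:

Code:
# mirrored pool on the two NVMe devices (device names are placeholders)
zpool create -o ashift=12 nvmetank mirror nda0 nda1
zfs create nvmetank/vm

# file-based image on the dataset, attached via bhyve's NVMe emulation:
disk0_type="nvme"
disk0_dev="file"
disk0_name="disk0.img"

# zvol variant mentioned above (more flexibility, some I/O cost):
# zfs create -V 200G nvmetank/vm/winserver2019-disk0
# disk0_dev="zvol"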

We're running two 2019 servers that way and they are *a lot* more performant than back on KVM with SmartOS (when the VMs were deployed, installing Windows on bhyve didn't work reliably on SmartOS, hence we had to go with KVM).
You could even set 'sync=disabled' on the datasets holding the Windows VMs to speed up storage performance even more (because Windows issues *tons* of unnecessary sync writes when run in a VM).
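For illustration, assuming the dataset layout from the sketch above (names are placeholders), that would be something like:

Code:
# trade a few seconds of crash consistency for noticeably faster writes
zfs set sync=disabled nvmetank/vm/winserver2019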
You should also specifically define sockets=1 and only the appropriate number of vCPUs - Windows tends to behave/perform badly with multiple virtual sockets in VMs (no idea about bare metal; IIRC it was some problem with their scheduler, so I expect it to be bad on bare metal too...).

[1] https://klarasystems.com/articles/virtualization-showdown-freebsd-bhyve-linux-kvm/
 
Instead of handing the whole NVMe disk over to Windows, why not use a ZFS pool on NVMe (mirrored) and then use file-based disk images via the NVMe driver for the VM?
This is *by far* the fastest variant for storage on bhyve, and the VM gains the full advantage of ZFS caching (and of course snapshots and everything else that maintains data integrity, which NTFS doesn't give a damn about...).
I can fully second the argument for a ZFS pool as the storage backend. However, in my recent tests on an AMD EPYC system, using virtio in a Windows VM was on average 20% faster than using NVMe. We also compared them against Linux + KVM, which was also significantly (~35%) faster, so we had to migrate some of our high-load Windows systems to Linux KVM.
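For reference, with vm-bhyve the two emulations being compared here differ only in the disk type; this is a hedged sketch (the image name is a placeholder), and as far as I know the Windows guest needs the virtio storage driver (e.g. from the Fedora virtio-win ISO) installed before switching:

Code:
disk0_dev="file"
disk0_name="disk0.img"
disk0_type="nvme"         # bhyve NVMe emulation
#disk0_type="virtio-blk"  # virtio block device; ~20% faster in the tests above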
 
I can fully second the argument for a ZFS pool as the storage backend. However, in my recent tests on an AMD EPYC system, using virtio in a Windows VM was on average 20% faster than using NVMe. We also compared them against Linux + KVM, which was also significantly (~35%) faster, so we had to migrate some of our high-load Windows systems to Linux KVM.
Interesting, this is the complete opposite of my observations (back then).
Did you try adjusting the ZFS blocksize (ashift) on the NVMe pool to the emulated or even physical blocksize, and likewise the record-/volblocksize of the dataset/zvol holding the VM image (file), which can go up to 1M or even larger? This yields *immense* speedups. The only caveat might be that Windows *still* doesn't work well with anything >512b - at least we had horrible performance and very frequent crashes and data corruption with >4k blocks, and even some with 4k blocks and MSSQL running on the 2019 server.
A 512b blocksize for the VM of course poses an immense penalty, yet it was still the fastest solution for us in terms of IOPS. But the main takeaway we got: if you want proper performance from a VM, one should simply avoid Windows at all costs...
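A hedged sketch of those knobs, reusing the placeholder pool/dataset names from above; ashift is fixed at pool creation time, and the emulated sector size shown for bhyve's NVMe device is an assumption about the setup, not something taken from this thread:

Code:
# ashift can only be set at pool creation; check what the pool uses:
zpool get ashift nvmetank
# match the dataset recordsize (or a zvol's volblocksize) to the guest I/O:
zfs set recordsize=64K nvmetank/vm
zfs create -V 200G -o volblocksize=64K nvmetank/vm/winserver2019-disk0
# bhyve's NVMe emulation also accepts a sector size, e.g. when invoking
# bhyve directly (4096 instead of the 512b discussed above):
# -s 4,nvme,/nvmetank/vm/winserver2019/disk0.img,sectsz=4096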
 
Interesting, this is the complete opposite of my observations (back then).
...
But the main takeaway we got: if you want proper performance from a VM, one should simply avoid Windows at all costs...
We tried quite a lot of different combinations of settings, also changed the disks, and finally even the machine (although still an AMD CPU, it was a Ryzen, but I don't remember the exact version), but even after those tests it was obvious that the difference was roughly the same regardless of the machine. On Linux the results fluctuated a bit more; however, once we found a few options where performance was maximized, we chose to go with Linux + KVM.

I can second your takeaway as well, plus I would add that if you want stability you should avoid Windows too ... but I won't go into detail here; this is definitely the wrong forum for making fun of that, although I could tell you tons of funny stories (who can't?).
 