I'm running bhyve VMs that show very bad I/O performance, and I'm wondering what factors may be causing this.
A couple details:
- I'm testing with a simple
dd if=/dev/random of=test bs=1M status=progress
and yes, that's not very scientific. But since I'm seeing I/O performance many times worse than expected, I assume bonnie or other tools won't reach any better speeds either (exact invocation below the list).
- host is 12.3-RELEASE-p5 with ample RAM
- zfs raidz2 with around 250MB/s I/O; disks are 4k native and aligned correctly (ashift=12); I/O on the host is consistently fine, none of the lags that show up on the vm side
- guest is 13.1-RELEASE, limited to 1G RAM to simplify testing, so I'm looking at "disk" activity rather than cache performance; the guest has two disks, one for the OS and one 8GB test disk. Both are backed by a zfs zvol and formatted with UFS by the guest; the OS disk obviously runs with gpt, the test disk is plain UFS (zvol details below the list).
- I've tried different storage kinds: virtio-blk, ahci-hd and nvme; I'm getting between 35MB/s (ahci) and 60MB/s (nvme)
- I've tried UEFI vs non-UEFI boot, tried the
/usr/share/examples/bhyve/vmrun.sh
script instead of my own, but no improvement there
- watching I/O performance on the host via
zpool iostat 1
, I'm seeing "breaks" where the host kind of idles while the guest is doing its writes - but I'm not sure what to make of it
- I attempted to switch sectorsize with virtio-blk, i.e. I set it to 4096 to reflect the 4k block size. The result was that the vm became more responsive, but I/O got slower (see the slot examples below the list).
- when running
dd
in the guest, I'm seeing "waiting" times, i.e. dd shows its timer at 2s, then it sits there, and about 30s later it jumps to 32s, sits again, and updates again after, say, another 6 seconds. It's like I/O is getting out to the host in "bursts"?
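
For completeness, here's the exact kind of run I've been doing - a minimal sketch, assuming a FreeBSD 13 dd (which supports conv=fsync) and a made-up mount point for the test disk:

# inside the guest, on the UFS test disk (mount point is made up)
cd /mnt/test
# fixed count so runs are comparable; conv=fsync flushes at the end
# so the guest's cache doesn't flatter the number
dd if=/dev/random of=test bs=1M count=4096 conv=fsync status=progress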
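
In case the zvol layout matters, this is roughly how the test disk is set up on the host. Pool/dataset names and the volblocksize are placeholders (8K is just the default; I haven't ruled out a volblocksize vs. UFS block size mismatch myself):

# host side: test zvol, roughly as created (names are placeholders)
zfs create -V 8G -o volblocksize=8K tank/vm/test-disk
# properties that seem alignment-relevant
zfs get volblocksize,volmode tank/vm/test-disk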
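
And these are the disk slots roughly as I've been passing them to bhyve; slot number and zvol path are placeholders, and the rest of the command line is elided:

# virtio-blk, default sector size
bhyve ... -s 4,virtio-blk,/dev/zvol/tank/vm/test-disk ...
# virtio-blk with sectorsize set to 4k - the variant that got
# more responsive but slower
bhyve ... -s 4,virtio-blk,/dev/zvol/tank/vm/test-disk,sectorsize=4096 ...
# nvme, the fastest so far at ~60MB/s
bhyve ... -s 4,nvme,/dev/zvol/tank/vm/test-disk ...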
I'd expected to get at least 100MB/s, but I'm hitting a ceiling at about 25% of host performance. What's a reference value I should be able to expect?
Anyone got any ideas what I can tweak to improve the I/O performance? Short of ripping out the backing zfs storage, obviously.