Experience from bhyve (FreeBSD 14.1) GPU passthrough with a Windows 10 guest

I wanted to share my experiences with bhyve GPU passthrough to a Windows 10 guest. Many tutorials covered most of the steps I needed to get it working, but there were a few details I wanted to write down and share with the community. I hope this post helps others struggling to get GPU passthrough working.

Please feel free to share your experiences and comment on my configuration.

Hardware: Lenovo ThinkCentre M910 Tiny
CPU: Intel i7-7700T
GPU: Intel HD Graphics 630
OS: FreeBSD 14.1-RELEASE-p2 (clean install)

Part I: Host preparation

Identify the GPU device that will be passed to the VM:
pciconf -lv
vga@pci0:0:2:0: class=0x030000 rev=0x04 hdr=0x00 vendor=0x8086 device=0x5912 subvendor=0x17aa subdevice=0x310b
vendor = 'Intel Corporation'
device = 'HD Graphics 630'
class = display
subclass = VGA

We see that vga@pci0:0:2:0 is the GPU and it uses PCI bus/slot/function 0/2/0.
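Tip: if the pciconf output is long, you can narrow it down to the display device like this (just one convenient variant of the same command):
pciconf -lv | grep -B 4 display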

The GPU needs to be detached from the host using ppt assignment. This can be done with the following entries in /boot/loader.conf (using the PCI bus/slot/function format above):
pptdevs="0/2/0"
vmm_load="YES"

From my experience, vmm_load="YES" must be included in loader.conf; otherwise, the GPU won't detach from the host properly. In other words, it is NOT enough to have vm_enable="YES" in /etc/rc.conf (which also loads the vmm.ko module, but at a later stage of the boot process).

Reboot the host.

Verify that ppt is working as expected (vga0 has changed to ppt0):
pciconf -lv
ppt0@pci0:0:2:0: class=0x030000 rev=0x04 hdr=0x00 vendor=0x8086 device=0x5912 subvendor=0x17aa subdevice=0x310b
vendor = 'Intel Corporation'
device = 'HD Graphics 630'
class = display
subclass = VGA
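As an extra check (my own habit, not strictly required), you can confirm that vmm was loaded by the loader and that the device is claimed by ppt:
kldstat | grep vmm
pciconf -l | grep ^ppt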

Now the host GPU is ready for bhyve passthrough.

Part II: Virtual machine configuration

I am using vm-bhyve to manage virtual machines, but these steps can also be followed without vm-bhyve.

Passthrough can be enabled with the following entry in the vm-bhyve virtual machine configuration:
passthru0="0/2/0=2:0"

The above vm-bhyve statement translates into the following bhyve argument (which can also be seen from vm-bhyve logs):
-s 2:0,passthru,0/2/0

Note that the above statement specifies that PCI device 0/2/0 is assigned to guest slot 2:0. Without the 2:0 slot statement, I wasn't able to get the GPU detected by the VM. I got the idea to test 2:0 from https://infosys.beckhoff.com/englis...cat_bsd/12607678219.html&id=17880199243163018

However, I am still wondering how to identify the correct slot for other GPUs. Ideas?

Passthrough requires memory wiring (the bhyve -S flag). However, vm-bhyve adds the -S flag automatically when the VM config includes a passthru statement. If you are not using vm-bhyve, -S must be included manually, as in the sketch below.
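For reference, here is a minimal sketch of a raw bhyve invocation without vm-bhyve; the CPU count, memory size, disk path, and tap device are placeholders to adjust for your system, and only the passthru slot mirrors the config above:
bhyve -c 4 -m 8G -w -H -S \
    -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
    -s 0,hostbridge \
    -s 2:0,passthru,0/2/0 \
    -s 4:0,nvme,/path/to/disk0.img \
    -s 5:0,virtio-net,tap0 \
    -s 31,lpc -l com1,stdio \
    win10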

Start the vm.

Part III: Update GPU drivers to get rid of error 43

I discovered that Windows detected the GPU and installed drivers automatically. However, after a few seconds, Windows Device Manager returned error 43 for the device.

It seems that the stock Windows drivers for the GPU are quite old (2022). Therefore, I installed newer drivers (dated 5/2024) from the Intel website using the Intel installer.

Once the updated drivers have been installed, power off the VM and reboot the host.

Once the host comes back online, start the vm.

Now, Windows detects the GPU properly and does not display any error messages. 3D acceleration also seems to be working fine.

I will be testing the system under heavy load to check how it works. I will report back if I encounter any further issues.
 
However, I am still wondering how to identify the correct slot for other GPUs. Ideas?
I may be totally wrong, but I think there are no “correct” slots, as long as there are no conflicts with the numbers chosen by vm-bhyve for other “-s” devices. Dunno whether it's smart enough in that regard. I believe it's safer to take higher numbers (your “2:0” looks somewhat risky to me).
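One way to see which slots vm-bhyve has already handed out (so a manually chosen passthru slot doesn't collide) is to check the generated device list in the guest log; the path below assumes a default vm-bhyve layout and is only an example:
Code:
grep 'bhyve devices' /zroot/bhyve/win10/vm-bhyve.log | tail -n 1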

It's interesting that you were able to pass through an integrated GPU; I'd have doubts associated with bus/slot/function 0/2/0.

My GPU has two functions, and I pass them through like this:
Code:
passthru0="10/0/0=8:0"
passthru1="10/0/1=8:1"

Code:
ppt1@pci0:10:0:0:    class=0x030000 rev=0xa1 hdr=0x00 vendor=0x10de device=0x220a subvendor=0x10de subdevice=0x1616
    vendor     = 'NVIDIA Corporation'
    device     = 'GA102 [GeForce RTX 3080 12GB]'
    class      = display
    subclass   = VGA
ppt2@pci0:10:0:1:    class=0x040300 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1aef subvendor=0x10de subdevice=0x1616
    vendor     = 'NVIDIA Corporation'
    device     = 'GA102 High Definition Audio Controller'
    class      = multimedia
    subclass   = HDA
 
Thanks for posting because I have a very similar setup but I'm struggling to get iGPU pass-through to work.

I have:
Hardware: Lenovo ThinkCentre M720q Tiny
CPU: Intel i7-9700T
GPU: Intel HD Graphics 630
OS: FreeBSD 14.1-RELEASE-p2 (clean install)

Bash:
# uname -a
FreeBSD ghost 14.1-RELEASE FreeBSD 14.1-RELEASE releng/14.1-n267679-10e31f0946d8 GENERIC amd64
# freebsd-version -kru
14.1-RELEASE
14.1-RELEASE
14.1-RELEASE-p2
# pkg info -I edk2-bhyve
edk2-bhyve-g202308_4           EDK2 Firmware for bhyve
#

I can get Windows 10 to start but I always encounter error Code 43 for the Intel(R) UHD Graphics 630 driver.
I've tried various versions of the driver, currently installed: v31.0.101.2128 dated 03/05/2024

[Screenshot: Win10 Device Manager]


vm-bhyve, with "debug" enabled, logs these args being passed to bhyve:

Code:
Jul 10 21:32:19:  [bhyve options: -c 4,sockets=1,cores=4 -m 16G -Hwl bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd -A -U ede8888c-b2ee-11ed-a596-e86a64db3554 -S]
Jul 10 21:32:19:  [bhyve devices: -s 0,hostbridge -s 31,lpc -s 4:0,nvme,/zroot/bhyve/win10/disk0.img -s 5:0,virtio-net,tap2,mac=58:9c:fc:00:a2:f4 -s 6:0,passthru,3/0/0 -s 2:0,passthru,0/2/0 -s 7:0,passthru,0/20/0 -s 8:0,fbuf,tcp=0.0.0.0:5900,w=1920,h=1080 -s 9:0,xhci,tablet]
Jul 10 21:32:19:  [bhyve console: -l com1,/dev/nmdm-win10.1A]

I had this working with Corvin-patched FreeBSD 13.2, but since upgrading to FreeBSD 14.1 it just doesn't work any more.

Could you post your vm-bhyve conf file and the version of your edk2-bhyve package, please?

Any ideas?
 
Interesting indeed. Have you tried starting the VM right after the host has been restarted? Also, it seems that I have a newer edk2-bhyve installed:

Bash:
uname -a
FreeBSD limpkin 14.1-RELEASE FreeBSD 14.1-RELEASE releng/14.1-n267679-10e31f0946d8 GENERIC amd64

freebsd-version -kru
14.1-RELEASE
14.1-RELEASE
14.1-RELEASE-p2

pkg info -I edk2-bhyve
edk2-bhyve-g202308_5           EDK2 Firmware for bhyve

Here is my vm-bhyve vm config:
Bash:
loader="uefi"
graphics="yes"
xhci_mouse="yes"

memory=16G

cpu=8
cpu_sockets=1
cpu_cores=8
cpu_threads=1

# put up to 8 disks on a single ahci controller.
# without this, adding a disk pushes the following network devices onto higher slot numbers,
# which causes windows to see them as a new interface
ahci_device_limit="8"

# ideally this should be changed to virtio-net and drivers installed in the guest
# e1000 works out-of-the-box
#network0_type="e1000"
network0_type="virtio-net"
network0_device="tap0"

disk0_type="nvme"
disk0_name="disk0.img"
disk0_dev="file"

disk1_name="disk1.img"
disk1_type="nvme"
disk1_dev="file"

disk2_name="disk2.img"
disk2_type="nvme"
disk2_dev="file" 

passthru0="0/2/0=2:0"

# windows expects the host to expose localtime by default, not UTC
utctime="no"

uuid="hidden from the post"
network0_mac="hidden from the post"

bhyve_options="-A -H -P"

It seems that we have exactly the same Intel GPU driver:
[Screenshot: installed Intel GPU driver version]
 
Thanks for the info.

I upgraded edk2-bhyve:
Code:
# pkg info -I edk2-bhyve
edk2-bhyve-g202308_5           EDK2 Firmware for bhyve
# pkg info -I vm-bhyve
vm-bhyve-1.5.0_1               Management system for bhyve virtual machines
#

My vm-bhyve config file now looks like this:
Code:
loader="uefi"
graphics="yes"
xhci_mouse="yes"

memory=16G
# Wired memory (bhyve "-S" flag) needed if using PCI pass-thru
# wired_memory=yes

cpu=4
cpu_sockets=1
cpu_cores=4
cpu_threads=1

# put up to 8 disks on a single ahci controller.
# without this, adding a disk pushes the following network devices onto higher slot numbers,
# which causes windows to see them as a new interface
ahci_device_limit="8"

# ideally this should be changed to virtio-net and drivers installed in the guest
# e1000 works out-of-the-box
#network0_type="e1000"
network0_type="virtio-net"
# network0_device="tap0"
network0_switch="public"
network0_mac="58:9c:fc:00:a2:f4"

disk0_type="nvme"
disk0_name="disk0.img"

passthru0="0/2/0=2:0"

# windows expects the host to expose localtime by default, not UTC
utctime="no"

debug="yes"
uuid="ede8888c-b2ee-11ed-a596-e86a64db3554"

bhyve_options="-A -H -P"


After restarting the host, vm-bhyve.log shows these args being passed to bhyve:
Code:
Jul 12 12:54:19:  [bhyve options: -c 4,sockets=1,cores=4,threads=1 -m 16G -Hwl bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd -A -H -P -U ede8888c-b2ee-11ed-a596-e86a64db3554 -S]
Jul 12 12:54:19:  [bhyve devices: -s 0,hostbridge -s 31,lpc -s 4:0,nvme,/zroot/bhyve/win10/disk0.img -s 5:0,virtio-net,tap0,mac=58:9c:fc:00:a2:f4 -s 2:0,passthru,0/2/0 -s 7:0,fbuf,tcp=0.0.0.0:5900 -s 8:0,xhci,tablet]
Jul 12 12:54:19:  [bhyve console: -l com1,/dev/nmdm-win10.1A]

Connected to Windows VM, logged in, which immediately triggered an "IRQL_NOT_LESS_OR_EQUAL" blue screen!

Restarted VM, logged in, still seeing this:
[Screenshot: Device Manager after restarting the VM]

Another difference seems to be these 4 broken COM ports?!

I guess next step is to create a brand new VM from scratch.
I really don't want to reinstall the FreeBSD host from scratch!
 

A totally fresh install of Windows 10 22H2 - same issues: both the Intel(R) UHD Graphics 630 "Code 43" and the 4 broken COM ports.
That suggests the cause is more likely on the FreeBSD side, or some strange hardware conflict?

More information for comparison, from inside the Windows VM:
[Screenshots: device resource views from inside the Windows VM]


Beginning to suspect OVMF / edk2-bhyve, especially after reading bug 274389:

 
Inspired by the closed bug report above, I checked out the 2023Q3 branch of freebsd-ports and built edk2-bhyve-g202202_10 for FreeBSD 14.1.

The broken COM ports have gone away, matching the experience of the poster at the end of bug 274389, but interestingly the memory addresses are wildly different too:
[Screenshots: device memory ranges with edk2-bhyve-g202202_10]



Also tried adding "fwcfg=qemu" when starting using raw bhyve command, per Corvin's slide during his presentation at EuroBSDCon 2023:
1720863908294.png

Also tried adding the "pcireg" options above.

Even tried installing Lenovo's Intel GPU driver, but just like all the other drivers, the device's properties page claims to be OK immediately after install, yet there's actually no video, and it's back to code 43 after a reboot.

Interesting that the memory ranges claimed immediately after driver install are contiguous and different:
[Screenshot: memory ranges immediately after driver install]



0x80000000 - 0x810FFFFFF is way past any of the memory ranges allocated post-reboot.
So I tried reducing the Windows VM memory from 16G to 2G (to match Corvin's presentation) and reinstalling the latest Intel driver - still no luck.

Maybe the missing piece is the Intel GOP Driver to reset/configure the iGPU during VM boot?
 
dmesg output:

Code:
---<<BOOT>>---
Copyright (c) 1992-2023 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 14.1-RELEASE releng/14.1-n267679-10e31f0946d8 GENERIC amd64
FreeBSD clang version 18.1.5 (https://github.com/llvm/llvm-project.git llvmorg-18.1.5-0-g617a15a9eac9)
VT(efifb): resolution 2560x1440
CPU: Intel(R) Core(TM) i7-9700T CPU @ 2.00GHz (2000.00-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x906ed  Family=0x6  Model=0x9e  Stepping=13
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x7ffafbff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,SDBG,FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,OSXSAVE,AVX,F16C,RDRAND>
  AMD Features=0x2c100800<SYSCALL,NX,Page1GB,RDTSCP,LM>
  AMD Features2=0x121<LAHF,ABM,Prefetch>
  Structured Extended Features=0x29c6fbf<FSGSBASE,TSCADJ,SGX,BMI1,HLE,AVX2,SMEP,BMI2,ERMS,INVPCID,RTM,NFPUSG,MPX,RDSEED,ADX,SMAP,CLFLUSHOPT,PROCTRACE>
  Structured Extended Features2=0x40000000<SGXLC>
  Structured Extended Features3=0xbc000e00<MCUOPT,MD_CLEAR,IBPB,STIBP,L1DFL,ARCH_CAP,SSBD>
  XSAVE Features=0xf<XSAVEOPT,XSAVEC,XINUSE,XSAVES>
  IA32_ARCH_CAPS=0x20a0cab<RDCL_NO,IBRS_ALL,SKIP_L1DFL_VME,MDS_NO,TSX_CTRL>
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
  TSC: P-state invariant, performance statistics
real memory  = 68717379584 (65534 MB)
avail memory = 66678468608 (63589 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table: <LENOVO TC-M1U  >
FreeBSD/SMP: Multiprocessor System Detected: 8 CPUs
FreeBSD/SMP: 1 package(s) x 8 core(s)
random: registering fast source Intel Secure Key RNG
random: fast provider: "Intel Secure Key RNG"
random: unblocking device.
ioapic0 <Version 2.0> irqs 0-119
Launching APs: 2 3 1 4 5 6 7
random: entropy device external interface
kbd1 at kbdmux0
efirtc0: <EFI Realtime Clock>
efirtc0: registered as a time-of-day clock, resolution 1.000000s
smbios0: <System Management BIOS> at iomem 0x9b8b4000-0x9b8b401e
smbios0: Version: 3.2, BCD Revision: 2.8
aesni0: <AES-CBC,AES-CCM,AES-GCM,AES-ICM,AES-XTS>
acpi0: <LENOVO TC-M1U>
acpi0: Power Button (fixed)
cpu0: <ACPI CPU> on acpi0
hpet0: <High Precision Event Timer> iomem 0xfed00000-0xfed003ff on acpi0
Timecounter "HPET" frequency 24000000 Hz quality 950
Event timer "HPET" frequency 24000000 Hz quality 550
atrtc1: <AT realtime clock> on acpi0
atrtc1: Warning: Couldn't map I/O.
atrtc1: registered as a time-of-day clock, resolution 1.000000s
Event timer "RTC" frequency 32768 Hz quality 0
attimer0: <AT timer> port 0x40-0x43,0x50-0x53 irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x1808-0x180b on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
pcib1: <ACPI PCI-PCI bridge> at device 1.0 on pci0
pci1: <ACPI PCI bus> on pcib1
nvme0: <Generic NVMe Device> mem 0xb1300000-0xb1303fff at device 0.0 on pci1
ppt0 port 0x3000-0x303f mem 0xb0000000-0xb0ffffff,0xa0000000-0xafffffff at device 2.0 on pci0

Interesting lines from above:
Code:
VT(efifb): resolution 2560x1440
ppt0 port 0x3000-0x303f mem 0xb0000000-0xb0ffffff,0xa0000000-0xafffffff at device 2.0 on pci0

The memory addresses assigned to the ppt device for the iGPU don't match any I've seen within the Windows VM. Should they?

vt seems to detect my monitor resolution even though I have this in loader.conf:
Code:
# Devices to pass through to VMs
# 0/2/0: Intel iGPU 630 UHD - using nullconsole to avoid host use
hw.vga.textmode=1
console="nullconsole"
# 3/0/0: Intel 8265 wifi & bluetooth
pptdevs="0/2/0 3/0/0"

# For byhve (must be after pptdevs)
vmm_load="YES"

Also checked that CSM is disabled in 'BIOS' settings.

Stumped!
 
This is the same before and after a reboot of the host:

Code:
# pciconf -l -BbcevV ppt0
ppt0@pci0:0:2:0:        class=0x030000 rev=0x02 hdr=0x00 vendor=0x8086 device=0x3e98 subvendor=0x17aa subdevice=0x312d
    vendor     = 'Intel Corporation'
    device     = 'CoffeeLake-S GT2 [UHD Graphics 630]'
    class      = display
    subclass   = VGA
    bar   [10] = type Memory, range 64, base 0xb0000000, size 16777216, enabled
    bar   [18] = type Prefetchable Memory, range 64, base 0xa0000000, size 268435456, enabled
    bar   [20] = type I/O Port, range 32, base 0x3000, size 64, enabled
    cap 09[40] = vendor (length 12) Intel cap 0 version 1
    cap 10[70] = PCI-Express 2 root endpoint max data 128(128) FLR
                 max read 128
    cap 05[ac] = MSI supports 1 message
    cap 01[d0] = powerspec 2  supports D0 D3  current D0
    ecap 001b[100] = Process Address Space ID 1
    ecap 000f[200] = ATS 1
    ecap 0013[300] = Page Page Request 1
# pciconf -r ppt0 0:0x3f
3e988086 00100407 03000002 00000010
b0000004 00000000 a000000c 00000000
00003001 00000000 00000000 312d17aa
00000000 00000040 00000000 000001ff
#

Does that rule out the host not releasing the iGPU?

I also managed to build edk2-bhyve from the git tag edk2-stable202405, but the error code still appears.
 
Thank you hfvk0 for sharing your experience. Did you patch the kernel or bhyve before starting the VM with GPU passthrough? Some comments in
said that both the kernel and bhyve have to be patched for NVIDIA GPU passthrough.
 
Thank you hfvk0 for sharing your experience. Did you patch the kernel or bhyve before starting the VM with GPU passthrough? Some comments in
said that both the kernel and bhyve have to be patched for NVIDIA GPU passthrough.
I have been away travelling and haven't been reading the thread for a while...

I didn't patch anything: I used stock FreeBSD 14.1-RELEASE.
 
I would like to set up Windows in bhyve as well. Does the GPU actually need to be detached from the host, meaning the host will no longer have access to it? I was hoping I could run Windows 10 / 11 and then remote into it using RDP or VNC.
 
I would like to set up Windows in bhyve as well. Does the GPU actually need to be detached from the host, meaning the host will no longer have access to it? I was hoping I could run Windows 10 / 11 and then remote into it using RDP or VNC.
I'm not 100 % sure, but my understanding is that if you want 3D acceleration to work in the VM, then you need to detach the GPU from the host and enable passthrough for the VM. There might be exceptions with certain (expensive) GPUs that support virtualization, but I don't have experience with those.

However, if you use the VM over RDP or VNC and you don't need 3D acceleration, you don't need to detach the GPU from the host (and there is no need to pass the GPU through either). This is how I use Windows Server on bhyve: the VM is used over RDP and the GPU stays attached to the host (not the VM; GPU passthrough is disabled). This setup leads to pretty poor graphics performance, but whether that is an issue depends on your use case.
 
tOsYZYny - did you manage to set up a Windows VM under bhyve with GPU pass-through? If so, could you please post your CPU (dmesg) & GPU details (pciconf -lv), and maybe your loader.conf and bhyve command / vm-bhyve config file?

I'm still unable to get this working and would like to see other people's configs in case there's something I'm doing differently that's wrong. No matter what I do, I still get error "Code 43" on the Intel GPU drivers inside Windows 10. Upgrading to FreeBSD 14.2-RELEASE hasn't helped.

Quick recap of my setup:
Hardware: Lenovo ThinkCentre M720q Tiny - UEFI version M1UKT74A - B360 PCH
CPU: Intel i7-9700T - 64GB memory
GPU: Intel HD Graphics 630
OS: FreeBSD 14.2-RELEASE
Dual monitors: 1x HDMI, 1x DisplayPort - switched via VPFET VP-SW200 KVM - which means monitors may not be connected at host or VM boot time

Code:
# pciconf -lBbcevV ppt0
ppt0@pci0:0:2:0:        class=0x030000 rev=0x02 hdr=0x00 vendor=0x8086 device=0x3e98 subvendor=0x17aa subdevice=0x312d
    vendor     = 'Intel Corporation'
    device     = 'CoffeeLake-S GT2 [UHD Graphics 630]'
    class      = display
    subclass   = VGA
    bar   [10] = type Memory, range 64, base 0xb0000000, size 16777216, enabled
    bar   [18] = type Prefetchable Memory, range 64, base 0xa0000000, size 268435456, enabled
    bar   [20] = type I/O Port, range 32, base 0x3000, size 64, enabled
    cap 09[40] = vendor (length 12) Intel cap 0 version 1
    cap 10[70] = PCI-Express 2 root endpoint max data 128(128) FLR
                 max read 128
    cap 05[ac] = MSI supports 1 message
    cap 01[d0] = powerspec 2  supports D0 D3  current D0
    ecap 001b[100] = Process Address Space ID 1
    ecap 000f[200] = ATS 1
    ecap 0013[300] = Page Page Request 1
# pkg info -I edk2-bhyve
edk2-bhyve-g202408             EDK2 Firmware for bhyve

I don't know what else to try or where the problem is. I guess it could be any of:
  • UEFI config, like "Video Setup"
  • CPU/PCH unsupported in some way?
  • FreeBSD config files, like loader.conf or sysctl.conf
  • bhyve config
  • EDK2 - tried several versions
  • Wrong Intel driver in Windows 10 - but tried several
Has anyone managed to get Intel UHD 630 pass-through to work, particularly on an Intel 9xxx CPU?
Can anyone help me? Or where else can I ask?
 
No, but I've passed other hardware through bhyve before and it works as expected. I don't want to go that route here because I want to keep the GPU for the host system. If I do go that route in the future, I would install another video card and allocate that specifically to the Windows instance.

Hmm, so when you pass through the device, I believe that is done through bhyve: 0:2:0 as 0/2/0.
 
Another update, in the hope that someone else has managed to get Intel iGPU pass-through working on FreeBSD 14.2 and has some ideas...

I have a 2nd system:
Hardware: Lenovo ThinkCentre M920q Tiny - UEFI version M1UKT74A - Q370 PCH
CPU: Intel i5-8500T - 32GB memory
GPU: Intel HD Graphics 630
OS: FreeBSD 14.2-RELEASE

GVT-d pass-through works when I use Corvin's patched FreeBSD 13.0R src and patched OVMF UEFI_CODE.fd - with and without the "-A" flag to bhyve!

Regarding "-A" flag, it seems Bhyve-generated ACPI tables describe COM1 to COM4, whereas OVMF-supplied tables only describe COM1 to COM2.
If I edit Bhyve src to only describe COM1 to COM2 then my COM port conflicts disappear.
Some other people have the same issue - but why does it work for some people and not others?

It doesn't work with a fresh install of FreeBSD 14.2 -- same issues as above (code 43).
The main suspect seems to be the OpRegion or graphics stolen memory details (in the E820 table?) not making it from bhyve to OVMF, or maybe OVMF isn't reserving the memory?

I added a call to e820_dump_table() to the bhyve src, which emits this:
Code:
E820 map:
  (   0) [               0,            a0000] RAM
  (   1) [          100000,         98c72018] RAM
  (   2) [        98c72018,         98c74018] NVS
  (   3) [        98c74018,         9b800000] RAM
  (   4) [        9b800000,         9f800000] Reserved
  (   5) [        9f800000,         c0000000] RAM
  (   6) [       100000000,        140000000] RAM
where 0x98c72018 to 0x98c74018 is the OpRegion, and 0x9b800000 to 0x9f800000 is the graphics stolen memory (it matches the output of sysctl hw.intel_graphics_stolen_base),
but there is no mention of either in the debug output of OVMF??
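For anyone wanting to check these values on their own host, FreeBSD exposes the stolen-memory details via sysctl (a quick sketch; I'm going from memory on the _size OID name, so verify it on your system):
Code:
sysctl hw.intel_graphics_stolen_base
sysctl hw.intel_graphics_stolen_size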

OVMF really wants to assign super high addresses for the BARs:
Code:
PciBus: Resource Map for Root Bridge PciRoot(0x0)
Type =   Io16; Base = 0x2000;   Length = 0x1000;        Alignment = 0xFFF
   Base = 0x2000;       Length = 0x40;  Alignment = 0x3F;       Owner = PCI [00|02|00:20]
Type =  Mem32; Base = 0xC0000000;       Length = 0x1100000;     Alignment = 0xFFFFFF
   Base = 0xC0000000;   Length = 0x1000000;     Alignment = 0xFFFFFF;   Owner = PCI [00|06|00:14]
   Base = 0xC1000000;   Length = 0x2000;        Alignment = 0x1FFF;     Owner = PCI [00|04|00:20]
   Base = 0xC1002000;   Length = 0x1000;        Alignment = 0xFFF;      Owner = PCI [00|07|00:10]
   Base = 0xC1003000;   Length = 0x80;  Alignment = 0xFFF;      Owner = PCI [00|06|00:10]
Type =  Mem64; Base = 0x800000000;      Length = 0x11100000;    Alignment = 0xFFFFFFF
   Base = 0x800000000;  Length = 0x10000000;    Alignment = 0xFFFFFFF;  Owner = PCI [00|02|00:18]; Type = PMem64
   Base = 0x810000000;  Length = 0x1000000;     Alignment = 0xFFFFFF;   Owner = PCI [00|02|00:10]
   Base = 0x811000000;  Length = 0x4000;        Alignment = 0x3FFF;     Owner = PCI [00|04|00:10]
The Intel iGPU BARs are the ones with lines ending Owner = PCI [00|02|00 as in pci0:0:2:0.

FreeBSD view of the same PCI device:
Code:
ppt0@pci0:0:2:0:        class=0x030000 rev=0x02 hdr=0x00 vendor=0x8086 device=0x3e98 subvendor=0x17aa subdevice=0x312d
    vendor     = 'Intel Corporation'
    device     = 'CoffeeLake-S GT2 [UHD Graphics 630]'
    class      = display
    subclass   = VGA
    bar   [10] = type Memory, range 64, base 0xb0000000, size 16777216, enabled
    bar   [18] = type Prefetchable Memory, range 64, base 0xa0000000, size 268435456, enabled
    bar   [20] = type I/O Port, range 32, base 0x3000, size 64, enabled
    cap 09[40] = vendor (length 12) Intel cap 0 version 1
    cap 10[70] = PCI-Express 2 root endpoint max data 128(128) FLR
                 max read 128
    cap 05[ac] = MSI supports 1 message
    cap 01[d0] = powerspec 2  supports D0 D3  current D0
    ecap 001b[100] = Process Address Space ID 1
    ecap 000f[200] = ATS 1
    ecap 0013[300] = Page Page Request 1

I'm wondering if the OpRegion and stolen graphics memory changes were not included in FreeBSD 14?
I guess I'll keep plugging away...
 
After lots of rebuilding sysutils/edk2 and /usr/src/usr.sbin/bhyve with extra debug log statements, I think I finally have it working!

TL;DR: The OVMF UEFI from edk2 wasn't reserving the OpRegion or "Graphics Stolen Memory" (GSM), so the Intel UHD 630 iGPU driver was refusing to start.

To fix:
  • patch sysutils/edk2 (e.g. version g202308_5) with the diff from Corvin's message to the EDK2 discussion group
    • this adds functionality to OVMF to look for the OpRegion and GSM details passed from bhyve via the E820 table inside QEMU-style FwCfg
  • ensure you configure your bhyve guest to use 'qemu' FwCfg via the bhyve arg -o lpc.fwcfg=qemu ← add this to your bhyve_options if using sysutils/vm-bhyve (see the config sketch after this list)
  • I highly recommend mapping the iGPU to the same PCI address, e.g. -s 2:0,passthru,0/2/0 - or passthru0="0/2/0=2:0" if using sysutils/vm-bhyve
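Putting that together, the relevant vm-bhyve config lines would look something like this (just a sketch, assuming the patched edk2-bhyve from the first bullet; everything else in the config stays unchanged):
Code:
loader="uefi"
passthru0="0/2/0=2:0"
bhyve_options="-o lpc.fwcfg=qemu"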
The remote VNC fbuf might go blank once it works, so consider configuring your Windows VM to allow RDP first (and test that it works) before passing through your iGPU.



More details for anyone interested:
The big giveaway was reading the OVMF debug log, noticing the memory map being updated, and not seeing any mention of the OpRegion (0x98c72018) or GSM (0x9b800000).
Adding debugging to bhyve itself showed that bhyve was fetching the OpRegion and GSM addresses, building the E820 table, adding it to QemuFwCfg, etc.
Adding debugging to OVMF showed that the E820 table wasn't being consumed, and hence the regions were not being reserved.

If you ever want to enable debug logs for OVMF, open up /usr/ports/sysutils/edk2/Makefile, look for the section starting .if ${FLAVOR} == bhyve and:
  • change PLAT_TARGET= RELEASE to PLAT_TARGET= DEBUG
  • insert a new line after the above with: PLAT_ARGS+= -D DEBUG_ON_SERIAL_PORT=TRUE
Then rebuild via:
  • make FLAVOR=bhyve clean extract patch
  • apply Corvin's patches (see attached): (cd work-bhyve/edk2-edk2-stable202308/ && patch) < edk2-bhyve-CorvinK-OVMF-patches.diff
  • maybe adjust gEfiMdePkgTokenSpaceGuid.PcdDebugPrintErrorLevel in OvmfPkg/Bhyve/BhyveX64.dsc line ~483
  • make FLAVOR=bhyve build deinstall reinstall
Debug output will appear via whatever you have Bhyve attach to COM1, e.g. -l com1,stdio or debug="yes" and vm console win10 if you're using sysutils/vm-bhyve

I'm still puzzled as to how this works for anyone else without the patch or why the patch hasn't made it into edk2-bhyve.
 

An update to my original post, to document the solution to a few issues I ran into... at some point I noticed that the 3D acceleration was not actually working. Solution:

1. Download the latest BIOS from Lenovo and upgrade the system BIOS.
2. Copy the BIOS ROM (in my case IMAGEM1A.rom) to a temporary folder and extract it using UEFIExtract:
Code:
UEFIExtract.exe IMAGEM1A.rom all

NOTE: The above step was performed on Windows 11.

3. Find the Intel GOP driver in the extracted folder/subfolders:
Code:
grep -Ur --include="*.bin" -l "GOP"
grep -Ur --include="*.bin" -l "VGA"
The above commands should give a hint as to which file contains the GOP driver. In my case, the file was called body.bin.

NOTE: This step was performed on FreeBSD 14.2.
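Before renaming, a quick sanity check on the candidate file can help (my own suggestion, not part of the original steps; an EFI GOP driver is typically a PE image, and its strings usually mention the GOP driver by name):
Code:
file body.bin
strings body.bin | grep -i "GOP"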

4. Rename the file to something you remember, for example:
Code:
mv body.bin thinkstationM910-M1AKT59A.bin
5. Add the following entry to the bhyve configuration (of course, adjust according to your system):
Code:
-S -s 2,passthru,0/2/0,rom=/path/to/thinkstationM910-M1AKT59A.bin

Remember to include -S for memory wiring.

6. Reinstall drivers on the Windows guest.
7. Confirm that 3D acceleration is working on the Windows guest by running:
Code:
dxdiag
 