Current state of bhyve Nvidia passthrough?

If I want to use bhyve-lin for a Windows VM, it does not work: Windows reports error 43.
OK, so changing the identifier to KVMKVMKVM breaks Windows; I'll check that out.

And regarding the patches: I've already rebased them on the latest release and shared the patch file. It's only a few lines, so you could just apply it manually and generate a new file.
 
But that's for 14.1:
I forgot to rename read_config to pci_host_read_config, and I'm not able to edit the previous post, so here is a complete patch file to apply the same way. I just tested building 14.1-RELEASE-p6 and the Nvidia GPU is detected in bhyve.
The approach previously described really messes things up, as 14.0 is no longer supported by ports and you'll get funny errors.

So: get the patch, clone the 14.1-release branch, apply the patch, build and install the kernel, then build/install include, vmm, bhyve, bhyvectl, and bhyveload.
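Roughly, and assuming a stock source tree (the patch filename below is just a placeholder for whatever you saved the attachment as), that sequence looks something like this:

Code:
# git clone -b releng/14.1 https://git.freebsd.org/src.git /usr/src
# cd /usr/src
# git apply /path/to/nvidia-passthru.patch
# make -j$(sysctl -n hw.ncpu) buildkernel && make installkernel
# cd /usr/src/include && make && make install
# cd /usr/src/sys/modules/vmm && make && make install
# cd /usr/src/usr.sbin/bhyve && make && make install
# cd /usr/src/usr.sbin/bhyvectl && make && make install
# cd /usr/src/usr.sbin/bhyveload && make && make install
# shutdown -r now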
 
Thanks for the patch, but you can easily see why it didn't make it into 14.0. With it applied, Linux passthrough now works, but attempting another VM start afterwards almost always results in a hard lock of the entire system, at least here (Ivy Bridge, C602 chipset).

The hypervisor signature change isn't required, only the modifications in the bhyve directory. I don't remember it being this unstable in some pre-14.0 builds. It's also currently only Linux that's affected: without the patch, Windows and FreeBSD VMs are quite happy to switch between each other, at least in the case of an Nvidia GPU with the proprietary (Windows) and open-source (FreeBSD) drivers on my system.

Unfortunately, Linux passthrough is what I need, because I'm trying to compare FreeBSD WINE to Linux WINE, as I suspect Linux WINE is more compatible than FreeBSD's (even for the same WINE version).


This is what bhyve was missing to be able to pass through the Nvidia GPU to a Linux VM:

KVM:


[ 0.000132] MTRR default type: write-back
[ 0.000132] MTRR fixed ranges enabled:
[ 0.000133] 00000-9FFFF write-back
[ 0.000133] A0000-FFFFF uncachable
[ 0.000133] MTRR variable ranges enabled:
[ 0.000134] 0 base 00C0000000 mask FFC0000000 uncachable
[ 0.000134] 1 base 00B0000000 mask FFF0000000 uncachable
[ 0.000135] 2 base 0800000000 mask F800000000 uncachable
[ 0.000135] 3 disabled
[ 0.000135] 4 disabled
[ 0.000136] 5 disabled
[ 0.000136] 6 disabled
[ 0.000136] 7 disabled
[ 0.012530] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT

bhyve:


[ 0.000018] MTRR default type: uncachable
[ 0.000018] MTRR variable ranges disabled:
[ 0.000019] Disabled
[ 0.000019] x86/PAT: MTRRs disabled, skipping PAT initialization too.
[ 0.000021] CPU MTRRs all blank - virtualized system.
[ 0.000023] x86/PAT: Configuration [0-7]: WB WT UC- UC WB WT UC- UC

Looks like some more work to do on the bhyve side.
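For anyone wanting to reproduce that comparison, the lines above come straight from the guest's boot log; inside a Linux VM something like the following pulls them out (note that /proc/mtrr only exists when MTRRs are actually exposed, so it will be missing in the unpatched bhyve case):

Code:
# dmesg | grep -iE 'MTRR|x86/PAT'
# cat /proc/mtrr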

So, I presume that even today the patch is still related to what was missing at the beginning.
 
No, he rebased his work. That means the patches will apply smoothly. The codebase of CURRENT changes, so patches must be rebased.

I am taking ZioMario's word on this.

For the patch to no longer be required, it would need to be committed to CURRENT and then, if successful, committed to STABLE. From there, RELEASEs are built from STABLE.
 

Now it's as easy as:

Code:
# cd /usr/
# mv src src-old
# git clone https://github.com/beckhoff/freebsd-src /usr/src
# cd /usr/src
# git checkout -f origin/



Code:
# cd /usr/src/usr.sbin/bhyve
# make && make install
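The branch name in the checkout line above is cut off; if you are not sure which one to pick, you can list what the beckhoff repo actually offers first (plain git, nothing specific to the patch set):

Code:
# cd /usr/src
# git branch -r | grep -i nvidia
# git checkout -f origin/<the-branch-you-picked>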
 
Sorry for the misunderstanding, but how do I get the changed code for release 14.2? It seems like you are grabbing both the 14.2 version and 15.0. Are they the same?


I assume I still need to build the kernel, install the kernel, build world, make and install nvidia, the includes, etc. I am new to building a kernel, so bear with me. Thank you very much; your help is much appreciated.
 
Put all the files in one directory, change the extension from txt to sh, and then run:

Code:
./build_branch.sh origin/phab/corvink/14.2/nvidia-wip --without-bhf --verbose
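Assuming the attachments really are just the script pieces saved with a .txt extension (the names here are placeholders for whatever the forum attached), that amounts to:

Code:
# for f in *.txt; do mv "$f" "${f%.txt}.sh"; done
# chmod +x *.sh
# ./build_branch.sh origin/phab/corvink/14.2/nvidia-wip --without-bhf --verbose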
 

Wow, thanks so much for the help. I am trying this now. I am going to use Ubuntu in the virtual machine, so I suspect that before I build I need to edit the file sys/amd64/vmm/x86.c from:
Code:
static const char bhyve_id[12] = "bhyve bhyve ";

to:
Code:
static const char bhyve_id[12] = "KVMKVMKVM\0\0\0";

I didn't see this step in your sh scripts.
 

Nope. The KVM string is for a Windows VM, and only if you want to pass through your Nvidia GPU.
 
I've pulled, compiled, and installed everything from 'origin/phab/corvink/14.2/nvidia-wip', and masked an NVIDIA 4080 SUPER:

Code:
pciconf -l -v
ppt0@pci0:1:0:0:    class=0x030000 rev=0xa1 hdr=0x00 vendor=0x10de device=0x2702 subvendor=0x103c subdevice=0x8cfd
    vendor     = 'NVIDIA Corporation'
    device     = 'AD103 [GeForce RTX 4080 SUPER]'
    class      = display
    subclass   = VGA
ppt1@pci0:1:0:1:    class=0x040300 rev=0xa1 hdr=0x00 vendor=0x10de device=0x22bb subvendor=0x103c subdevice=0x8cfd
    vendor     = 'NVIDIA Corporation'
    class      = multimedia
    subclass   = HDA
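For context, getting the two functions bound to ppt like that is normally just a loader.conf entry on the host (selectors taken from the pciconf output above) plus a reboot; this is the stock FreeBSD passthrough mechanism, not something added by the patch:

Code:
# /boot/loader.conf
pptdevs="1/0/0 1/0/1"
vmm_load="YES"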

Created a vm config as below:

Code:
loader="grub"
cpu=4
memory=8G
wired_memory=yes
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
uuid="1c818a1b-024e-11f0-b726-7c5758fb091c"
network0_mac="58:9c:fc:0b:f6:1b"
passthru0="1/0/0"
passthru1="1/0/1"
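Before booting the guest, vm-bhyve can confirm that both functions are actually available for passthrough; both 1/0/0 and 1/0/1 should be listed as ready:

Code:
# vm passthru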

And installed Ubuntu 24.04.2, along with the NVIDIA drivers. I see the GPU in Linux:

Code:
pinver@bd:~$ lsmod | grep nvidia
nvidia_uvm           4956160  0
nvidia_drm            122880  0
nvidia_modeset       1355776  1 nvidia_drm
nvidia              54308864  2 nvidia_uvm,nvidia_modeset
video                  77824  1 nvidia_modeset

But I get this error during boot:

Code:
[    8.020625] [drm:nv_drm_load [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000006] Failed to allocate NvKmsKapiDevice
[    8.021449] [drm:nv_drm_register_drm_device [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000006] Failed to register device

So the card is not detected by nvidia-smi and companions. Any advice? The installed drivers are:

Code:
libnvidia-cfg1-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
libnvidia-common-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 all [installed,automatic]
libnvidia-compute-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
libnvidia-decode-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
libnvidia-egl-wayland1/noble,now 1:1.1.13-1build1 amd64 [installed,automatic]
libnvidia-encode-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
libnvidia-extra-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
libnvidia-fbc1-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
libnvidia-gl-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
nvidia-compute-utils-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
nvidia-dkms-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
nvidia-driver-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed]
nvidia-firmware-550-550.120/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
nvidia-kernel-common-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
nvidia-kernel-source-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
nvidia-prime/noble,now 0.8.17.2 all [installed,automatic]
nvidia-settings/noble,now 510.47.03-0ubuntu4 amd64 [installed,automatic]
nvidia-utils-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
xserver-xorg-video-nvidia-550/noble-updates,noble-security,now 550.120-0ubuntu0.24.04.1 amd64 [installed,automatic]
 
I did it with Debian and UEFI.
This is the link to my config.
Also, you have not limited ahci (I think it's advised to do that?).
Second, I assigned my GPU (and ahci) to specific slots; I had to pick specific ones because, when I tested earlier, I needed to find out which slots were already taken and which were free. I also used UEFI, as GRUB was a big no-go, and I disabled my GPU through the boot config so it would not load (to make sure it was not being used by FreeBSD). I also did not pass through the whole GPU (I think the second part of the GPU is the audio function?), so you will only see 129:0: in my config; ignore that part of it. :)
 
OK, it seems that, at least for me, it's also necessary to patch `sys/amd64/vmm/x86.c` with the usual 'KVMKVMKVM' stuff. Once the kernel was rebuilt, I was able to access the GPU with the Nvidia 570 server drivers (I want to slim that down to a headless setup, as I need the VM for CUDA only):

Code:
root@baraddur:/home/sysadmin# nvidia-smi 
Sun Mar 16 17:16:03 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.15              Driver Version: 570.86.15      CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4080 ...    Off |   00000000:00:02.0 Off |                  N/A |
| 32%   31C    P0             37W /  320W |       1MiB /  16376MiB |      3%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

A simple CUDA test works too, from `https://gist.github.com/f0k/0d6431e3faa60bffc788f8b4daa029b1`:

Code:
root@baraddur:/home/sysadmin# vim cuda_check.c
root@baraddur:/home/sysadmin# nvcc -o cuda_check cuda_check.c -lcuda
root@baraddur:/home/sysadmin# ./cuda_check
Found 1 device(s).
Device: 0
  Name: NVIDIA GeForce RTX 4080 SUPER
  Compute Capability: 8.9
  Multiprocessors: 80
  CUDA Cores: 10240
  Concurrent threads: 122880
  GPU clock: 2550 MHz
  Memory clock: 11501 MHz
  Total Memory: 15954 MiB
  Free Memory: 15700 MiB

Some other tests are working too, so the next step is to create a properly shaped machine and try out Python frameworks.
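Once that machine exists, a quick way to check whether a framework actually sees the passed-through card (assuming PyTorch ends up being one of the frameworks installed; substitute whichever you use):

Code:
$ python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"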

Anyway, at this point I'm not really sure if the above info about using the Nvidia drivers without the 'KVMKVMKVM' change is accurate.
Thank you
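For what it's worth, one way to check which hypervisor signature the guest is actually seeing (i.e. whether the 'KVMKVMKVM' change took effect) is from inside the Linux VM itself:

Code:
$ lscpu | grep -i hypervisor
$ systemd-detect-virt
$ dmesg | grep -i 'hypervisor detected'

With the patched host kernel these should report KVM; without it they report bhyve or nothing at all.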
 
Hey group, I have been looking into this side of FreeBSD myself for running TensorFlow and Keras. My current config for GPU acceleration and passthrough is WSL2 with Ubuntu 24.04 LTS on Windows for ML/DL workloads. AFAIK WSL2 runs a hypervisor with a Linux kernel. On FreeBSD, the hypervisor appears to be bhyve. I am interested in a WSL2 equivalent on BSD systems or (this would be awesome) a more robust framework on BSD base systems.

So far, what I have found is that bhyve and the Linuxulator are the current features for building out FreeBSD alternatives for ML/DL.
 
Off topic, friend! By the way, you can use FreeBSD as the host machine and run a VM via vm-bhyve to install Ubuntu on it; then your ML/DL workloads can run on Ubuntu.
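A rough sketch of that setup with vm-bhyve (the pool name, VM name, and ISO are placeholders, and for GPU work you would still add the passthru lines shown earlier in the thread):

Code:
# pkg install vm-bhyve bhyve-firmware
# zfs create zroot/vm
# sysrc vm_enable="YES" vm_dir="zfs:zroot/vm"
# vm init
# vm iso <url-to-an-ubuntu-24.04-server-iso>
# vm create -s 60G ubuntu-ml
# vm install ubuntu-ml <the-downloaded-iso-name>
# vm console ubuntu-ml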
 

I agree. This may be a shift from the original post. I was looking into FreeBSD’s options for ML/DL and exploring GPU passthrough and acceleration. Bhyve appears to be the primary hypervisor on FreeBSD, and I’m curious about how well it supports CPU and GPU passthrough for virtualization.
 
I am currently tuning the bhyve VM settings for GPU passthrough. Let's see what happens.
 

Attachment: gpupassthrough.png
I'm using a solution that involves a patch (for 14.2-R) and a few extra commands to work around the device disappearing between guest restarts:


I can see there's been some discussion of the patch earlier in the thread, but perhaps my summary may help someone looking to get things working quickly.
 