I followed this:
GitHub - churchers/vm-bhyve: Shell based, minimal dependency bhyve manager
rc.conf
vm_enable="YES"
vm_dir="zfs:scrap/vm"
-----
Installed:
grub-bhyve, vm-bhyve, qemu-utils, bhyve-firmware
-----
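For completeness, getting to this point amounted to roughly the following (a sketch from memory; package names and paths may differ slightly on your system):

```shell
# Install the tooling (grub-bhyve ships as the grub2-bhyve package)
pkg install vm-bhyve grub2-bhyve bhyve-firmware qemu-tools

# Dedicated dataset for the guests
zfs create scrap/vm

# Enable vm-bhyve and point it at the dataset
sysrc vm_enable="YES"
sysrc vm_dir="zfs:scrap/vm"
vm init

# Seed the datastore with the sample templates
cp /usr/local/share/examples/vm-bhyve/* /scrap/vm/.templates/
```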
Bash:
DATASTORE FILENAME
default alpine-standard-3.15.0-x86_64.iso
default archlinux-2021.12.01-x86_64.iso
default ubuntu-20.04.3-live-server-amd64.iso
So far I've tried Alpine, Arch, Ubuntu, and FreeBSD. The only one that ever worked was FreeBSD. I also tried without a zvol, and it seems to make no difference.
I simply stopped trying the zvol route to eliminate that variable, and I'm just following the templates in .templates in my /scrap/vm datastore.
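For reference, the stock ubuntu template I'm starting from looks roughly like this (reproduced from memory, so treat the exact values and grub paths as approximate):

```shell
# /scrap/vm/.templates/ubuntu.conf (approximate)
loader="grub"
cpu=1
memory=512M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
# Install-time grub commands; these hardcode the ISO's kernel layout
grub_install0="linux /casper/vmlinuz"
grub_install1="initrd /casper/initrd"
```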
I have multiple issues with bhyve itself, too.
Bash:
vm list
NAME DATASTORE LOADER CPU MEMORY VNC AUTOSTART STATE
alpine default grub 1 512M - No Locked (bsd)
arch default grub 1 512M - No Locked (bsd)
vm destroy alpine
/usr/local/sbin/vm: WARNING: alpine appears to be running on bsd (locked)
vm destroy arch
/usr/local/sbin/vm: WARNING: arch appears to be running on bsd (locked)
vm poweroff arch
/usr/local/sbin/vm: ERROR: arch doesn't appear to be a running virtual machine
vm poweroff alpine
/usr/local/sbin/vm: ERROR: alpine doesn't appear to be a running virtual machine
I can't even delete the VMs I've created.
But let's start with a new one:
Bash:
vm create -t ubuntu -s 50G ubuntu1
vm install -f ubuntu1 ubuntu-20.04.3-live-server-amd64.iso
It drops into the console and immediately kernel panics.
Bash:
[ 0.929073] Initramfs unpacking failed: write error
[ 1.076609] Failed to execute /init (error -2)
[ 1.077277] Kernel panic - not syncing: No working init found. Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst for guidance.
[ 1.079244] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.4.0-81-generic #91-Ubuntu
[ 1.080272] Hardware name: FreeBSD BHYVE, BIOS 13.0 11/10/2020
[ 1.081076] Call Trace:
[ 1.081434] dump_stack+0x6d/0x8b
[ 1.081902] ? rest_init+0x30/0xb0
[ 1.082385] panic+0x101/0x2e3
[ 1.082816] ? do_execve+0x25/0x30
[ 1.083296] ? rest_init+0xb0/0xb0
[ 1.083774] kernel_init+0x100/0x110
[ 1.084276] ret_from_fork+0x35/0x40
[ 1.084876] Kernel Offset: 0x2e800000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[ 1.086349] ---[ end Kernel panic - not syncing: No working init found. Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst for guidance. ]---
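Worth noting: "Initramfs unpacking failed: write error" that early in boot is the classic symptom of the guest running out of memory while unpacking the initramfs, and the 20.04 live-server installer generally won't come up in 512M. A possible fix (untested here) is to raise the guest's memory before installing:

```shell
# Give the guest more RAM; the live-server ISO wants at least 1G.
# sysrc -f edits any sh-syntax file, including vm-bhyve guest configs.
sysrc -f /scrap/vm/ubuntu1/ubuntu1.conf memory=2G
vm install -f ubuntu1 ubuntu-20.04.3-live-server-amd64.iso
```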
Not to mention that my SSH session is completely useless now. There's no way to exit out of it; I have to actually close the terminal and re-SSH into the host.
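A note on the wedged session: vm console attaches via cu(1), whose escape sequence is a tilde at the start of a line; over SSH the tilde has to be doubled so SSH's own escape handling doesn't eat it. Assuming the hang is just the attached console, this detaches without killing the terminal:

```shell
# Press Enter first so you are at the start of a line, then:
#   ~.    detach cu on a local terminal
#   ~~.   detach cu from inside an SSH session
```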
Bash:
vm create -t alpine alpine1
vm install -f alpine1 alpine-standard-3.15.0-x86_64.iso
I press Enter.
Bash:
error: file `/boot/vmlinuz-vanilla' not found.
error: you need to load the kernel first.
Press any key to continue...
Bash:
vm create -t arch arch1
vm install -f arch1 archlinux-2021.12.01-x86_64.iso
Bash:
Booting `arch1 (bhyve install)'
error: file `/arch/boot/x86_64/vmlinuz' not found.
error: you need to load the kernel first.
Press any key to continue...
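Both grub failures look like the same class of problem: the templates hardcode install-time kernel paths, and the ISO layouts have moved on (for example, Alpine dropped the "vanilla" kernel in favour of "lts", so /boot/vmlinuz-vanilla no longer exists on a 3.15 ISO). One possible workaround, untested here, is to override the grub_install lines in the guest config after checking the actual ISO layout:

```shell
# Check where the kernel actually lives on the ISO first, e.g.:
#   mdconfig -f alpine-standard-3.15.0-x86_64.iso   (prints md0)
#   mount -t cd9660 /dev/md0 /mnt && ls /mnt/boot
# Then point the install-time grub commands at the right paths
# (these particular paths are a guess based on Alpine's lts kernel):
sysrc -f /scrap/vm/alpine1/alpine1.conf \
    grub_install0="linux /boot/vmlinuz-lts modules=loop,squashfs" \
    grub_install1="initrd /boot/initramfs-lts"
vm install -f alpine1 alpine-standard-3.15.0-x86_64.iso
```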
I also love the fact that even after I forcefully power them off:
Code:
vm poweroff alpine1
Are you sure you want to forcefully poweroff this virtual machine (y/n)? y
vm poweroff arch1
Are you sure you want to forcefully poweroff this virtual machine (y/n)? y
vm poweroff ubuntu1
Are you sure you want to forcefully poweroff this virtual machine (y/n)? y
Bash:
NAME DATASTORE LOADER CPU MEMORY VNC AUTOSTART STATE
alpine default grub 1 512M - No Locked (bsd)
alpine1 default grub 1 512M - No Bootloader (2771)
arch default grub 1 512M - No Locked (bsd)
arch1 default grub 1 512M - No Locked (bsd)
ubuntu1 default grub 1 512M - No Stopped
And if you think you can just kill the process, think again.
Bash:
ps aux | grep bhyve
root 2771 0.0 0.0 546236 10032 1- S 02:07 0:00.02 /usr/local/sbin/grub-bhyve -c /dev/nmdm-alpine1.1A -m /scrap/vm/alpine1/device.map -M 512M -r host -d /scrap/vm/alpi
pkill 2771
ps aux | grep bhyve
root 2771 0.0 0.0 546236 10032 1- S 02:07 0:00.02 /usr/local/sbin/grub-bhyve -c /dev/nmdm-alpine1.1A -m /scrap/vm/alpine1/device.map -M 512M -r host -d /scrap/vm/alpi
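Small correction to my own attempt above: pkill(1) matches process names, not PIDs, so pkill 2771 silently matches nothing. Killing by PID, and reclaiming the VM from the hypervisor, would look like this (assuming the process isn't stuck unkillable in the kernel):

```shell
# Signal the PID directly; pkill takes a name pattern, not a PID
kill -9 2771

# If a bhyve guest is wedged, also release its kernel-side state
bhyvectl --destroy --vm=alpine1
```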
What about rebooting?
Bash:
NAME DATASTORE LOADER CPU MEMORY VNC AUTOSTART STATE
alpine default grub 1 512M - No Locked (bsd)
alpine1 default grub 1 512M - No Locked (bsd)
arch default grub 1 512M - No Locked (bsd)
arch1 default grub 1 512M - No Locked (bsd)
ubuntu1 default grub 1 512M - No Stopped
Great, can I delete them now? No, they're locked. I can only delete ubuntu1, for some reason.
Also, go ahead and try to rm -rf those files, even as root: you will not be able to.
Bash:
vm destroy alpine
/usr/local/sbin/vm: WARNING: alpine appears to be running on bsd (locked)
rm -rf /scrap/vm/alpine
rm: /scrap/vm/alpine: Device busy
The only way to delete the locked VMs is to destroy the dataset that was created for them.
So at least that works, I guess.
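For anyone hitting the same wall: the "Locked" state is vm-bhyve seeing a run.lock file in the guest's directory, and the rm -rf failure is expected because each guest lives in its own child dataset, so the directory is a ZFS mountpoint. Assuming the guest really is dead, either of these should release it:

```shell
# vm-bhyve treats a guest as locked while <guest>/run.lock exists
rm /scrap/vm/alpine/run.lock
vm destroy alpine

# ...or take the whole child dataset with it
zfs destroy scrap/vm/alpine
```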
Let's try an image:
Bash:
vm img https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
vm create -t ubuntu -i xenial-server-cloudimg-amd64-disk1.img ubuntu123
vm start ubuntu123
Starting ubuntu123
* found guest in /scrap/vm/ubuntu123
* booting...
And this is where I hit a broken cloud-init configuration, something I don't really want to troubleshoot.
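If anyone wants to pick up where I gave up: cloud images look for a NoCloud seed volume labelled cidata, so in principle you can hand ubuntu123 working credentials by attaching a small seed ISO as a second disk. A sketch, untested; adjust names and paths to taste:

```shell
# Minimal NoCloud seed: meta-data plus a user-data cloud-config
mkdir seed
printf 'instance-id: ubuntu123\nlocal-hostname: ubuntu123\n' > seed/meta-data
cat > seed/user-data <<'EOF'
#cloud-config
password: changeme
chpasswd: { expire: false }
ssh_pwauth: true
EOF

# Build the ISO; the volume label must be exactly "cidata"
makefs -t cd9660 -o rockridge -o label=cidata /scrap/vm/ubuntu123/seed.iso seed

# Attach it in the guest config as a second, CD-type disk
sysrc -f /scrap/vm/ubuntu123/ubuntu123.conf \
    disk1_type="ahci-cd" disk1_name="seed.iso"
vm start ubuntu123
```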
I don't use TrueNAS; I run FreeBSD 13.0-RELEASE-p4.
But I read this thread, and it's amusing and sad that they had to resort to literally dd'ing images over to get them to boot in bhyve.
There has to be a better way, something I'm missing. I'm just not sure what.