Hi all!
This week I tried to upgrade from 13.2-RELEASE-p5 to 14.0-RELEASE, but failed: I ran into a ZFS error while running freebsd-update upgrade and merging the config changes.
I have many datasets that are used as disks for VMs.
The upgrade fails with the following error:
Code:
.............
# RISC-V HTIF console [451/1499]
-rcons "/usr/libexec/getty std.9600" vt100 onifconsole secure
+rcons "/usr/libexec/getty std.115200" vt100 onifconsole secure
Does this look reasonable (y/n)? y
To install the downloaded upgrades, run "/usr/sbin/freebsd-update install".
[freebsd:~ $]> sudo freebsd-update install
Password:
src component not installed, skipped
Creating snapshot of existing boot environment... cannot create 'zroot/ROOT/13.2-RELEASE-p5_2023-11-28_090222/cbsd-cloud-openbsd-70.raw': 'canmount' does not apply to datasets of this type
error when calling zfs_clone() to create boot env
error when calling zfs_clone() to create boot env
error when calling zfs_clone() to create boot env
Failed to create bootenv 13.2-RELEASE-p5_2023-11-28_090222
failed.
Then I tried to delete that old dataset and ran the install again:
Code:
sudo zfs destroy -r zroot/ROOT/default/cbsd-cloud-openbsd-70.raw
[freebsd:~ $]> sudo freebsd-update install
src component not installed, skipped
Creating snapshot of existing boot environment... cannot create 'zroot/ROOT/13.2-RELEASE-p5_2023-11-28_090341/bcbsd-ubuntusrv1-dsk1.vhd': 'canmount' does not apply to datasets of this type
error when calling zfs_clone() to create boot env
error when calling zfs_clone() to create boot env
error when calling zfs_clone() to create boot env
Failed to create bootenv 13.2-RELEASE-p5_2023-11-28_090341
failed.
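Since these zvols hold VM disks I'd rather keep, instead of destroying each one I'm thinking of renaming them out of zroot/ROOT/default. This sketch only prints the zfs rename commands for review, it doesn't run them; zroot/vm is a hypothetical destination dataset that would have to exist first:

```shell
# Print (not execute) zfs rename commands that would move each listed
# dataset out of the boot environment tree into zroot/vm (hypothetical).
move_out_of_be() {
  while IFS= read -r vol; do
    # ${vol##*/} strips the path prefix, keeping only the dataset's basename
    printf 'zfs rename %s zroot/vm/%s\n' "$vol" "${vol##*/}"
  done
}

printf '%s\n' 'zroot/ROOT/default/bcbsd-ubuntusrv1-dsk1.vhd' | move_out_of_be
# prints: zfs rename zroot/ROOT/default/bcbsd-ubuntusrv1-dsk1.vhd zroot/vm/bcbsd-ubuntusrv1-dsk1.vhd
```

Once the commands look right, they could be piped to sh, but I'd review them by hand first.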
Here's a list of some datasets:
Code:
zfs list -r -o name,canmount,mountpoint
NAME                                                        CANMOUNT  MOUNTPOINT
zroot                                                       on        /zroot
zroot/ROOT                                                  on        none
zroot/ROOT/13.2-RELEASE-p1_2023-08-06_180926                noauto    /
zroot/ROOT/13.2-RELEASE-p2_2023-09-10_235747                noauto    /
zroot/ROOT/13.2-RELEASE-p3_2023-10-05_123918                noauto    /
zroot/ROOT/13.2-RELEASE-p4_2023-11-08_124951                noauto    /
zroot/ROOT/13.2-RELEASE-p5_2023-11-28_090222                noauto    /
zroot/ROOT/13.2-RELEASE-p5_2023-11-28_090341                noauto    /
zroot/ROOT/13.2-RELEASE-p5_2023-11-28_090341/adc1           noauto    /usr/jails/jails-data/adc1-data
zroot/ROOT/13.2-RELEASE-p5_2023-11-28_094647                noauto    /
zroot/ROOT/13.2-RELEASE-p5_2023-11-28_094647/adc1           noauto    /usr/jails/jails-data/adc1-data
zroot/ROOT/13.2-RELEASE_2023-04-19_142632                   noauto    /
zroot/ROOT/13.2-RELEASE_2023-06-22_164022                   noauto    /
zroot/ROOT/default                                          noauto    /
zroot/ROOT/default/adc1                                     on        /usr/jails/jails-data/adc1-data
zroot/ROOT/default/bcbsd-ubuntusrv1-dsk1.vhd                -         -
zroot/ROOT/default/cbsd-cloud-cloud-Ubuntu-x86-20.04.2.raw  -         -
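For what it's worth, the "'canmount' does not apply to datasets of this type" message is what ZFS reports for non-filesystem datasets, so I suspect the children tripping the clone are zvols (they show "-" for CANMOUNT above). This sketch filters non-filesystem children out of `zfs list -H -r -o name,type` output; the here-doc is sample data standing in for the real command's output on my system:

```shell
# Filter non-filesystem children (e.g. zvols) from
# `zfs list -H -r -o name,type zroot/ROOT/default` output.
list_non_filesystems() {
  awk '$2 != "filesystem" { print $1 }'
}

# Sample data; on the real system, pipe the zfs list command in instead.
list_non_filesystems <<'EOF'
zroot/ROOT/default	filesystem
zroot/ROOT/default/adc1	filesystem
zroot/ROOT/default/bcbsd-ubuntusrv1-dsk1.vhd	volume
zroot/ROOT/default/cbsd-cloud-cloud-Ubuntu-x86-20.04.2.raw	volume
EOF
# prints the two .vhd/.raw dataset names
```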
Any ideas why beadm might be failing to create the new BE?