> The man page for gptzfsboot says: "The first pool seen during probing is used as a default boot pool."
> Is there any way to boot from another pool?

Here is an extract of the plan I used to move the root of my ZFS server from pool "zroot" (the default pool name) to the new pool "zroot2". It boots from a pool on new disks, which is the most general case. I have not shown the partitioning required for the new disks, but that's pretty obvious (swap on partition 2, root pool on partition 3). The part you really need is the "zpool set bootfs", but the surrounding context is important.
DISK0=ada2 # first disk of new root mirror
DISK1=ada3 # second disk of new root mirror
SWAP=swap2 # new swap GEOM mirror
ZROOTSRC=zroot # source pool (old root)
ZROOTDST=zroot2 # destination pool (new root)
# Create new swap and root
gmirror label -v -b round-robin $SWAP /dev/${DISK0}p2
gmirror insert $SWAP /dev/${DISK1}p2
zpool create $ZROOTDST mirror /dev/${DISK0}p3 /dev/${DISK1}p3
# Copy the old root to the new root.
zfs snapshot -r $ZROOTSRC@replica1
zfs umount $ZROOTDST # you must keep it unmounted
zfs send -R $ZROOTSRC@replica1 | zfs receive -Fdu $ZROOTDST
# The new root is now frozen at the time of the snapshot.
# If this is an issue, you need to drop into single-user mode
# to take the snapshot before the send/receive.
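# Optional (my addition, not part of the original plan): to catch up changes
# made after @replica1 without re-sending everything, take a second recursive
# snapshot once the system is quiesced and send it incrementally.
# The @replica2 name is just an example.
#   zfs snapshot -r $ZROOTSRC@replica2
#   zfs send -R -i @replica1 $ZROOTSRC@replica2 | zfs receive -Fdu $ZROOTDST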
# This is the default bootable dataset for the new root pool.
# It's usually <zroot_pool_name>/ROOT/default.
# But an upgrade using a different boot environment may change that.
# You must get this right, or your system will not boot.
# Run "zpool get bootfs $ZROOTSRC" to see the value for the old root.
# Mine looks like "zroot/ROOT/13". Yours may be "zroot/ROOT/default".
# Keep the "ROOT/13" or "root/default" part and change the pool name
# ZROOTSRC to ZROOTDST (i.e. "zroot" to "zroot2").
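# For example: if "zpool get -H -o value bootfs zroot" prints "zroot/ROOT/13",
# then the value to set on the new pool is "zroot2/ROOT/13", as below.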
zpool set bootfs=$ZROOTDST/ROOT/13 $ZROOTDST
zpool export $ZROOTDST
# Reboot, but interrupt it to re-configure the BIOS.
# Edit the BIOS boot order to favour new root mirrors, e.g. ada2, ada3.
# Reset, and allow the system to boot SINGLE USER mode.
# We need to stop the old zroot from being imported and mounted.
# https://openzfs.github.io/openzfs-docs/Project%20and%20Community/\
# FAQ.html#the-etc-zfs-zpool-cache-file
# Execute these commands in single user mode.
zfs set readonly=off $ZROOTDST
rm -f /boot/zfs/zpool.cache /etc/zfs/zpool.cache
zpool set cachefile=/etc/zfs/zpool.cache $ZROOTDST
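# Optional sanity check (my addition): "zpool get cachefile" without a pool
# argument lists the cache file setting for every currently imported pool.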
# Change fstab to use the new swap partition. I'm using a GEOM mirror.
# fstab: /dev/mirror/$SWAP none swap sw 0 0
vi /etc/fstab
> I have not shown the partitioning required for the new disks, but that's pretty obvious (swap on partition 2, root pool on partition 3).

For the sake of completeness, here is the code to partition a fresh set of two SSDs for a new ZFS root mirror:
# create the partition tables
gpart destroy -F ${DISK0}
gpart destroy -F ${DISK1}
gpart create -s GPT ${DISK0}
gpart create -s GPT ${DISK1}
# Create a 512 kB boot partition at offset 40 -- which is the size of the
# FAT32 "reserved sectors" (32 or 34 blocks) rounded up to a 4 kB boundary.
# This is the same layout used by the FreeBSD 13 installer.
gpart add -i 1 -b 40 -s 512k -t freebsd-boot ${DISK0}
gpart add -i 1 -b 40 -s 512k -t freebsd-boot ${DISK1}
# Install the first and second stage bootloaders for a ZFS root
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${DISK0}
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${DISK1}
# Align all subsequent partitions on a 1 MiB boundary
gpart add -a 1m -i 2 -s 16g -t freebsd-swap ${DISK0} # swap (GEOM mirror)
gpart add -a 1m -i 2 -s 16g -t freebsd-swap ${DISK1}
gpart add -a 1m -i 3 -s 86g -t freebsd-zfs ${DISK0} # root (ZFS mirror)
gpart add -a 1m -i 3 -s 86g -t freebsd-zfs ${DISK1}
gpart add -a 1m -i 4 -s 12g -t freebsd-zfs ${DISK0} # ZIL (ZFS mirror)
gpart add -a 1m -i 4 -s 12g -t freebsd-zfs ${DISK1}
gpart add -a 1m -i 5 -s 64g -t freebsd-zfs ${DISK0} # L2ARC (ZFS stripe)
gpart add -a 1m -i 5 -s 64g -t freebsd-zfs ${DISK1}
# The rest is TRIM'd (newfs -E) and unused (overprovisioning on the SSD)
gpart add -a 1m -i 6 -t freebsd-ufs ${DISK0} # unused
gpart add -a 1m -i 6 -t freebsd-ufs ${DISK1}
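For completeness, the TRIM mentioned in the last comment would be something like this (my addition, not part of the original listing; newfs -E erases the partition, and the resulting filesystem is simply left unused):

newfs -E /dev/${DISK0}p6
newfs -E /dev/${DISK1}p6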
> Edit the BIOS boot order to favour new root mirrors, e.g. ada2, ada3.

My crossflashed H710P does not support selecting a drive to boot. It just tries all the disks one by one.

To configure a new default ZFS boot pool, create, in the file system of the first pool gptzfsboot(8) detects (the current default boot pool), the file /boot.config and set:

zfs:xxx/ROOT/default:

I was just interested in this method, but the man page does not go into much detail about it. It looks like the most painless solution, I'll give it a try, thanks.
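A minimal sketch of that /boot.config approach (my example; the pool and dataset names and the /mnt/defaultpool mount point are assumptions):

# The file must live in the root dataset of the pool gptzfsboot detects
# first (the current default boot pool), assumed mounted at /mnt/defaultpool.
echo 'zfs:zroot2/ROOT/default:' > /mnt/defaultpool/boot.config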
To exclude an unused ZFS pool (or pools) from booting permanently, its zpoolprops(7) "bootfs" property can be unset.
> I thought that in order to exclude an unnecessary pool from booting, I could change its partition type with "gpart modify" from freebsd-zfs to something else. The gptzfsboot man page says that it only tries freebsd-zfs partitions. What do you think about this?

I considered that option as well, but it doesn't work. The boot process stops at the BTX "boot" loader prompt.
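For reference, the retyping being discussed would look roughly like this (disk name and partition index are only placeholders; per the reply above, this approach left the boot stuck at the BTX loader prompt):

gpart modify -i 3 -t freebsd-ufs ada1   # change partition 3 from freebsd-zfs to another type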
> Man gptzfsboot says: "If the bootfs property is not set, then the root filesystem of the pool is used as the default."

I believe that sentence is misphrased. If there are multiple ZFS pools, gptzfsboot(8) boots the first pool it finds on a freebsd-zfs partition that has the ZFS pool property "bootfs" set.
If you feel comfortable rebuilding and reinstalling gptzfsboot, then you can try a patch from here: https://reviews.freebsd.org/D33302. The review's summary says:
> zfs boot: prefer a pool with bootfs set for a boot pool
>
> In other words, ignore pools without bootfs property when considering
> candidates for a boot pool. Use a pool without bootfs only if none of
> discovered pools has it.
>
> This should allow to easily designate a boot pool on a system with
> multiple pools without having to shuffle disks, change firmware boot
> order, etc.
> If you feel comfortable rebuilding and reinstalling gptzfsboot, then you can try a patch from here: https://reviews.freebsd.org/D33302

This diff applies to a particular version of the source code, right? How do I know that I'm applying it to the correct one?
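For what it's worth, fetching and test-applying a Phabricator diff against a release source tree could look roughly like this (a sketch only; the branch name, paths, and the raw-diff download URL are my assumptions, and the review page shows which base the diff was generated against):

# check out the release sources the patch is meant for
git clone -b releng/13.1 --depth 1 https://git.freebsd.org/src.git /usr/src
cd /usr/src
# download the raw diff from the review and do a dry run before applying it
fetch -o /tmp/D33302.diff 'https://reviews.freebsd.org/D33302?download=true'
git apply --check /tmp/D33302.diff   # reports problems if it will not apply cleanly
git apply /tmp/D33302.diff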
> If a pool has "bootfs" unset, the next pool in line having it set is booted. I verified by testing in a VM.

If so, that solves all the problems. I will try it now on the VM, thank you.
Regarding the review's summary quoted above: speaking for 13.1-RELEASE, isn't that already the case, except for "Use a pool without bootfs only if none of discovered pools has it"? See the explanation in post #7.
T-Daemon, I believe that you had some problem in your testing. The gptzfsboot manual has the correct wording: the bootfs property does not play any role in selecting which pool to boot from; bootfs only plays a role in selecting which filesystem to boot on the chosen pool.

I'll check again and get back.
> I believe that you had some problem in your testing.

You are right. I can't reproduce it anymore.
# zpool import -R /mnt zroot0
# zpool set bootfs= zroot0
# zpool get bootfs
NAME    PROPERTY  VALUE                SOURCE
zroot0  bootfs    -                    default
zroot1  bootfs    zroot1/ROOT/default  local
BIOS drive C: is disk0
BIOS drive D: is disk1
Can't find /boot/zfsloader
Can't find /boot/loader
Can't find /boot/kernel/kernel
FreeBSD/x86 boot
Default: zfs:zroot0:/boot/kernel/kernel
boot:
Can't find /boot/kernel/kernel
FreeBSD/x86 boot
Default: zfs:zroot0:/boot/kernel/kernel
boot:
# diff 13.1-RELEASE/stand/libsa/zfs/zfsimpl.c 13.1-RELEASE-bootfs/stand/libsa/zfs/zfsimpl.c
# diff 13.1-RELEASE/sys/cddl/boot/zfs/zfsimpl.h 13.1-RELEASE-bootfs/sys/cddl/boot/zfs/zfsimpl.h
1871a1872
> uint64_t spa_bootfs; /* bootfs object id */
The bootme and bootonce attributes, from gptboot(8):

  bootme    Attempt to boot from this partition. If more than one
            partition has the bootme attribute set, gptboot will
            attempt to boot each one until successful.

  bootonce  Attempt to boot from this partition only one time.
            Setting this attribute with gpart(8) automatically also
            sets the bootme attribute. Multiple partitions may have
            the bootonce and bootme attributes set.
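These attributes are set per partition with gpart(8), for example (disk and partition index are placeholders):

gpart set -a bootme -i 2 ada0     # mark partition 2 as a boot candidate for gptboot
gpart unset -a bootme -i 2 ada0   # clear it again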
> Did you actually install it into the boot partition?

After applying the patch on 13.1-RELEASE and building world, I didn't install gptzfsboot into an existing boot partition manually.
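In case it helps, writing the freshly built boot code into existing freebsd-boot partitions is the same gpart bootcode invocation as in the partitioning listing earlier (disk names and index 1 here are assumptions matching the layout shown below):

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1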
> Do you have boot partitions on both disks or just on one of them?

On both.
=>      40  4194224  ada0  GPT  (2.0G)
        40     1024     1  freebsd-boot  (512K)
      1064      984        - free -  (492K)
      2048   409600     2  freebsd-swap  (200M)
    411648  3780608     3  freebsd-zfs  (1.8G)
   4192256     2008        - free -  (1.0M)

=>      40  4194224  ada1  GPT  (2.0G)
        40     1024     1  freebsd-boot  (512K)
      1064      984        - free -  (492K)
      2048   409600     2  freebsd-swap  (200M)
    411648  3780608     3  freebsd-zfs  (1.8G)
   4192256     2008        - free -  (1.0M)
@@ -3488,6 +3498,22 @@
...
+ printf("ZFS: failed to read pool %s properties object\n",
> Also, I think that EFI boot does not use gptzfsboot. I think that it uses efiloader that's installed in the efi / msdos filesystem.

Of course! I had a misconception there.
> T-Daemon, could you please check if both disks are visible to the boot blocks?

Just in case it's still of interest.
> I have updated the patch in the review.

Thank you for your effort, now it works as expected.
> Also, I looked at the EFI boot code and it seems like it tries only partitions on the boot disk (the disk on which the ESP / EFI partition was used for booting). It does not consider other disks. I may try to work on that code too, but no promises.

That may not be necessary. This is what I observed on UEFI: