Upgrading ZFS/zpool under FreeBSD 14?

Hello there. I just upgraded from 13.2 to 14.0-RELEASE via the freebsd-update tool.

I guess I need to manually run a command for the ZFS update/upgrade after upgrading FreeBSD from 13.2 to 14, right?
I'd like to be careful with this step because this is a remote dedicated server with no physical access and no IPMI/KVM either.
And it seems the server doesn't boot via UEFI?

sysctl machdep.bootmethod reports (no UEFI?):
machdep.bootmethod: BIOS

So, after reading forum posts here, it seems the command for that is:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
"ada0" is just an example disk name; how can I check mine?
(I have two NVMe disks in one ZFS-stripe)

"zpool status" lists:

Bash:
[FreeBSD-root@x:~]# zpool status zroot
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:04:12 with 0 errors on Fri Nov 17 03:18:57 2023
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          nvd0p3    ONLINE       0     0     0
          nvd1p3    ONLINE       0     0     0


And the output of geom -t:
Bash:
[FreeBSD-root@x:~]# geom -t
Geom               Class      Provider
nvd0               DISK       nvd0
  nvd0             PART       nvd0p1
    nvd0p1         LABEL      gpt/gptboot0
      gpt/gptboot0 DEV
    nvd0p1         DEV
  nvd0             PART       nvd0p2
    swap           SWAP
    nvd0p2         DEV
  nvd0             PART       nvd0p3
    nvd0p3         DEV
    zfs::vdev      ZFS::VDEV
  nvd0             DEV
nvd1               DISK       nvd1
  nvd1             PART       nvd1p1
    nvd1p1         LABEL      gpt/gptboot1
      gpt/gptboot1 DEV
    nvd1p1         DEV
  nvd1             PART       nvd1p2
    swap           SWAP
    nvd1p2         DEV
  nvd1             PART       nvd1p3
    nvd1p3         DEV
    zfs::vdev      ZFS::VDEV
  nvd1             DEV

Thanks a lot!
 
I guess I need to manually run a command for the ZFS update/upgrade after upgrading FreeBSD from 13.2 to 14, right?
Correct.
And it seems the server doesn't boot via UEFI?

sysctl machdep.bootmethod reports:
machdep.bootmethod: BIOS
Correct. It's telling you it used CSM aka the traditional BIOS boot.

So, after reading forum posts here, it seems the command for that is:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0. "ada0" is just an example disk name; how can I check mine?
Run gpart show and look for the freebsd-boot partition; its index is the one you need. Update the bootcode on every disk that has a freebsd-boot partition.

Code:
# gpart show
=>      40  41942960  vtbd0  GPT  (20G)
        40    532480      1  efi  (260M)
    532520      1024      2  freebsd-boot  (512K)
    533544       984         - free -  (492K)
    534528   8388608      3  freebsd-swap  (4.0G)
   8923136  33017856      4  freebsd-zfs  (16G)
  41940992      2008         - free -  (1.0M)
In my case there's an efi and a freebsd-boot partition, so I need to use index 2. Definitely verify which index your system uses, because it isn't necessarily index 1.
 
SirDice, thank you very much for your ALWAYS-useful replies!

"gpart show" shows:

Bash:
[FreeBSD-root@x:~]# gpart show
=>        40  2000409184  nvd0  GPT  (954G)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048    67108864     2  freebsd-swap  (32G)
    67110912  1933297664     3  freebsd-zfs  (922G)
  2000408576         648        - free -  (324K)

=>        40  2000409184  nvd1  GPT  (954G)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048    67108864     2  freebsd-swap  (32G)
    67110912  1933297664     3  freebsd-zfs  (922G)
  2000408576         648        - free -  (324K)

So I guess, now, the commands for me are:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 nvd0
and
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 nvd1

Right?

My last question: before posting here, I read about steps like these as well:

cp /boot/loader.efi /boot/efi/efi/boot/bootx64.efi
and also:
cp /boot/loader.efi /boot/efi/efi/freebsd/loader.efi

Do I need to execute them, too?

Much regards.
 
Spot on.

Do I need to execute them, too?
No, those are only needed if you have an efi boot partition. In my example I've set up both efi and freebsd-boot, so I need to update them both (I can switch back and forth between UEFI and BIOS boot). Your output doesn't show an efi partition, so there's nothing to update there.
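For reference, the two confirmed bootcode commands can be combined into a loop, followed by the pool feature upgrade the thread started with. This is a sketch under my own assumptions (pool named zroot as in the earlier zpool status output; freebsd-boot at index 1 on both disks, per the gpart show output), printed as a dry run:

```shell
# Dry-run sketch: write the new gptzfsboot to the freebsd-boot partition
# (index 1) on both NVMe disks, then upgrade the pool features.
# The echoes make this a harmless preview; remove them to run for real.
for disk in nvd0 nvd1; do
  echo "gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $disk"
done
# Only after the bootcode on every disk is current:
echo "zpool upgrade zroot"
```

Upgrading the pool last matters: once zpool upgrade enables new feature flags, an old gptzfsboot may no longer be able to read the pool.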
 
My last question: before posting here, I read about steps like these as well:

cp /boot/loader.efi /boot/efi/efi/boot/bootx64.efi
and also:
cp /boot/loader.efi /boot/efi/efi/freebsd/loader.efi

Do I need to execute them, too?
No. These are needed only for UEFI boot.
As SirDice already noted, you don't have an EFI partition, so these steps are unneeded for you. Proceed with only the gpart method.
 
Since this one was about a ZFS stripe, what about a ZFS mirror?
I have a server running two disks in a mirror and I updated ada0p1 with the new loader.efi, but when I try to mount ada1p1 I get an "invalid argument" error.

ada0p3 and ada1p3 are the partitions in the ZFS mirror, so I assume I would need to update ada1p1 as well with the new loader.efi.
But how?
 
but when I try to mount ada1p1 I get an "invalid argument" error.
How did you try to mount it? Are you sure that
  • The mount point you used had nothing already mounted on it? For example, did you try to mount it at /boot/efi while ada0p1 was still mounted there?
  • You mounted it with mount_msdosfs or mount -t msdosfs? The ESP must be formatted as FAT32 or FAT16 (some motherboards additionally support FAT12, though).
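A sketch of what the second bullet suggests, assuming ada1p1 really is a FAT-formatted ESP (/mnt/esp1 is a mount point invented here for illustration), echoed as a dry run:

```shell
# Dry-run sketch: mount the second disk's ESP at an empty, unused mount point
# instead of /boot/efi (which may already hold ada0p1 on this system).
# Remove the echoes to actually run the commands.
echo "mkdir -p /mnt/esp1"
echo "mount -t msdosfs /dev/ada1p1 /mnt/esp1"
```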
And just my opinion, but every physical drive in a RAID (including a mirror) should have an ESP (and, if not Root on ZFS, a FreeBSD root partition, too).
Without them, how do you boot when the first physical drive dies and you need to boot from the second or a later drive?
Moving a still-living physical drive to a different bay / interface can cause problems with RAID. (The moved drive may be considered failed, too; for a 2-drive mirror or RAID5 [single parity], that would be fatal.)
 
Since this one was about a ZFS stripe, what about a ZFS mirror?
I have a server running two disks in a mirror and I updated ada0p1 with the new loader.efi, but when I try to mount ada1p1 I get an "invalid argument" error.

ada0p3 and ada1p3 are the partitions in the ZFS mirror, so I assume I would need to update ada1p1 as well with the new loader.efi.
But how?
I think ada1p1 simply isn't formatted. It's a known bug, or a more or less intentional limitation, of the installer (I don't know whether it still persists): it does nothing on the second disk except create the partition.

You need to format ada1p1 with msdosfs and create the appropriate paths. Then you can copy the loader to both places, exactly as you did for ada0p1.
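The steps above might look like this (a sketch, on the assumption that ada1p1 really is empty; verify with gpart show first, and note /mnt is used here as a temporary mount point). Shown as echoed dry-run commands:

```shell
# Dry-run sketch: format the second disk's efi partition as FAT32, create the
# standard ESP paths, and copy the loader, mirroring what was done for ada0p1.
# Remove the echoes to run for real.
echo "newfs_msdos -F 32 /dev/ada1p1"
echo "mount -t msdosfs /dev/ada1p1 /mnt"
echo "mkdir -p /mnt/efi/boot /mnt/efi/freebsd"
echo "cp /boot/loader.efi /mnt/efi/boot/bootx64.efi"
echo "cp /boot/loader.efi /mnt/efi/freebsd/loader.efi"
echo "umount /mnt"
```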
 
Wouldn't that be a bit risky if both disks are in a RAID (well, if the 2nd disk is at all)?
Basically, UEFI firmware cannot understand the volumes / pools of software RAID like RAIDZ, so the ESP should live outside of them. There is thus no risk, unless a specific UEFI firmware supports them, which is unusual.
The ESP can be mirrored / in a RAID if it is handled by a hardware RAID controller. In that case, if working properly, the single RAID volume is presented as a single physical disk to the UEFI firmware and the OS transparently, so no "second disk" is visible and there is no problem.
 