[Solved] Relocating zroot to new disks

I want to completely replace my existing zroot which was configured by the FreeBSD installer as a ZFS mirror on ada0p3 and ada1p3.

The new zroot is to be installed as a mirror on ada2p3 and ada3p3.

Attaching ada2p3 and ada3p3 to the existing zroot mirror and then detaching ada0p3 and ada1p3 is not an option, as the new root partitions are significantly smaller than the old ones.

I have manually initialised both new disks identically with gpart(8), establishing the protective MBR (with /boot/pmbr), freebsd-boot (with /boot/gptboot), freebsd-swap (gmirror), and freebsd-zfs partitions exactly as the FreeBSD installer would. The layout is identical to the existing root disks, except that p3 is smaller:
Code:
# gpart show /dev/ada2
=>       40  488397088  ada2  GPT  (233G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   33554432     2  freebsd-swap  (16G)
   33556480  180355072     3  freebsd-zfs  (86G)
  213911552   25165824     4  NOT RELEVANT
  239077376  134217728     5  NOT RELEVANT
  373295104  115101696     6  NOT RELEVANT
  488396800        328        - free -  (164K)
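For the record, initialising each new disk went roughly along these lines; the sizes and alignment flags below are illustrative rather than a verbatim transcript, and the unrelated data partitions (p4-p6) are left out:
Code:
# illustrative sizes/flags only; repeat for ada3
gpart create -s gpt ada2
gpart add -t freebsd-boot -s 512k ada2
gpart add -t freebsd-swap -a 1m -s 16g ada2
gpart add -t freebsd-zfs -a 1m -s 86g ada2
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada2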
My plan is to zfs-send a snapshot of the zroot to the new root pool:
Code:
zpool create zroot2 mirror /dev/ada2p3 /dev/ada3p3
zfs set compression=lz4 zroot2
zfs snapshot -r zroot@replica1
zfs umount zroot2
zfs send -R zroot@replica1 | zfs receive -Fdu zroot2
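Afterwards I expect to be able to sanity-check the copy with something like:
Code:
zfs list -r -o name,used,mountpoint zroot2
zfs list -t snapshot -r zroot2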
The zroot bootfs is currently set as:
Code:
# zpool get all | grep boot
tank   bootfs                         -                              default
zroot  bootfs                         zroot/ROOT/13                  local

# zfs get canmount zroot/ROOT/13
NAME           PROPERTY  VALUE     SOURCE
zroot/ROOT/13  canmount  noauto    local

The old zroot disks will remain in situ, and I plan to relocate the swap (and fix the new fstab) after the new zroot2 is up and running.
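
The swap move itself should be straightforward; something like the following is what I have in mind, with the gmirror label "swap2" purely a placeholder and a matching /dev/mirror/swap2 entry in the new fstab:
Code:
gmirror label -b prefer swap2 ada2p2 ada3p2   # "swap2" is a placeholder name
swapon /dev/mirror/swap2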

Is this sufficient to boot using zroot2:
Code:
zpool set bootfs=zroot2/ROOT/13 zroot2
shutdown -r now
# reset BIOS boot device to ada2, and boot
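Plus, I assume, a final check along these lines can't hurt before pulling the trigger:
Code:
zpool get bootfs zroot2
zfs get -r canmount,mountpoint zroot2/ROOT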
But what is the appropriate mechanism to disable the old zroot hierarchy, so that it and its mount points are no longer visible? And do I need to boot single-user to do it? zpool export? zpool offline? swapoff and pull the cables? ...

Thanks.
 
I have now wrestled this beast to a near standstill, but there remains one problem.

The original root is in a pool called zroot. The new root is in a pool called zroot2.

When I boot from the new root disk mirror, I hear the new disks rattling, and I can see that zroot2 is being used in single-user mode.

However, the old zroot rises like Lazarus and gets mounted as soon as I go multi-user, with the zroot and zroot2 file systems overlaid on top of each other. I have tried to suppress the old zroot while in single-user mode with:
Code:
zpool import -N -f zroot
zpool export zroot
This does not help. It gets imported when the system boots up anyway.

I'm guessing that this is a problem best managed by manipulating /etc/zfs/zpool.cache, but I can't verify that or find instructions.

This is a production system, and I don't want to take a crap shoot.

So how do I stop the old zroot from being automatically imported, without destroying it, so I have a reversion path?
 
This is a production system, and I don't want to take a crap shoot.
Probably stating the very obvious, but have you got a spare machine and four drives to experiment with? That might not be the sort of thing you have lying around.

Set up the test system (as basic as possible, but close enough to production to inspire confidence) with ZFS laid out the way you have it on the production system, and then try the same changes on the test system. Then, if it all goes horribly wrong, you can try something else on the test system, and once it works you can have some confidence that it should work on production. Hopefully.
 
This is a production system, and I don't want to take a crap shoot.
Test and practice in a VM. I mostly use VirtualBox.

I don't know if there is a zpool command or subcommand to edit zpool.cache (at least I couldn't find one), but according to the OpenZFS FAQ a new zpool.cache can be generated by setting the cachefile property.

Tested in a VM:
Code:
boot into single-user mode of zroot2

# zfs mount -a
# zfs set readonly=off zroot2
# rm /boot/zfs/zpool.cache /etc/zfs/zpool.cache
# zpool set cachefile=/etc/zfs/zpool.cache zroot2
# exit

The old zroot won't be imported automatically (and its file systems won't be overlay-mounted).

EDIT: Analyzing /var/log/bsdinstall_log shows that a guided Root-on-ZFS installation sets the pool cache file to /boot/zfs/zpool.cache.
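
If you want to double-check which pools a cache file actually references before going multi-user, zdb(8) can read it directly (point -U at /boot/zfs/zpool.cache instead if that is the copy your loader uses); only zroot2 should show up:
Code:
# zdb -C -U /etc/zfs/zpool.cache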
 
Thanks to those who responded, and especially to T-Daemon for the testing. I do use VMs to workshop major changes, and did in this case, but I was simply unaware of how to manipulate the zpool.cache, and Google didn't help a lot. I have bookmarked the OpenZFS FAQ.

I now have my ZFS server running happily on a brand new root:
Code:
[sherman.145] # zpool status zroot2
  pool: zroot2
 state: ONLINE
config:

    NAME                     STATE     READ WRITE CKSUM
    zroot2                   ONLINE       0     0     0
      mirror-0               ONLINE       0     0     0
        gpt/WX31EA1DX501:p3  ONLINE       0     0     0
        gpt/WXN1E32KUAUF:p3  ONLINE       0     0     0

errors: No known data errors

[sherman.146] # zfs list zroot2
NAME     USED  AVAIL     REFER  MOUNTPOINT
zroot2  34.3G  48.5G       88K  /zroot
For those who get here via Google, I have attached the script I used to relocate the ZFS root. Beware, it's got a carriage return at the end of each line...

Oh, and I had to manually import my tank pool after I re-initialised the zpool.cache.
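That was just a plain re-import, which (as far as I can tell) also records tank in the freshly generated cache file:
Code:
zpool import tank
zpool status tank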
 
