ZFS: Move 8 SSDs to a new server

I have 8 Kingston DC600M 960GB SSDs in a Dell R730 server which I want to move to a new server without reinstalling everything.
Both are legacy boot with the same setup (more or less, as one is newer). No RAID controller, as I use the internal SATA miniSAS ports.

FreeBSD 14.2 is configured with ZFS like:
Code:
$ zpool status
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:16:23 with 0 errors on Wed Jan  1 15:40:44 2025
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        ada0p3  ONLINE       0     0     0
        ada1p3  ONLINE       0     0     0
      mirror-1  ONLINE       0     0     0
        ada2p3  ONLINE       0     0     0
        ada3p3  ONLINE       0     0     0
      mirror-2  ONLINE       0     0     0
        ada4p3  ONLINE       0     0     0
        ada5p3  ONLINE       0     0     0
      mirror-3  ONLINE       0     0     0
        ada6p3  ONLINE       0     0     0
        ada7p3  ONLINE       0     0     0

errors: No known data errors

Can I just move the SSDs to the new server, or will I destroy everything?
I think it will work, but I'm asking just in case. I have moved a smaller setup before, but not 8 disks.

It's vm-bhyve with a lot of VMs, and I don't want to reinstall/move them.
 
Yes, it will work.
Is there any reason to be concerned about the device names, like ada[0-7], if the devices are not plugged into the corresponding ports in the new server?
I know moving things has worked for me in the past, but I tend to use GPT labels.
Since he has mirrors, he can take advantage of zpool detach, relabel, zpool attach with the label, let it resilver, repeat on the other device in the mirror, then repeat across all mirrors.
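Something like this per mirror; an untested sketch with made-up label names (zdisk0, zdisk1) and partition index 3 as in the layout above:
Code:
zpool detach zroot ada1p3
gpart modify -i 3 -l zdisk1 ada1       # put a GPT label on the existing ZFS partition
zpool attach zroot ada0p3 gpt/zdisk1   # re-attach by label and let it resilver
# may need zpool labelclear -f ada1p3 first if ZFS still sees the old label;
# then detach/relabel/attach ada0p3 the same way, and repeat for mirror-1..3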
 
Since you have the pool mirrored, you can, if you want, disconnect one device per mirror and include only 4 devices in the new server, in case you want to keep the "old" one running and check that the new one works correctly:

Code:
zpool offline zroot ada1p3 ada3p3 ....
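If you later reconnect those disks to the same pool, bringing them back should only resilver the delta; a sketch, assuming the device names stay the same after the move:
Code:
zpool online zroot ada1p3 ada3p3 ada5p3 ada7p3   # names may differ on the new box
zpool status zroot                               # watch the resilver catch up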

You shouldn't have any problems with the device names. Regarding your question about the order of the devices, it's not a problem either. It would only be a problem if you had a mirror for zroot and other devices that formed another pool; then you would have to configure the HBA to point to the bays where you have the mirror so that it boots from them.
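For what it's worth, ZFS finds pool members by on-disk metadata (GUIDs), not by port or device name, so if the pool ever fails to come up on the new box it can be imported by hand from a live/rescue shell; a sketch:
Code:
zpool import                   # lists zroot with whatever adaN names the new server assigned
zpool import -f -R /mnt zroot  # manual import; -f because the pool was last used by another hostid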
 
Since this is all ZFS, I think the only thing the OP has to get right is to put the disk with the bootloader first.
 
I am interested: will ZFS automatically recognize each disk as part of the array on the new hardware, or must they be connected to the same port numbers?
 
Thanks. I know it's not always possible, but this is why I've always tried to separate my data (home directories, et al.) from the OS. It has made migrating easier.

In the OP's situation, it sounds like the key is getting the new server to recognize the bootable devices, so a stop in the BIOS to set them may be needed.
 
Thanks!
Ok, I have two offers on the table here. :)
1. Move all disks.
2. Disconnect 4 drives. Reconnect them later.

I think I will go for no. 1.
No. 2 is very smart! I didn't think about that. But I don't know if I dare to do that on this setup. I will definitely set up a test server and run a lot of tests on this one, though.

I'm not an expert with ZFS. About this bootloader: how can I see which disk it is on?

Code:
# glabel status
                  Name  Status  Components
          gpt/gptboot0     N/A  ada0p1
          gpt/gptboot1     N/A  ada1p1
          gpt/gptboot2     N/A  ada2p1
          gpt/gptboot3     N/A  ada3p1
          gpt/gptboot4     N/A  ada4p1
          gpt/gptboot5     N/A  ada5p1
          gpt/gptboot6     N/A  ada6p1
          gpt/gptboot7     N/A  ada7p1
ufsid/66cc9e5c7365d88f     N/A  md0
ufsid/66cc9ecf3d1c203e     N/A  md1

All disks have:
40 1024 1 freebsd-boot (512K) in # gpart show
type: freebsd-boot in # gpart list

All disks are identical.
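As an aside, the dataset the loader actually boots (as opposed to the loader partitions themselves) can be checked via the pool's bootfs property:
Code:
zpool get bootfs zroot   # typically something like zroot/ROOT/default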
 
I'm not sure if one can tell after boot, but if you reboot the current server and stop in the BIOS, you should be able to figure out which devices are the actual boot devices.
Looking at your first post, I think you have a "stripe of mirrors"; I'm guessing the BIOS would say ada0 and ada1 are the possible boot devices.
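If it helps to map the adaN names to physical bays before the move, the disk models and serials can be read from the running system; a sketch (output formats vary):
Code:
camcontrol devlist              # lists adaN devices with their model strings
diskinfo -v ada0 | grep ident   # prints the disk serial for cross-checking a bay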
 
Ok, I was thinking about the BIOS. I will check on a test server first, as this is in production.

But this raises a new question.
If this bootloader disk is so vulnerable, what if it burns up right now, and no other disk has that information? Can you put in a new disk, and will that restore the bootloader?

Is this not a weak point in ZFS? Or am I missing something here?
As I said, I am not a ZFS expert. ;)
 
Well, the BIOS dictates that you cannot have RAID1 for the bootloader. But you can have the same bootloader at the beginning of every disk. You just have to remember to update all of them, instead of just one, when it's upgrade time.
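For the "disk burns up" case, the usual recipe on a layout like this is roughly as follows; a sketch only, where ada8 stands in for a hypothetical replacement disk and ada1p3 for the failed mirror member:
Code:
gpart backup ada0 | gpart restore -F ada8                    # clone the partition table from a healthy disk
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada8   # write the legacy loader to the new disk
zpool replace zroot ada1p3 ada8p3                            # resilver the mirror onto the replacement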
 
The bootloader is not specifically "ZFS". As nxjoseph points out, mirroring boot partitions is a way around this, but boot partitions typically sit before ZFS. Your first post has the ZFS vdevs on adaNp3, while your post in #11 shows a partition of type freebsd-boot on adaNp1. That means every one of those is a potential bootable partition, but they all need to be set up correctly. Which is what cracauer@ and Eric A. Borisch were talking about before I hit "enter" on this post.
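As a sketch of "set up correctly": after an upgrade, the legacy loader can be rewritten on all eight disks in one go (partition index 1 matches the freebsd-boot partitions shown above):
Code:
# refresh the stage-1/stage-2 legacy boot code on every disk
for d in ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7; do
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $d
done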
 
Ok, now I am on track.
As I don't have a RAID/HBA controller in the servers (I use the internal SATA miniSAS controller), I can't choose any RAID setup in the BIOS for this miniSAS. It's just "Boot from Hard Disk" in legacy mode; that's why I was confused.

Well, I think I got all the answers.
I will plan the swap for the near future and will post an update later about how it went.
Thanks all!
 