I have attached the plan I used to migrate 7x3TB RAIDZ1 -> 1x12TB disk -> four 2x3TB mirrors (i.e. everything was staged onto a single 12TB disk while the pool was rebuilt as mirrors). It's not exactly what you want, but it may give you some useful ideas.
My 8 x 3.5" disks sit in a vertical stack, labeled from bottom to top L0 through L7.
The stack position and the serial number are encoded into the GPT label of each drive -- an invaluable aid when swapping out dead disks (I replaced one today that had clocked up 61058 hours).
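If you want to copy the idea, here is a minimal sketch of how such a label gets onto a new disk. The device name and serial below are placeholders, not from my system:
Code:
# New disk sitting in slot 4 of the stack (fifth from the bottom);
# read the serial off the drive sticker or from smartctl -i.
gpart create -s gpt da4
gpart add -t freebsd-zfs -l L4:SERIAL1234 da4
# The partition now appears as /dev/gpt/L4:SERIAL1234, and that is the
# name zpool status reports, so a failure tells you both the slot and
# the serial of the drive to pull.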
To minimise down time, each zfs send was done twice -- an initial send (with all clients running), and an incremental send (with the clients quiesced).
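In outline the two passes look something like this (pool and snapshot names are placeholders, and the destination pool is assumed to be imported on the same box):
Code:
# Pass 1: full replication stream with all clients still running.
zfs snapshot -r tank@mig1
zfs send -R tank@mig1 | zfs receive -Fdu newtank

# Pass 2: quiesce the clients, then send only what changed since pass 1.
zfs snapshot -r tank@mig2
zfs send -R -i tank@mig1 tank@mig2 | zfs receive -Fdu newtank

The first pass does the bulk of the copying while everything stays online; the second pass is small, so the clients are only stopped briefly.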
This was done quite a while ago. Its best use is as a guide while you feel your own way. Caveat emptor.
Not all 3TB disks are exactly the same size, and a replacement "disk" must be at least as large as the "disk" it replaces. Consider placing a partition on each drive and using that partition as the GEOM provider for ZFS. Make the partition size the "lowest common denominator" of all the disks you might ever use. My plan does not do this, but I would do it next time.
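A sketch of that approach; the 2790g figure below is only an illustration of a "lowest common denominator" size, so check the real sizes of your own drives before picking a number:
Code:
# Check the exact raw capacity of each candidate disk:
diskinfo -v /dev/da2 | grep "mediasize in bytes"
# Give every disk an identically sized, aligned partition and point ZFS
# at the partition (via its GPT label) rather than the whole disk.
# (Assumes the GPT scheme already exists on da2, as in the earlier example.)
gpart add -t freebsd-zfs -a 1m -s 2790g -l L2:SERIAL5678 da2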
Today my tank looks like this. The special mirror-5 and log mirror-4 (which use SSD partitions) were added after the attached plan was completed, and some of the disks have been replaced:
Code:
[sherman.131] $ zpool status tank
  pool: tank
 state: ONLINE
  scan: resilvered 1.28T in 02:16:17 with 0 errors on Sat Apr 19 10:53:41 2025
config:

        NAME                      STATE     READ WRITE CKSUM
        tank                      ONLINE       0     0     0
          mirror-0                ONLINE       0     0     0
            gpt/L1:ZC1564PG       ONLINE       0     0     0
            gpt/L6:WMC1T1408153   ONLINE       0     0     0
          mirror-1                ONLINE       0     0     0
            gpt/L0:ZC135AE5       ONLINE       0     0     0
            gpt/L5:WMC1T2195505   ONLINE       0     0     0
          mirror-2                ONLINE       0     0     0
            gpt/L4:ZC12LHRD       ONLINE       0     0     0
            gpt/L3:ZC1C68SW       ONLINE       0     0     0
          mirror-3                ONLINE       0     0     0
            gpt/L2:ZC1AKXQM       ONLINE       0     0     0
            gpt/L7:WE23ZTX9       ONLINE       0     0     0
        special
          mirror-5                ONLINE       0     0     0
            gpt/410008H400VGN:p5  ONLINE       0     0     0
            gpt/26B7686E6F2B7:p5  ONLINE       0     0     0
        logs
          mirror-4                ONLINE       0     0     0
            gpt/410008H400VGN:p4  ONLINE       0     0     0
            gpt/26B7686E6F2B7:p4  ONLINE       0     0     0

errors: No known data errors
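For what it's worth, special and log mirrors like the ones above are added to a live pool with zpool add. Roughly like this, using the SSD partition labels from the listing (I'm not claiming these are the exact commands I ran):
Code:
# Mirrored special vdev for metadata (and small blocks, if you set
# special_small_blocks on the datasets):
zpool add tank special mirror gpt/410008H400VGN:p5 gpt/26B7686E6F2B7:p5
# Mirrored SLOG for synchronous writes:
zpool add tank log mirror gpt/410008H400VGN:p4 gpt/26B7686E6F2B7:p4

The special vdev is mirrored for a reason: if it is lost, the pool is lost with it.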