ZFS: Add temporary drive to mirror existing RAIDZ1 pool

I have a 4 x 3TB RAIDZ1 array where a drive has failed. Replacement ordered - now waiting.

I have a spare 12TB drive which I want to add as a temporary mirror to preserve the integrity of the pool.

Am I right that "zpool add" would add the drive permanently to the array? Additionally, the man page for "zpool attach" specifically says, "The existing device cannot be part of a raidz configuration."

Is there a way to do it?

Thanks.
 
The easiest way is to temporarily replace the failed drive with the 12TB one and replace it again when the spare arrives.
Mirroring the whole pool with the big drive is not possible, AFAIK.
Or
you can split the large drive into 4 partitions and mirror each 3TB drive with a partition of the big drive, but for this you need to retire every drive, mirror it, and replace it with the mirror, which looks awkward and will probably degrade performance.
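A rough sketch of the first option, assuming the pool is called tank, the failed disk is ada3 and the 12TB spare shows up as da0 (all device names here are placeholders, adjust to your system):
Code:
# stand the 12TB spare in for the failed 3TB disk
zpool replace tank ada3 da0

# watch the resilver finish
zpool status tank

# when the real 3TB replacement (say ada4) arrives, swap it back in
zpool replace tank da0 ada4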
 
You could replace the failed drive with the large one, as covacat said. And when the replacement arrives, you put that one into the pool as well, and after resilvering remove the 12T again.
Another way would be to create a separate pool on a single partition of the 12T drive, and then mirror the contents of both pools. (The 4x3T raidz1 is smaller than 12T [~8...10?])
But why so complicated?
It's just to have some temporary, additional security for a couple of days until the replacement arrives, right?
So why not simply make a backup of the pool instead of fumbling with it twice?
I would create a 12T UFS partition on the large drive, and copy (rsync) the zfs pool's content to it.
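Roughly like this, assuming the 12TB drive is da0 and the pool is mounted at /tank (both placeholders). Note this wipes whatever is already on da0:
Code:
# partition the 12TB drive and put a UFS filesystem on it
gpart create -s gpt da0
gpart add -t freebsd-ufs -l tmpbackup da0
newfs -U /dev/gpt/tmpbackup

# mount it and copy the pool's contents over
mount /dev/gpt/tmpbackup /mnt
rsync -aH /tank/ /mnt/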


 
I forgot to mention in the OP that I've actually ordered 2 drives, and was considering going with a 5-drive RAIDZ2. So one goal was to be able to potentially transition from 4x RAIDZ1 to 5x RAIDZ2 by temporarily mirroring, rather than having to back up, destroy, create, restore... but that seems unlikely? Large parts of the pool use zstd-15 compression, so it would require a lot of compute power (and time) to write everything back to a completely new pool. (Even a simple incremental backup of one server to the array pushes CPU load to 30+.)

I actually have two spare 12TB drives (overkill, I know!) so for the moment I'm making a complete backup to a separate file system, then I'll use the second 12TB drive as a temporary replacement for the failed 3TB drive.

[EDIT: it looks like zfs send can bypass the decompress-recompress step by using the -c and -L flags, which makes transferring previously compressed data from one pool to another more feasible.]
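For example, a replication stream that keeps the compressed blocks intact might look like this (pool and dataset names are made up):
Code:
# snapshot the source dataset tree
zfs snapshot -r tank/data@migrate

# -c sends the blocks compressed as stored on disk, -L permits records larger than 128k
zfs send -R -c -L tank/data@migrate | zfs receive -u newtank/data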
 
I have attached the plan I used to migrate 7x3TB RAIDZ1 -> 1x12TB disk -> four 2x3TB mirrors. Not exactly what you want, but some useful ideas.

My 8 x 3.5" disks are in a vertical stack. They are labeled from the bottom to top L0 through L7.

The level in the stack and the serial number are encoded into the gpt label of each drive -- an invaluable aid when swapping out dead disks (I just did one today that had 61058 hours clocked up).

To minimise down time, each zfs send was done twice -- an initial (with all clients running), and an incremental (with the clients quiesced).
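The two-pass send looks roughly like this (snapshot and pool names are illustrative, not the ones from my plan):
Code:
# pass 1: full send while the clients are still running
zfs snapshot -r tank@move1
zfs send -R tank@move1 | zfs receive -Fu newtank

# pass 2: quiesce the clients, snapshot again, send only the delta
zfs snapshot -r tank@move2
zfs send -R -i tank@move1 tank@move2 | zfs receive -Fu newtank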

This was done quite a while ago. Its best use is as a guide to feel your own way. Caveat emptor.

Not all 3TB disks are exactly the same size, and when you add a replacement "disk" it must be at least as large as the "disk" it replaces. Consider placing a partition on each drive, and using that partition as a GEOM provider to ZFS. Make the partition size the "lowest common denominator" for all the disks you might ever use. My plan does not do this, but I would do it next time.
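A sketch of that scheme, assuming a new disk at da1 going into stack position L3 with a made-up serial number, and a deliberately conservative partition size (pick your own lowest common denominator):
Code:
# GPT scheme on the bare disk
gpart create -s gpt da1

# one fixed-size partition, 1M-aligned, labeled with stack position and serial
gpart add -t freebsd-zfs -a 1m -s 2794G -l L3:WD1234ABCD da1

# ZFS then consumes the labeled provider instead of the raw device,
# e.g. (gpt/OLD_LABEL is a stand-in for whatever is being replaced)
zpool replace tank gpt/OLD_LABEL gpt/L3:WD1234ABCD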

Today my tank looks like this. The special mirror-5 and log mirror-4 (which use SSD partitions) were added after the attached plan was completed, and some of the disks have been replaced:
Code:
[sherman.131] $ zpool status tank
  pool: tank
 state: ONLINE
  scan: resilvered 1.28T in 02:16:17 with 0 errors on Sat Apr 19 10:53:41 2025
config:

    NAME                      STATE     READ WRITE CKSUM
    tank                      ONLINE       0     0     0
      mirror-0                ONLINE       0     0     0
        gpt/L1:ZC1564PG       ONLINE       0     0     0
        gpt/L6:WMC1T1408153   ONLINE       0     0     0
      mirror-1                ONLINE       0     0     0
        gpt/L0:ZC135AE5       ONLINE       0     0     0
        gpt/L5:WMC1T2195505   ONLINE       0     0     0
      mirror-2                ONLINE       0     0     0
        gpt/L4:ZC12LHRD       ONLINE       0     0     0
        gpt/L3:ZC1C68SW       ONLINE       0     0     0
      mirror-3                ONLINE       0     0     0
        gpt/L2:ZC1AKXQM       ONLINE       0     0     0
        gpt/L7:WE23ZTX9       ONLINE       0     0     0
    special
      mirror-5                ONLINE       0     0     0
        gpt/410008H400VGN:p5  ONLINE       0     0     0
        gpt/26B7686E6F2B7:p5  ONLINE       0     0     0
    logs
      mirror-4                ONLINE       0     0     0
        gpt/410008H400VGN:p4  ONLINE       0     0     0
        gpt/26B7686E6F2B7:p4  ONLINE       0     0     0

errors: No known data errors
 

