Yes. I believe you can migrate your data this way: create a ZFS pool on top of 4 separate single-disk vdevs, one per new drive, which gives you about 14.5 TB of usable disk space:

zpool create pool-name new-disk1
zpool add pool-name new-disk2
zpool add pool-name new-disk3
zpool add pool-name new-disk4
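Once the pool exists, you can sanity-check the layout and capacity (the pool name is a placeholder):

zpool status pool-name
zpool list pool-name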
Now copy your data to the new pool and run a scrub to make sure everything is okay.
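For example, a snapshot-based send/receive preserves datasets and properties; a minimal sketch, assuming the pools are named old-pool and new-pool (adjust to your names):

# recursive snapshot of everything on the old pool
zfs snapshot -r old-pool@migrate
# replicate all datasets, their properties, and the snapshot itself
zfs send -R old-pool@migrate | zfs receive -Fu new-pool
# verify the copy is readable
zpool scrub new-pool
zpool status new-pool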
At this point you have two copies of your data: the old one on RAIDZ1 and the new one on the ZFS "stripe". So far so good.
The risk is that you'll have to destroy your RAIDZ1 pool in order to move the old hard drives to the new pool: a new-drive failure after that will lead to data loss. That's the point of the scrub: to gain some confidence that the cloned data is readable (and will hopefully remain okay over the next few days).
Now add redundancy to the new pool: first destroy the existing RAIDZ1 pool, then upgrade the single-disk vdevs to mirror vdevs:

zpool attach pool-name new-disk1 old-disk1
zpool attach pool-name new-disk2 old-disk2
...

You get the idea.
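Each attach starts a resilver onto the old disk; wait for it to finish and confirm that every vdev now shows up as a two-way mirror:

zpool status pool-name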
Do consider the following:
- be sure to compare physical disk sizes (old vs. new) with geom disk list;
- decide whether to use raw devices or aligned partitions sized a few tens of megabytes smaller than the disk, in case you later need to replace a physical disk with a newer one of similar, but not exactly the same, size (see the sketch after this list);
- be aware of the risk involved: a new-disk failure after the RAIDZ1 pool is destroyed will lead to data loss, as might running an incorrect command.
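If you go the partition route, here's a minimal sketch with gpart; the device name, label, and size are hypothetical, so check yours with geom disk list first:

# hypothetical device da1; GPT scheme, 1 MiB alignment
gpart create -s gpt da1
# size chosen a bit below the disk's capacity to leave slack at the end;
# the vdev is then addressed as gpt/newdisk1
gpart add -t freebsd-zfs -a 1m -s 3725g -l newdisk1 da1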
I would strongly suggest playing with ZFS a bit first, perhaps in a VM or with image files and mdconfig(8), to test the whole procedure in advance at a smaller scale. 11 TB is a lot of data!
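For instance, a throwaway four-disk rehearsal with md devices (file names and sizes are arbitrary):

# four small sparse backing files
truncate -s 1g /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
# attach them as memory disks md0..md3
mdconfig -a -t vnode -f /tmp/d1
mdconfig -a -t vnode -f /tmp/d2
mdconfig -a -t vnode -f /tmp/d3
mdconfig -a -t vnode -f /tmp/d4
# rehearse the whole migration on a test pool
zpool create testpool md0
zpool add testpool md1 md2 md3
# ... repeat the copy/scrub/attach steps from above ...
# then clean up
zpool destroy testpool
for u in 0 1 2 3; do mdconfig -d -u $u; done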
Well, good luck!