Hi,
I am converting the tank on my ZFS server from a 5-spindle RAIDZ1 (da0 - da4) to a 7-spindle RAIDZ2 (da0 - da6).
In order to do that I have to copy all the data out of the tank to temporary storage, re-create the tank in RAIDZ2 format, and copy all the data back again.
There's currently about 7.5 TB of data to move (though I'm planning to get that down to 6.5 TB). That's going to take quite a while, and I want to explore any options to speed it up.
The two extra spindles required for the 7-spindle RAIDZ2 pool (da5 and da6) are already installed.
I also have two new 4 TB external USB 3.1 disks (da7 and da8) to use as temporary storage for the data shuffle. They are ultimately destined for off-site backup rotation.
Does anyone have experience with buffering the zfs send/receive processes when they are both running on the same host?
Given that the recordsize for all the ZFS file systems is 128K, would it speed things up if I replaced pv with mbuffer -s 128k -m 1G below?
Code:
$ zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank    13.6T  7.60T  6.03T        -         -    31%    55%  1.00x  ONLINE  -
zroot    216G  6.70G   209G        -         -     0%     3%  1.00x  ONLINE  -
# Set up the temporary storage on two 4 TB USB disks (striped together, no redundancy)
zpool create tankstore /dev/da7 /dev/da8
# Take the initial snapshot
zfs snapshot -r tank@replica1 # -r Recursively create snapshots of all descendent datasets.
zfs send -R tank@replica1 | pv | zfs receive -dF tankstore # -R replicates all descendent file systems, properties, snapshots, and clones.
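# (Sanity check: the replicated datasets and their snapshots should now be visible)
zfs list -r -t all tankstore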
# Shut down all client access to the tank, and then
zfs snapshot -r tank@replica2
zfs send -Ri tank@replica1 tank@replica2 | pv | zfs receive -dF tankstore # incremental: only the changes since replica1
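# (Optional: compare space used on both pools before destroying tank)
zfs list -o name,used,avail tank tankstore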
# Destroy the tank
zpool destroy -f tank
# Re-create the tank as raidz2 with 7 spindles
zpool create tank raidz2 /dev/da0 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5 /dev/da6
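# (Optional: confirm the raidz2 layout before copying the data back)
zpool status tank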
# Populate the new tank (tankstore already holds the replica2 snapshots from the receives above, so no new snapshot is needed)
zfs send -R tankstore@replica2 | pv | zfs receive -dF tank
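For concreteness, the buffered variant of the first copy would then become (the -s 128k block size matches the recordsize; the -m 1G buffer is just a first guess on my part):
Code:
zfs send -R tank@replica1 | mbuffer -s 128k -m 1G | zfs receive -dF tankstore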