I have a ZFS pool consisting of two drives in a mirror VDEV. I'm replacing one of them with a larger drive, so I first attach the third (new) drive to the VDEV and let it resilver before detaching the smaller drive.
However, when I partitioned the new drive the same way I partitioned my existing drives, I noticed something different: it leaves "- free -" space at the very beginning of the address space.
Here are the commands I ran on the new drive (the same ones I used on my other two drives):
Code:
# CREATE GPT PARTITION SCHEME:
gpart create -s gpt ada2
gpart add -t freebsd-boot -a 1m -b 40 -s 512k ada2
gpart add -t freebsd-swap -a 1m -s 2G ada2
gpart add -t freebsd-zfs -a 1m ada2
# WRITES BOOT CODE TO MBR AND TO BOOT PARTITION:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
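For what it's worth, here is the workaround I'm considering but have NOT actually run: drop the 1 MiB alignment and pin every start sector explicitly with `-b`, using the offsets from the `gpart show` output of the existing drives. I'm not sure whether this is advisable, which is part of why I'm asking.

```
# UNTESTED sketch: pin start sectors explicitly instead of using -a 1m,
# so the new drive matches the old drives' offsets (40 / 2048 / 4196352)
gpart create -s gpt ada2
gpart add -t freebsd-boot -b 40 -s 512k ada2        # start at sector 40, no 1 MiB rounding
gpart add -t freebsd-swap -b 2048 -s 2G ada2        # same start as ada0p2 / ada1p2
gpart add -t freebsd-zfs -b 4196352 ada2            # same start as ada0p3 / ada1p3
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
```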
Here's the result of gpart show. Notice how sectors 40 through 2047 (a 2008-sector "- free -" run) are left blank before ada2p1, which is not the case for the other two drives. Also notice that the three partitions on ada0 and ada1 have matching start addresses, which is how I'd like the new ada2 partitioned as well.
Code:
~ gpart show
=>          40  11721045088  ada0  GPT  (5.5T)
            40         1024     1  freebsd-boot  (512K)
          1064          984        - free -  (492K)
          2048      4194304     2  freebsd-swap  (2.0G)
       4196352  11716847616     3  freebsd-zfs  (5.5T)
   11721043968         1160        - free -  (580K)

=>          40   1953525088  ada1  GPT  (932G)
            40         1024     1  freebsd-boot  (512K)
          1064          984        - free -  (492K)
          2048      4194304     2  freebsd-swap  (2.0G)
       4196352   1949327360     3  freebsd-zfs  (930G)
    1953523712         1416        - free -  (708K)

=>          40   7814037088  ada2  GPT  (3.6T)
            40         2008        - free -  (1.0M)
          2048         1024     1  freebsd-boot  (512K)
          3072         1024        - free -  (512K)
          4096      4194304     2  freebsd-swap  (2.0G)
       4198400   7809837056     3  freebsd-zfs  (3.6T)
    7814035456         1672        - free -  (836K)
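Doing the math on that gap (my own back-of-the-envelope check, assuming 512-byte sectors), the free run plus the original start offset adds up to exactly 1 MiB, so ada2p1 ends up 1 MiB-aligned:

```shell
# The "- free -" run before ada2p1 is 2008 sectors starting at sector 40
echo $(( 2008 * 512 ))   # size of the gap in bytes
echo $(( 2048 * 512 ))   # byte offset where ada2p1 starts: exactly 1 MiB
```
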
My question is: is this normal? Why does it happen? Does it affect the ZFS pool's performance, or is the new drive considered "misaligned" relative to the other two?
Another question: is there a ZFS command that automatically partitions a new drive to mirror the layout of the existing drives in a mirror VDEV, so that I don't need to run gpart on the new drive myself?