Solved Creating a mirror zpool, good practice

Hello,

Quick question about creating a ZFS mirror pool on the two new disks I just received.
I'm asking because I have read opposing opinions:

I just got two WD Ultrastar 8 TB disks.
Should I use the whole disks and create my pool with:
zpool create mypool mirror /dev/ada1 /dev/ada2

Or should I create a GPT partition table on each disk, with a ZFS slice on each, and then create a pool with those two ZFS slices?

In case of failure, I don't plan to replace those disks with anything other than the same model.

I tend to think that giving ZFS the whole disks would be simpler.

Thanks.
 
There are indeed different opinions.

Using the full disks is simpler (well, only very slightly), but as you may guess there is at least one drawback.
Most software (disk-related software, I mean) does not understand ZFS, but everything knows what a GPT partition is.
Creating a partition (and not a slice; mind the words you use) is a way to protect your pool from other software poking at the disk.
 
In case of failure, I don't plan to replace those disks with anything other than the same model.
The WD Ultrastar 8TB has a 5-year limited warranty. Whether your disks outlast the warranty or fail while it is still running, you can't be sure the same model will still be available, or, even if it is, that it will be exactly the same size.

A replacement that is even slightly smaller will be refused by the pool, unless forced.

IMO, the best practice is to create a GPT partition (a little smaller than the raw disk) so that a replacement partition is guaranteed to be at least the same size. If it is larger, that's not a problem.
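For example, something like this (just a sketch; the device name and size are placeholders, the point is only to leave a couple of GB unused at the end of each disk):

sh:
# hypothetical example: one ZFS partition slightly smaller than the raw disk,
# so any future replacement partition is guaranteed to fit
gpart create -s gpt da0
gpart add -t freebsd-zfs -a 1m -s 7450g da0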
 
So, in case of failure of one disk, if I replace it with a 12 TB one, it will be fine even if I use the whole disks.
But then, if I also replace the other 8 TB disk with a 12 TB one, will I be able to expand the pool to use the full 12 TB?
 
The size of a mirror is always the size of the smallest partition, so:
8T + 8T = 8T, your start situation
8T + 12T = 8T, when you replace one with a 12T drive
12T + 12T = 12T , when you replace the second 8T with another 12T
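The upgrade itself is then just one zpool replace per disk (a sketch below; the pool and device names are placeholders), and with the autoexpand property set the pool grows on its own once both members are bigger:

sh:
# let the pool grow automatically once every member of the mirror is larger
zpool set autoexpand=on storage
# swap the first 8T member for the new 12T disk's partition and let it resilver
zpool replace storage da0p1 da2p1
# then do the same with the second 8T member; the mirror then expands to 12T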

I'd recommend creating one GPT partition on each drive over the full size, with partition labels, and then creating the pool on top of the labels, not on the physical partitions or the raw drives.
This is a bit more complicated (ten minutes of reading and two extra shell commands to create the two partitions), but it pays off when you need to replace or add drives!
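In practice it is roughly this (a sketch; adjust the device and label names, and add -s if you want to leave some slack as mentioned above):

sh:
# one GPT-labelled ZFS partition per drive
gpart create -s gpt da0
gpart create -s gpt da1
gpart add -t freebsd-zfs -a 1m -l storage0 da0
gpart add -t freebsd-zfs -a 1m -l storage1 da1
# build the mirror on the labels, not on da0p1/da1p1 or the raw disks
zpool create storage mirror gpt/storage0 gpt/storage1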
 
I learned the hard way that GPT and labels are the way to go.
Another benefit you get is that you can clearly label the physical disk to match your GPT label so it's easy to identify the disk(s).
 
DrC3P1, you didn't say so explicitly, but it looks like you do not plan to boot from those disks?
Yep, I do not plan to boot from those disks, it's only storage.
I got them this morning and did this:
Code:
root@nuc:/storage/data # gpart show da0 && gpart show da1
=>         40  15628053088  da0  GPT  (7.3T)
           40  15623782400    1  freebsd-zfs  (7.3T)
  15623782440      4270688       - free -  (2.0G)

=>         40  15628053088  da1  GPT  (7.3T)
           40  15623782400    1  freebsd-zfs  (7.3T)
  15623782440      4270688       - free -  (2.0G)
I left 2 GB free at the end.

Then I created the pool:
Code:
root@nuc:/storage/data # zpool list && zpool status storage
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
storage  7.27T  39.6G  7.23T        -         -     0%     0%  1.00x    ONLINE  -
znuc     1.81T  8.91G  1.80T        -         -     0%     0%  1.00x    ONLINE  -
  pool: storage
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da0p1   ONLINE       0     0     0
            da1p1   ONLINE       0     0     0

errors: No known data errors

Everything looks good :)
 
You didn't use GPT labels, though.
The "recommended" way is to assign a label to each partition and then use the labels when creating the zpool.
 
Erf, I missed that point.
Better?
Code:
root@nuc:/zstorage # glabel status | grep storage
gpt/storage0     N/A  da0p1
gpt/storage1     N/A  da1p1
Code:
root@nuc:/zstorage # zpool list && zpool status zstorage && zfs get mountpoint | grep zstorage
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
znuc      1.81T  8.91G  1.80T        -         -     0%     0%  1.00x    ONLINE  -
zstorage  7.27T   880K  7.27T        -         -     0%     0%  1.00x    ONLINE  -
  pool: zstorage
 state: ONLINE
config:

        NAME              STATE     READ WRITE CKSUM
        zstorage          ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            gpt/storage0  ONLINE       0     0     0
            gpt/storage1  ONLINE       0     0     0

errors: No known data errors
zstorage                                     mountpoint  /zstorage                                  default
zstorage/data                                mountpoint  /zstorage/data                             default
 
A replacement that is even slightly smaller will be refused by the pool, unless forced.
I've read in the MWL books that using the force flag on ZFS commands generally tends to be bad :<
Maybe this is the exception, if your pools/datasets are set up to reserve 20% of the space because of some kind of performance issue that happens when the pool gets full. I've never understood what that is about.
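For what it's worth, one common way to keep such a reserve is an empty dataset with a reservation (a sketch; the dataset name and size are placeholders):

sh:
# park roughly 20% of the pool in an empty dataset so it can never fill up completely
zfs create -o refreservation=1.4T zstorage/reserved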
Erf, I missed that point.
Better?
Haha don't worry, we are all learning here :)
It looks better now :) Keep encryption options in mind for your dataset; it never hurts.
There are a lot of options, but a simple way is:

sh:
dd if=/dev/urandom bs=32 count=1 of=/key/location
zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///key/location zroot/encrypted
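Note that with keylocation pointing at a file, the key has to be loaded again after a reboot before the dataset can be mounted; done by hand that is:

sh:
# load the key from the keylocation, then mount the encrypted dataset
zfs load-key zroot/encrypted
zfs mount zroot/encrypted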
 