Interesting ZFS behavior after 10.3 upgrade

After moving from 10.1 to 10.3 a couple of days ago, I noticed the following oddities when looking at zpool status:

Spare drive labels no longer appear; instead, the spares are listed by a diskid containing their serial numbers:

Code:
    spares
     diskid/DISK-PK2301PBJDDW5T  AVAIL
     diskid/DISK-PK1334PBHT7VAX  AVAIL
     diskid/DISK-PK2301PCHLHA5B  AVAIL

My pool used for backup storage now complains that the block size is smaller than native (which is true: I selected 512-byte blocks instead of 4K, since this is a backup server with tons of tiny files), but I'm confused as to why it's complaining now. Was ZFS changed between 10.1 and 10.3 so that warning about non-native block sizes is now the default behavior?

Code:
state: ONLINE
status: One or more devices are configured to use a non-native block size.
    Expect reduced performance.
action: Replace affected devices with devices that support the
    configured block size, or migrate data to a properly configured
    pool.

config:

    NAME                          STATE     READ WRITE CKSUM
    tank                          ONLINE       0     0     0
     raidz3-0                    ONLINE       0     0     0
       label/1-1                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-2                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-3                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-4                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-5                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-6                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-7                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-8                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-9                 ONLINE       0     0     0  block size: 512B configured, 4096B native
     raidz3-1                    ONLINE       0     0     0
       label/2-1                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/2-2                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/2-3                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/2-4                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/2-5                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/2-6                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/2-7                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/2-8                 ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/2-9                 ONLINE       0     0     0  block size: 512B configured, 4096B native
     raidz3-2                    ONLINE       0     0     0
       label/1-10                ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-11                ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-12                ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-13                ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-14                ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/1-15                ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/2-11                ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/2-12                ONLINE       0     0     0  block size: 512B configured, 4096B native
       label/2-13                ONLINE       0     0     0  block size: 512B configured, 4096B native
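
For anyone who wants to double-check where this comes from, something like the following should confirm the configured block size (the vdev ashift); tank is the pool above, and the -U form is only needed if zdb can't find the pool configuration on its own:

Code:
# Show the configured block size for each top-level vdev; "ashift: 9" means 2^9 = 512B.
zdb -C tank | grep ashift
# If zdb can't locate the pool config, point it at FreeBSD's cache file:
zdb -U /boot/zfs/zpool.cache -C tank | grep ashift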

I do understand there is a performance impact, but since this is really long-term storage with tons of tiny files and there's rarely, if ever, an I/O bottleneck, I consider it an acceptable trade-off. It would be nice if there were a sysctl variable or something to suppress this "warning", since the 512B size is expected.
 
I consider it a complaint because zpool status now reports there is an action to be taken:

Code:
status: One or more devices are configured to use a non-native block size.
    Expect reduced performance.
action: Replace affected devices with devices that support the
    configured block size, or migrate data to a properly configured
    pool.

Also, I'd contend this pool is configured properly for my specific scenario, which makes the suggested action invalid. Is the pool still healthy? Yes. But it's off-putting to see what look like new "error" messages appear after an upgrade when nothing else had changed. It was also puzzling at first that the spare labels had disappeared/changed, though that didn't actually impact anything.

It would be helpful if there were a way to suppress this specific status/action text in zpool status when non-native block sizes have been set on purpose, and preferably the per-disk block size note to the right of each device as well.
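
In the meantime, a crude workaround (assuming the goal is only to keep scripted checks quiet rather than to change anything in ZFS) would be to filter the output, for example:

Code:
# Hide the per-disk block size annotations; this only filters the display,
# nothing in ZFS itself changes.
zpool status tank | grep -v 'block size:'
# The status:/action: paragraphs would need extra patterns (or a small sed/awk
# filter) if those should be hidden from monitoring output as well.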

(I'd like to point out that this was an in-place production upgrade from 10.1 to 10.3, as that wasn't clear in my previous post.)
 
I'm curious here - what underlying behaviour did you expect as a result of setting the small block size? ZFS is rightly complaining that even though you use 512B blocks, it can presumably still only write 4KiB blocks to the underlying devices, and is therefore generating additional load and extra writes.
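
If it helps, the "native" size the warning refers to should be visible with diskinfo; the device name below is just a placeholder for one of your pool members:

Code:
# "sectorsize" is the logical sector size (512 here) and "stripesize" is what the
# drive reports as its physical/native sector, presumably 4096 for these disks.
diskinfo -v /dev/da0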

Is there an advantage, e.g. at the capacity level, that you gain from the smaller block size? Would using the embedded_data zpool feature give you the same space advantage while still using the larger block size?
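
For reference, checking and enabling that feature would look roughly like the following (I don't know which feature flags your pool already has active); note that embedded_data only helps for blocks that compress down to about 112 bytes or less:

Code:
# See whether the feature is enabled/active on the pool:
zpool get feature@embedded_data tank
# Enable it if not (pool features can be enabled but never disabled afterwards):
zpool set feature@embedded_data=enabled tank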
 
Yes, the ZFS code was updated to detect this mismatch in block sizes and to warn admins about it, so that they can either ignore the message and carry on, or fix the underlying issue.

Unfortunately, there's no way to "disable" or "remove" the warning message; it's something you either need to accept and ignore, or fix.
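
If "fix" is ever the route you take, it means recreating the vdevs, because ashift is chosen per vdev at creation time and cannot be changed afterwards. On FreeBSD that would look roughly like the sketch below; the pool and device names are placeholders:

Code:
# Make newly created vdevs use at least 4KiB (2^12) blocks:
sysctl vfs.zfs.min_auto_ashift=12
# Then build a replacement pool (placeholder names) and migrate the data onto it:
zpool create newtank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8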
 