Hello,
I'm running a server with a raidz pool of 4 SAS drives plus one NVMe SSD (SAMSUNG MZVLB512HAJQ). The NVMe drive is split into 3 parts:
# zpool status
  pool: nvme
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        nvme        ONLINE       0     0     0
        ...
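For context on the three-way split, here is a rough sketch of how such a layout can be created with gpart on FreeBSD; the device name, sizes, and labels below are illustrative only, not necessarily the ones actually in use here:

# Illustrative only: one way to carve an NVMe disk into three GPT
# partitions on FreeBSD (nvd0 is the usual NVMe disk device on 10.x);
# the sizes and labels are hypothetical.
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -l nvme-part1 -s 200g nvd0
gpart add -t freebsd-zfs -l nvme-part2 -s 200g nvd0
gpart add -t freebsd-zfs -l nvme-part3 nvd0          # remainder of the disk
gpart show -l nvd0                                   # verify the resulting layout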
The actual capacity of my filesystem is smaller than what I expected/calculated, and I'd like to better understand why. I suspect this may have something to do with some facet of ZFS like ashift or recordsize that I'm forgetting to account for. Please note that I do not believe this is related...
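For reference, this is roughly how the numbers can be cross-checked from the shell; it's a sketch from memory, and the exact zdb/zfs invocations and property names may differ slightly between releases:

# Pool-level numbers (FREE here still includes raidz parity overhead):
zpool list -o name,size,allocated,free,capacity

# Dataset-level numbers (what is actually available to filesystems):
zfs list -o space

# Settings that commonly explain a gap versus a hand calculation:
zfs get recordsize,compression,copies nvme
zdb -C nvme | grep ashift        # 9 = 512-byte sectors, 12 = 4 KiB sectors

# Things to remember when comparing against the expected figure:
#  - raidz1 gives up roughly one disk per stripe to parity (about 3/4
#    usable on a 4-disk vdev), plus allocation padding that grows with
#    ashift=12 and small block sizes;
#  - vendors quote decimal TB, while zfs reports binary TiB (1 TB ~= 0.91 TiB);
#  - roughly 1/32 of the pool is held back as "slop" and never shows
#    up as available space.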
After moving from 10.1 to 10.3 a couple of days ago, I noticed the following oddity when looking at zpool status:
Spare drive labels no longer appear; instead the drives show up as a diskid containing their serial number:
        spares
          diskid/DISK-PK2301PBJDDW5T    AVAIL
          diskid/DISK-PK1334PBHT7VAX    AVAIL
        ...
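My guess, which I haven't confirmed, is that the GEOM disk_ident labels are now taking precedence over the old device names after the upgrade. A sketch of the knobs I believe control this, in case it helps:

# Current state of the GEOM label classes that generate diskid names:
sysctl kern.geom.label.disk_ident.enable
sysctl kern.geom.label.gptid.enable

# List every label provider GEOM currently exposes:
glabel status

# If the diskid names are unwanted, they can be disabled via loader
# tunables (takes effect after a reboot), e.g. in /boot/loader.conf:
#   kern.geom.label.disk_ident.enable="0"
#   kern.geom.label.gptid.enable="0"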