Last week I ran into a major problem after upgrading packages to the latest quarterly. I couldn't resolve it even by reverting to an earlier boot environment, so I decided to restore the entire system from my backups.
The zfs pool for my system is normally a mirror with two drives. As a precaution against the recovery going wrong, I detached one of the drives from the mirror so that I would still have something to fall back on if the recovery failed.
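For reference, the detach was done with something along these lines (quoting from memory):
Code:
# drop gpt/wdsys1 out of the mirror, leaving gpt/wdsys2 as the live copy
zpool detach wdssd gpt/wdsys1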
The recovery went OK and I now have the following setup:
Code:
curlew:/home/mike% zpool status wdssd
  pool: wdssd
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        wdssd         ONLINE       0     0     0
          gpt/wdsys2  ONLINE       0     0     0

errors: No known data errors
curlew:/home/mike% gpart show -l ada0 ada1
=>        40  1953525088  ada0  GPT  (932G)
          40        1024     1  wdboot1  (512K)
        1064        3032     2  wdefi1   (1.5M)
        4096  1953521032     3  wdsys1   (932G)

=>        40  1953525088  ada1  GPT  (932G)
          40        1024     1  wdboot2  (512K)
        1064        3032     2  wdefi2   (1.5M)
        4096  1953521032     3  wdsys2   (932G)
I'm now ready to attach the other drive (gpt/wdsys1) to the pool, but I've noticed that there appears to be a stray version of the pool containing the detached drive:
Code:
curlew:/root# zpool import
   pool: wdssd
     id: 8573686173365716461
  state: DEGRADED
status: One or more devices contains corrupted data.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
config:

        wdssd           DEGRADED
          mirror-0      DEGRADED
            gpt/wdsys1  ONLINE
            gpt/wdsys2  FAULTED  corrupted data
I assume that somehow I need to get rid of this conflicting data before attempting to attach the drive. Is it just a case of zeroing the first and last blocks of the drive with dd, or is there more to it than that?
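That is, something along these lines (just a sketch; the oseek figure is my own arithmetic for the last 1 MiB of the 1953521032-sector partition, so it wants double-checking):
Code:
# zero the first 1 MiB of the old partition (where ZFS keeps labels L0/L1)
dd if=/dev/zero of=/dev/gpt/wdsys1 bs=1m count=1
# zero the last 1 MiB (labels L2/L3 sit at the end of the partition);
# 1953518984 = 1953521032 total sectors - 2048 sectors (1 MiB)
dd if=/dev/zero of=/dev/gpt/wdsys1 bs=512 oseek=1953518984 count=2048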
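And once the stale labels are gone, I'm assuming the re-attach itself is just:
Code:
# mirror gpt/wdsys1 back onto the existing single-disk vdev gpt/wdsys2
zpool attach wdssd gpt/wdsys2 gpt/wdsys1
which should then resilver onto gpt/wdsys1.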