[Solved] Conflicting zpool data

Last week I hit a major problem after upgrading packages to the latest quarterly branch. I couldn't resolve it even by reverting to an earlier boot environment, so I decided to restore the entire system from my backups.

The zfs pool for my system is normally a mirror across two drives. As a precaution against the recovery going wrong, I detached one of the drives from the mirror so that I would still have something to fall back on if the restore failed.
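For reference, splitting one drive out of the mirror is a one-liner; from memory it was something like this (device name as in my setup, so treat it as illustrative rather than exact):
Code:
# zpool detach wdssd gpt/wdsys1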

The recovery went OK and I now have the following setup:
Code:
curlew:/home/mike% zpool status wdssd
  pool: wdssd
 state: ONLINE
config:
 
        NAME          STATE     READ WRITE CKSUM
        wdssd         ONLINE       0     0     0
          gpt/wdsys2  ONLINE       0     0     0
 
errors: No known data errors

curlew:/home/mike% gpart show -l ada0 ada1
=>        40  1953525088  ada0  GPT  (932G)
          40        1024     1  wdboot1  (512K)
        1064        3032     2  wdefi1  (1.5M)
        4096  1953521032     3  wdsys1  (932G)

=>        40  1953525088  ada1  GPT  (932G)
          40        1024     1  wdboot2  (512K)
        1064        3032     2  wdefi2  (1.5M)
        4096  1953521032     3  wdsys2  (932G)
I'm now ready to attach the other drive (gpt/wdsys1) to the pool, but I've noticed that there appears to be a stray version of the pool still containing the detached drive:
Code:
curlew:/root# zpool import
   pool: wdssd
     id: 8573686173365716461
  state: DEGRADED
status: One or more devices contains corrupted data.
 action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
 config:

        wdssd           DEGRADED
          mirror-0      DEGRADED
            gpt/wdsys1  ONLINE
            gpt/wdsys2  FAULTED  corrupted data
I assume that I somehow need to get rid of this conflicting data before attempting to attach this drive. Is it just a case of zeroing the first and last blocks of the drive with dd, or is there more to it than that?
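To be concrete, this is roughly what I had in mind, though I haven't run it yet (an untested sketch, run as root from a Bourne-type shell; diskinfo(8) reports the size in sectors as its fourth field):
Code:
# zero the first 1 MB of the old partition
dd if=/dev/zero of=/dev/gpt/wdsys1 bs=1m count=1
# then zero the last 1 MB of the same partition
secs=$(diskinfo /dev/gpt/wdsys1 | awk '{print $4}')
dd if=/dev/zero of=/dev/gpt/wdsys1 bs=512 oseek=$(( secs - 2048 ))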
 
I've done a bit more digging. When I boot into the live CD environment of an installation memstick, I see two different wdssd pools:
Code:
   pool: wdssd
     id: 12297606765705698791
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        wdssd         ONLINE
          gpt/wdsys2  ONLINE

   pool: wdssd
     id: 8573686173365716461
  state: DEGRADED
status: One or more devices contains corrupted data.
 action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
 config:

        wdssd           DEGRADED
          mirror-0      DEGRADED
            gpt/wdsys1  ONLINE
            gpt/wdsys2  FAULTED  corrupted data
So it looks like I somehow need to remove the one with id 8573686173365716461.
 
You should consider which pool contains the actual data. Then you should destroy the unnecessary pool and attach the disk to the actual one.
 
You should consider which pool contains the actual data. Then you should destroy the unnecessary pool and attach the disk to the actual one.
I was concerned that both pools have the same name. Will it work if I specify the ID instead of the name? E.g. would zpool destroy 8573686173365716461 get rid of the degraded pool and leave the working one (12297606765705698791) intact?
 
Have a look at the output of
zpool list -v
&
zpool status -x
&
zpool status

Then detach & destroy the zpool by id number.
Then you can attach again by name.
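You can also double-check which id the currently imported pool has before destroying anything; the pool's guid property is the same number that zpool import shows:
Code:
# zpool get guid wdssd
It should report 12297606765705698791 (the healthy single-disk pool), not the stale 8573686173365716461.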
 
I was concerned that both pools have the same name. Will it work if I specify the ID instead of the name? E.g. would zpool destroy 8573686173365716461 get rid of the degraded pool and leave the working one (12297606765705698791) intact?
No.
You should import the pool and then destroy it:
Code:
# zpool import <ID> <tmp_name> && zpool destroy <tmp_name>
After that, you can attach the released device from the destroyed pool to the actual pool:
Code:
# zpool attach <old_dev> <new_dev> <pool_name>
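In your case that would be something along the lines of (tmpwdssd is just a throwaway name):
Code:
# zpool import 8573686173365716461 tmpwdssd && zpool destroy tmpwdssd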
 
No.
You should import the pool and then destroy it:
Code:
# zpool import <ID> <tmp_name> && zpool destroy <tmp_name>
After that, you can attach the released device from the destroyed pool to the actual pool:
Code:
# zpool attach <old_dev> <new_dev> <pool_name>
Thanks, that fixed it and everything's fine now.

But there was a typo in your example for attaching: the parameters were in the wrong order. It should be:
Code:
# zpool attach <pool_name> <old_dev> <new_dev>
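For the record, what I ran after destroying the stale pool was roughly this, then I waited for the resilver to finish:
Code:
# zpool attach wdssd gpt/wdsys2 gpt/wdsys1
# zpool status wdssd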
 