ZFS zpool import: vdev problem, ereport.fs.zfs.vdev.no_replicas

After a power loss, the pool no longer imports.
Pool "mytape" was created on zvols backed by two other pools, "data1" and "storage":
Code:
2022-07-17.09:23:05 zfs create -V37G -o volmode=dev storage/disk0
2022-07-17.09:32:51 zfs create -V37G -o volmode=dev storage/disk1
...skip...
Code:
2022-07-17.09:32:57 zfs create -V37G -o volmode=dev data1/disk9
Ten volumes, raidz2:
zpool create mytape raidz2 /dev/zvol/storage/disk0 ...skip... /dev/zvol/data1/disk9
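
The elided steps repeat the same pattern for each volume. A minimal sh reconstruction, assuming the device paths recorded in the zdb labels further down (the history output above elides most of them):
Code:
#!/bin/sh
# Reconstruction of the elided creation steps: ten 37 GB zvols,
# split across the pools "storage" and "data1" as recorded in
# the zdb labels below.
for vol in storage/disk0 storage/disk1 storage/disk2 data1/disk3 \
           data1/disk4 data1/disk5 data1/disk6 data1/disk7 \
           storage/disk8 data1/disk9; do
        zfs create -V 37G -o volmode=dev "$vol"
done
# The raidz2 pool was then created on the /dev/zvol/* device nodes;
# the full zpool create command is quoted verbatim later in the thread.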

Code:
bsd# freebsd-version -kru ; uname -aKU
12.3-RELEASE-p7
12.3-RELEASE-p7
12.3-RELEASE-p7
FreeBSD bsd.userland 12.3-RELEASE-p7 FreeBSD 12.3-RELEASE-p7 releng/12.3-n234235-97e1662bba3 myGENERIC  amd64 1203000 1203000

Code:
bsd# zpool import -d /dev/zvol/data1/ -d /dev/zvol/storage/  mytape
cannot import 'mytape': no such pool or dataset
        Destroy and re-create the pool from
        a backup source.
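Two quick sanity checks, as a sketch: confirm the zvol device nodes actually exist after the reboot, and let zpool scan the directories without naming a pool (zpool import with only -d lists every pool it can see):
Code:
# Confirm the zvol device nodes are present...
ls -l /dev/zvol/storage/ /dev/zvol/data1/
# ...and list whatever pools are visible on those devices.
zpool import -d /dev/zvol/data1/ -d /dev/zvol/storage/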
In the log I see:
Code:
Oct  3 18:07:52 bsd ZFS[41678]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$8277030612853428934
Oct  3 18:07:52 bsd ZFS[41679]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$11096902353232530823
Oct  3 18:07:52 bsd ZFS[41680]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$7434180512471389134
Oct  3 18:07:52 bsd ZFS[41681]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$4764696657518669444
Oct  3 18:07:52 bsd ZFS[41682]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$219400737459580364
Oct  3 18:07:52 bsd ZFS[41683]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$16863038228179722503
Oct  3 18:07:52 bsd ZFS[41684]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$9228459722912190556
Oct  3 18:07:52 bsd ZFS[41685]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$10562902542940281298
Oct  3 18:07:52 bsd ZFS[41686]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$17632847707799216407
Oct  3 18:07:52 bsd ZFS[41687]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$4847822088517487670
Oct  3 18:07:52 bsd ZFS[41688]: vdev problem, zpool=$mytape path=$ type=$ereport.fs.zfs.vdev.no_replicas
Oct  3 18:07:52 bsd kernel: Oct  3 18:07:52 bsd ZFS[41688]: vdev problem, zpool=$mytape path=$ type=$ereport.fs.zfs.vdev.no_replicas
Oct  3 18:07:52 bsd ZFS[41689]: failed to load zpool $mytape
Oct  3 18:07:52 bsd kernel: Oct  3 18:07:52 bsd ZFS[41689]: failed to load zpool $mytape
Trying a forced read-only import:
Code:
bsd# zpool import -d /dev/zvol/data1/ -d /dev/zvol/storage/ -o readonly=on -f -R /mnt/tmp/ -F  mytape
cannot import 'mytape': no such pool or dataset
        Destroy and re-create the pool from
        a backup source.
In the log I see:
Code:
Oct  3 20:44:31 bsd ZFS[42617]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$8277030612853428934
Oct  3 20:44:31 bsd ZFS[42618]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$11096902353232530823
Oct  3 20:44:31 bsd ZFS[42619]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$7434180512471389134
Oct  3 20:44:31 bsd ZFS[42620]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$4764696657518669444
Oct  3 20:44:31 bsd ZFS[42621]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$219400737459580364
Oct  3 20:44:31 bsd ZFS[42622]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$16863038228179722503
Oct  3 20:44:31 bsd ZFS[42623]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$9228459722912190556
Oct  3 20:44:31 bsd ZFS[42624]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$10562902542940281298
Oct  3 20:44:31 bsd ZFS[42625]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$17632847707799216407
Oct  3 20:44:31 bsd ZFS[42626]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$4847822088517487670

zdb -l shows intact labels on all 10 volumes; for example:
Code:
bsd# zdb -l /dev/zvol/storage/disk0
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'mytape'
    state: 0
    txg: 4
    pool_guid: 7537818623386828728
    hostid: 3980147389
    hostname: 'bsd.userland'
    top_guid: 6387727796615765617
    guid: 8277030612853428934
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 6387727796615765617
        nparity: 2
        metaslab_array: 76
        metaslab_shift: 31
        ashift: 13
        asize: 397237288960
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 8277030612853428934
            path: '/dev/zvol/storage/disk0'
            whole_disk: 1
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 11096902353232530823
            path: '/dev/zvol/storage/disk1'
            whole_disk: 1
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 7434180512471389134
            path: '/dev/zvol/storage/disk2'
            whole_disk: 1
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 4764696657518669444
            path: '/dev/zvol/data1/disk3'
            whole_disk: 1
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 219400737459580364
            path: '/dev/zvol/data1/disk4'
            whole_disk: 1
            create_txg: 4
        children[5]:
            type: 'disk'
            id: 5
            guid: 16863038228179722503
            path: '/dev/zvol/data1/disk5'
            whole_disk: 1
            create_txg: 4
        children[6]:
            type: 'disk'
            id: 6
            guid: 9228459722912190556
            path: '/dev/zvol/data1/disk6'
            whole_disk: 1
            create_txg: 4
        children[7]:
            type: 'disk'
            id: 7
            guid: 10562902542940281298
            path: '/dev/zvol/data1/disk7'
            whole_disk: 1
            create_txg: 4
        children[8]:
            type: 'disk'
            id: 8
            guid: 17632847707799216407
            path: '/dev/zvol/storage/disk8'
            whole_disk: 1
            create_txg: 4
        children[9]:
            type: 'disk'
            id: 9
            guid: 4847822088517487670
            path: '/dev/zvol/data1/disk9'
            whole_disk: 1
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
------------------------------------
LABEL 1
------------------------------------
    version: 5000
... skip ...
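
Since all the labels read fine, another thing worth trying is asking zdb to open the exported pool directly (a sketch; -e tells zdb the pool is not in the cache file, and each -p adds a device search path):
Code:
# Let zdb attempt to assemble the exported pool from the zvol paths;
# its diagnostics tend to be more detailed than zpool import's.
zdb -e -p /dev/zvol/storage/ -p /dev/zvol/data1/ mytape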

How can I import the pool?
I don't have any backups 8-/
 
zpool create mytape raidz2 /dev/zvol/storage/disk0 /dev/zvol/storage/disk1 /dev/zvol/storage/disk2 /dev/zvol/storage/disk3 /dev/zvol/data1/disk4 /dev/zvol/data1/disk5 /dev/zvol/data1/disk6 /dev/zvol/data1/disk7 /dev/zvol/storage/disk8 /dev/zvol/data1/disk9
 
dd if=/dev/zvol/storage/disk0 of=vol0 bs=1M
And so on for the other vdevs; then copy the images to another host.
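
Roughly, the copy loop looked like this (a sketch; the source paths are the ones from the zdb labels, and the vol0..vol9 targets match the listing below):
Code:
#!/bin/sh
# Image every member zvol into a regular file named vol<N>,
# then transfer the files to the other host.
for vol in storage/disk0 storage/disk1 storage/disk2 data1/disk3 \
           data1/disk4 data1/disk5 data1/disk6 data1/disk7 \
           storage/disk8 data1/disk9; do
        n=${vol#*disk}                       # 0..9
        dd if=/dev/zvol/"$vol" of=vol"$n" bs=1M
done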

Code:
backups# freebsd-version -kru ; uname -aKU
12.3-RELEASE-p6
12.3-RELEASE-p6
12.3-RELEASE-p7
FreeBSD backups.userland 12.3-RELEASE-p6 FreeBSD 12.3-RELEASE-p6 GENERIC  amd64 1203000 1203000
backups# ll
total 388049845
-rw-r--r--  1 nobody  nobody  39728447488 Aug 29 15:48 vol0
-rw-r--r--  1 nobody  nobody  39728447488 Aug 29 16:34 vol1
-rw-r--r--  1 nobody  nobody  39728447488 Aug 29 19:00 vol2
-rw-r--r--  1 nobody  nobody  39728447488 Aug 29 20:12 vol3
-rw-r--r--  1 nobody  nobody  39728447488 Aug 29 21:13 vol4
-rw-r--r--  1 nobody  nobody  39728447488 Aug 30 08:48 vol5
-rw-r--r--  1 nobody  nobody  39728447488 Aug 30 09:54 vol6
-rw-r--r--  1 nobody  nobody  39728447488 Aug 30 13:41 vol7
-rw-r--r--  1 nobody  nobody  39728447488 Aug 30 16:04 vol8
-rw-r--r--  1 nobody  nobody  39728447488 Aug 30 15:08 vol9
backups# zpool import -d .  mytape
cannot import 'mytape': pool was previously in use from another system.
Last accessed by bsd.userland (hostid=ed3c3abd) at Thu Jan  1 03:00:00 1970
The pool can be imported, use 'zpool import -f' to import the pool.
backups# zpool import -d . -f mytape
cannot import 'mytape': invalid vdev configuration
In the logs:
Code:
Oct  4 12:08:04 backups ZFS[2253]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$8277030612853428934
Oct  4 12:08:04 backups ZFS[2254]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$11096902353232530823
Oct  4 12:08:04 backups ZFS[2255]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$7434180512471389134
Oct  4 12:08:04 backups ZFS[2256]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$4764696657518669444
Oct  4 12:08:04 backups ZFS[2257]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$219400737459580364
Oct  4 12:08:04 backups ZFS[2258]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$16863038228179722503
Oct  4 12:08:04 backups ZFS[2259]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$9228459722912190556
Oct  4 12:08:04 backups ZFS[2260]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$10562902542940281298
Oct  4 12:08:04 backups ZFS[2261]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$17632847707799216407
Oct  4 12:08:04 backups ZFS[2262]: vdev state changed, pool_guid=$7537818623386828728 vdev_guid=$4847822088517487670
Oct  4 12:08:04 backups ZFS[2263]: vdev problem, zpool=$mytape path=$ type=$ereport.fs.zfs.vdev.no_replicas
Oct  4 12:08:04 backups kernel: Oct  4 12:08:04 backups ZFS[2263]: vdev problem, zpool=$mytape path=$ type=$ereport.fs.zfs.vdev.no_replicas
Oct  4 12:08:04 backups ZFS[2264]: failed to load zpool $mytape
Oct  4 12:08:04 backups kernel: Oct  4 12:08:04 backups ZFS[2264]: failed to load zpool $mytape
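
zdb can also be pointed at the image files, which might say more about why the vdev configuration is considered invalid (a sketch; -p takes the directory holding the files):
Code:
# Try to assemble the exported pool from the vol0..vol9 image files.
backups# zdb -e -p . mytape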
 
I've been trying to do the same thing. I am unable to even create the pool. I think you just can't do that, since it would create nested pools. I am surprised you managed to create a pool like this...
 
Code:
bsd# zpool clear mytape
cannot open 'mytape': no such pool
bsd# zpool clear mytape /dev/zvol/storage/disk0 /dev/zvol/storage/disk1 /dev/zvol/storage/disk2 /dev/zvol/data1/disk3 /dev/zvol/data1/disk4 /dev/zvol/data1/disk5 /dev/zvol/data1/disk6 /dev/zvol/data1/disk7 /dev/zvol/storage/disk8 /dev/zvol/data1/disk9
too many arguments
usage:
        clear [-nF] <pool> [device]
bsd# zpool clear mytape /dev/zvol/storage/disk0
cannot open 'mytape': no such pool
bsd# zpool clear mytape /dev/zvol/storage/disk1
cannot open 'mytape': no such pool
bsd# zpool clear mytape /dev/zvol/storage/disk2
cannot open 'mytape': no such pool
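
For what it's worth, zpool clear only operates on a pool that is already imported, so these attempts cannot work while the import itself fails; the required order would be (a sketch):
Code:
# zpool clear needs an imported pool, so the import has to succeed first.
zpool import -d /dev/zvol/data1/ -d /dev/zvol/storage/ mytape && \
    zpool clear mytape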

Quote:
I've been trying to do the same thing. I am unable to even create the pool. I think you just can't do that, since it would create nested pools. I am surprised you managed to create a pool like this...
OK, but see post #4: I moved everything to another host, where all the devices are regular files, and the import still fails.
 