Hi,
Currently I have a zpool that does not work anymore. The name of the pool is zroot. The VM was running FreeBSD 14.1 and got into a faulty state when the host hung. Since "ZFS is always better than no ZFS" (Dan Langille), I would like to learn from this broken state.
The gpart show command shows disk ada0 with partition 1 as boot, partition 2 as swap, and partition 3 as ZFS.
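For context, the layout looks roughly like this (the sizes here are illustrative, not my real ones):

=>       40  41942960  ada0  GPT  (20G)
         40      1024     1  freebsd-boot  (512K)
       1064   4194304     2  freebsd-swap  (2.0G)
    4195368  37747632     3  freebsd-zfs  (18G)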
Everything is in one single pool named zroot, no redundancy, but I have backups. I have lost some data, but that's my own problem; I should have taken backups more often or used another setup.
Partition 3, the ZFS one, decrypts without problems with geli.
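For completeness, I attach it the usual way (passphrase-only setup assumed; geli prompts for the passphrase and creates the .eli device):

geli attach ada0p3
ls /dev/ada0p3.eli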
zpool import shows:
- ZFS filesystem version: 5
- ZFS storage pool version: features support (5000)
zpool import zroot prints "failed to load zpool zroot" four times, then:
cannot import 'zroot': I/O error
Destroy and re-create the pool from a backup source.
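One step I have not tried yet but that is on my list (flags from zpool-import(8)) is a read-only import, which as far as I understand also skips log replay:

zpool import -f -o readonly=on -R /mnt zroot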
zdb -e zroot
It prints a lot of data on the screen; when it arrives at the dataset zroot/var/log, it shows the ZIL header, objects 0, -1, -2, -3, then three lines of Dnode slot data, then dmu_object_next() = 97,
and then on the next line: Abort trap
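If I read zdb(8) correctly, the -A/-AA/-AAA flags relax assertions and enable panic recovery, so something like this might let zdb continue past the abort (untested on my side):

zdb -e -AAA zroot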
When running zdb -ul /dev/ada0p3.eli, uberblocks 0 through 30 are shown.
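To pick a transaction group to rewind to, I filter that list down to the txg and timestamp lines:

zdb -ul /dev/ada0p3.eli | grep -E 'Uberblock|txg|timestamp'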
When doing zpool import -F -T 3647666 -o zroot (the -o was actually a typo, since I did not pass any parameter for it), I do get an explanation:
state: FAULTED
status: The pool metadata is corrupted
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system but can be imported using the '-f' flag.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
zpool import -f -T 3647666 zroot
cannot import zroot: one or more devices is currently unavailable
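What I would try next, unless advised otherwise, is combining the txg rewind with a read-only import and an alternate root (syntax per zpool-import(8); the txg value comes from my uberblock list above):

zpool import -f -T 3647666 -o readonly=on -R /mnt zroot

and, if that fails, repeating with progressively older txg values from the uberblock list.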
1. Could it have helped if I had created a checkpoint somewhere in the past? (See the sketch after these questions.)
2. Could a separate ZIL device (SLOG) have helped? Or should I rather have used redundancy like a mirror or raidz?
3. zdb crashes on zroot/var/log, a directory that gets written a lot, certainly when something goes wrong. I often compare ZFS with how databases such as Oracle work. So, in this case, could ZFS not just roll back to a previous transaction where the zpool was still OK? Why is everything now faulted or broken?
4. I have the impression that something went wrong somewhere, but most of the data is still on that disk. Can absolutely nothing be recovered from it? I may be wrong, but it seems that because a tiny spot went bad, everything is gone. Perhaps some datasets in the zpool are still fine; why can I not export those?
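Regarding question 1, this is what I mean by a checkpoint (OpenZFS syntax as I understand it; I never actually ran this on the pool):

zpool checkpoint zroot                     # taken while the pool was still healthy
zpool export zroot
zpool import --rewind-to-checkpoint zroot  # discard everything after the checkpoint
zpool checkpoint -d zroot                  # drop the checkpoint once satisfied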
I did not try the -X option yet; I am waiting for advice from this forum.
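For clarity, the -X attempt I am holding off on would look like this (from what I have read, -X turns -F into an extreme rewind that can run for a very long time, so I would rather hear advice first):

zpool import -fFX zroot

possibly combined with the read-only options above, if that is compatible.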
Thanks in advance for your help.