Hard Drive Super Block Failed

Hello,

First off, sorry if the formatting of this post is not quite right; I want to get it posted now and clean it up later.

I have managed to get myself into a situation where it looks like my hard drive superblock has become corrupted somehow and now some of my files are not showing up.

As of right now, I'm pulling off everything I currently can, so that I at least salvage something if the drive turns out to be unrecoverable.

I've run some commands to see if I can locate the app.py file that I'm looking for, and it does show up in the results.

"locate app.py"
This command does bring up app.py in the correct directory, but when I "ls" that directory, the file does not show up.

Running, "fsck /dev/ada0" brings up:
Code:
Attempted recovery for standard superblock: failed
Attempted extraction of recovery data from standard superblock: failed
Attempt to find boot zone recovery data.
Finding an alternate superblock failed.
Check for only non-critical errors in standard superblock
Failed, superblock has critical errors
SEARCH FOR ALTERNATE SUPER-BLOCK FAILED. YOU MUST USE THE
-b OPTION TO FSCK TO SPECIFY THE LOCATION OF AN ALTERNATE
SUPER-BLOCK TO SUPPLY NEEDED INFORMATION; SEE fsck_ffs(8).

I've never been in this situation before, so if you have other diagnostic commands or advice on how to proceed, please let me know!

Hopefully this is not an "always keep a backup" lesson 😟
 
/dev/ada0 typically doesn't contain a filesystem; the partitions within the disk do. What does gpart show ada0 show?
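
If you want to double-check what is actually on a partition before pointing any repair tool at it, fstyp(8) can identify the filesystem; a minimal check, assuming the data lives on the third partition, would be:
Code:
gpart show ada0
fstyp /dev/ada0p3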
 
Ah, I see; my "problem" is with ada0, but I'll show both of them.

gpart show outputs the two devices I have connected:
Code:
=>        40  1953525088  ada0  GPT  (932G)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  1949327360     3  freebsd-zfs  (930G)
  1953523712        1416        - free -  (708K)

=>       40  234441568  ada1  GPT  (112G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  freebsd-swap  (2.0G)
    4196352  230244352     3  freebsd-zfs  (110G)
  234440704        904        - free -  (452K)

Also, after looking around yesterday, I figured out that fsck does not work with ZFS... which is probably why it looked like everything had gone wrong.
If I'm not mistaken, it seems like I need to use zpool instead.

zpool list shows
Code:
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  1.01T  2.47G  1.01T        -         -     0%     0%  1.00x    ONLINE  -

zpool status shows
Code:
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:00:11 with 0 errors on Sat Feb 22 16:01:14 2025
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada1p3    ONLINE       0     0     0
          ada0p3    ONLINE       0     0     0

errors: No known data errors
Nothing here jumps out to me as out of the ordinary.

As you can see, I performed a scrub yesterday; it took basically no time and found 0 bytes to repair, which I'm assuming is a good sign. I also took the time yesterday to copy the important stuff off that drive onto another one, and while looking through my directories, only one file, which happened to be one of my main application files (of course!), was gone. I got it back with a recovery tool, so I ended up not losing anything.
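
For reference, the scrub itself was nothing fancy, roughly this:
Code:
zpool scrub zroot
zpool status zroot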

I was accessing my server over sshfs from my laptop, on a local connection, making edits directly to files and using a separate ssh session mostly just to start and stop services. I'm a bit puzzled about how this file disappeared; it could have happened two ways: I deleted it through my sshfs mount point, or I rm'd it over ssh. I don't remember doing either.

Looking at the history of my sh shell, I don't see anything; cat ~/.sh_history | grep rm returns nothing, and the same goes for root's history (which I was not logged in as in either session).
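
For completeness, the history check was along these lines (assuming the default sh history file locations):
Code:
grep rm ~/.sh_history
grep rm /root/.sh_history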

Thanks!
 
I've run some commands to see if I can locate the app.py file that I'm looking for, and it does show up in the results.

"locate app.py"
This command does bring up app.py in the correct directory, but when I "ls" that directory, the file does not show up.
Don't rely on locate(1) to find files. The utility showing the file doesn't mean it is present on the file system. A locate query reads results from a database built by earlier periodic scans, not from a real-time search of the file system.
Code:
DESCRIPTION
     The locate program searches a database for all pathnames which match the
     specified pattern.  The database is recomputed periodically (usually
     weekly or daily), and contains the pathnames of all files which are
     publicly accessible.


BUGS
     The locate program may fail to list some files that are present, or may
     list files that have been removed from the system.  This is because
     locate only reports files that are present in the database, which is
     typically only regenerated once a week by the
     /etc/periodic/weekly/310.locate script.  Use find(1) to locate files that
     are of a more transitory nature.
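
For a real-time answer, find(1) walks the live file system instead; something like this (the search path is just illustrative):
Code:
# search the whole tree; narrow the starting directory if you know roughly where the file was
find / -name app.py -type f 2>/dev/null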

I got it back with a recovery tool, so I ended up not losing anything.
Could you share which tool this was?

As for the issue: if there are snapshots of the dataset, perhaps zfs-diff(8) would give some clues as to when and what happened.
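
For example, if a snapshot of the home dataset existed, the comparison would look something like this (the dataset and snapshot names here are only illustrative):
Code:
zfs list -t snapshot
zfs diff zroot/usr/home@some-snapshot zroot/usr/home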
 
Ah okay, thanks for the info on locate.

I used a tool on a different machine (different OS) that I moved the drive over to. I wanted to make sure I grabbed the file before messing with the drive in single-user mode.

Typing in the command zfs list -t snapshot outputs no datasets; I have never set up any snapshots for ZFS. I'm going to look into snapshots and probably set them up, as it sounds like they could be useful.
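
From what I've read so far, taking one manually looks as simple as something like this (the snapshot name is just what I plan to use):
Code:
zfs snapshot -r zroot@manual-backup
zfs list -t snapshot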

I have everything back up and running right now, and it looks like the drive is fine. I'm not sure if there is anything else I can check; otherwise I guess it was an accidental deletion when I had the directory open in VS Code over sshfs.
 
I'm afraid there's nothing to recover:

Code:
zroot 1.01T 2.47G 1.01T

The pool is almost empty.
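
To see where that 2.47G actually lives, per-dataset usage will show it:
Code:
zfs list -o name,used,avail,refer,mountpoint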

Nothing here jumps out to me as out of the ordinary.
I am of the opposite opinion. Your output shows a non-redundant root pool:

Code:
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:00:11 with 0 errors on Sat Feb 22 16:01:14 2025
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada1p3    ONLINE       0     0     0
          ada0p3    ONLINE       0     0     0

consisting of disks of different sizes:

Code:
=>        40  1953525088  ada0  GPT  (932G)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  1949327360     3  freebsd-zfs  (930G)
  1953523712        1416        - free -  (708K)

=>       40  234441568  ada1  GPT  (112G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  freebsd-swap  (2.0G)
    4196352  230244352     3  freebsd-zfs  (110G)
  234440704        904        - free -  (452K)

If this setup wasn't planned, something bad happened here.
 
No, I have not really planned out this storage setup; this is the first time I've looked into it. I am now going to read about redundancy and setting up RAID-Z, as I have a handful of drives to use.
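
From what I can tell so far, creating a raidz pool on spare disks looks something like this (the pool name and device names are just placeholders for now):
Code:
zpool create tank raidz da1 da2 da3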

Should I not be using a pool on root?

Do the two drives have a bootable system on them?
 
My guess is that the larger drive was your data (home) drive, and you accidentally overwrote it by extending the root pool onto it. If this has happened, only advanced ZFS recovery can help.
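
If that is what happened, the pool history should record when the second disk was added; something along these lines might show it:
Code:
zpool history zroot | grep -Ei 'create|add|attach'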
 