Recovering deleted files from a ZFS pool.

If the files are compressed on disk it won't work,
but already-compressed files like JPEGs may be stored as-is (ZFS skips compressing blocks that don't shrink enough) even when compression is turned on.
These tools don't bother with the filesystem at all; they work on block devices. raidz can also be a problem, because there is no /dev/zpool/somedevice exposing the whole volume the way you get a single device for the array in a RAID5 setup.
They just assume a file starts at the beginning of a block, run something like file(1) on it, and check whether it's a desired file type.
If they detect, let's say, a JPEG, they try to determine the size in bytes from the header and then copy the blocks covering that many bytes from the media to the rescue directory.
That's more or less all there is to it.
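Roughly what such a carver does, hand-rolled for illustration (the device /dev/sdX1, the /rescue directory and the 16 MiB carve size are all made up; it needs GNU grep/dd, and a real tool parses the file format to find the true end of the file, so this naive scan is slow and produces false hits):

    LC_ALL=C grep -obUaP '\xff\xd8\xff' /dev/sdX1 | cut -d: -f1 | while read -r off; do
        # carve a generous fixed-size chunk starting at each JPEG start-of-image marker
        dd if=/dev/sdX1 of=/rescue/carved-"$off".jpg bs=1M count=16 \
           iflag=skip_bytes skip="$off" status=none
    done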
 
He can probably test on another machine/disk, with the same compression settings on a smaller pool, to see whether anything can be rescued.
If that works, it's worth continuing on the real pool.
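For example, something like this on a scratch box (pool name, backing file and compression setting are placeholders; match whatever the real pool uses):

    truncate -s 2G /tmp/carvetest.img
    zpool create -O compression=lz4 carvetest /tmp/carvetest.img
    cp ~/Pictures/*.jpg /carvetest/      # put known files on the pool
    rm /carvetest/*.jpg                  # "lose" them
    # then point the carving tool at /tmp/carvetest.img and see what comes back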
 
While others focused on the possibility of an actual restore (which is fair, that's what you asked), I'm more interested in the "how" of this problem. I'd be pretty angry not to have an answer as to why my data disappeared.
With the information you shared, it does seem the data really was "just" deleted. When I moved ZFS pools between Linux and other systems, I tried (not always possible, or really convenient) to import the foreign pool read-only.
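For example (pool name is a placeholder):

    zpool import -o readonly=on tank
    # or, if the pool isn't found automatically, point it at the device directory:
    zpool import -d /dev -o readonly=on tank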

On the Proxmox side I'd look at cron and systemd timers (systemctl list-timers). A misconfigured systemd-tmpfiles-clean would be worth checking out.
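For example (replace /tank/data with the affected mountpoint):

    systemctl list-timers --all
    crontab -l; ls /etc/cron.d /etc/cron.daily /etc/cron.weekly /etc/cron.monthly
    # any tmpfiles rule that touches the data path would show up here:
    grep -r "/tank/data" /etc/tmpfiles.d /usr/lib/tmpfiles.d /run/tmpfiles.d 2>/dev/null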

Does the container have full rw access to the data? Any chance of some purge job running there?
 
In the simplest case, one can rsync the desired files or directories from .zfs/snapshot/.../whatever/path/you/want to any destination.
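For example, with made-up dataset, snapshot and path names:

    zfs list -t snapshot -r tank/data                 # find a snapshot that still has the files
    rsync -a /tank/data/.zfs/snapshot/daily-2024-06-01/docs/ /root/restored-docs/
    # the .zfs directory is reachable by explicit path even when snapdir=hidden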
You are indeed misunderstanding my point. I'll keep it a bit short because it's somewhat off-topic within the context of the OP's issue, but the main point is this: if you use zfs-send(8) to make a backup of a dataset and then save that as an image file, you cannot access said image file's contents without first (re)importing the data back into a ZFS pool; hence zfs-receive(8).

This is different from restore, which allows you to access image files, even interactively, to simply select what to extract. That works even if said image file is on remote storage and even if the target filesystem isn't UFS.

So, for example: I have an image file called home.zfs which is a backup of my dataset. I'm using a FreeBSD environment which has support for ZFS but uses UFS as its own main filesystem. At this point it is impossible for me to access my ZFS backup in any way without first setting up some kind of ZFS pool to import said dataset into. However... if I had a file called home.ufs which I made with dump, then I wouldn't have this problem, because restore would be able to access said image file and retrieve my data, even if I were using ZFS as my main filesystem.
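To make the contrast concrete, using the hypothetical file names from above (pool, dataset and path names are also placeholders):

    zfs send tank/home@backup > /backups/home.zfs     # opaque replication stream
    zfs receive rescue/home < /backups/home.zfs       # readable only after receiving into some pool ('rescue' here)

    dump -0af /backups/home.ufs /home                 # UFS image made with dump(8)
    restore -i -f /backups/home.ufs                   # browse it interactively and pull out individual files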
 
If you use zfs-send(8) to make a backup of a dataset and then save that as an image file, you cannot access said image file's contents without first (re)importing the data back into a ZFS pool; hence zfs-receive(8).
Thanks for explaining. Yes, I think I see what you mean now. However, when I do a zfs send, I do it from a snapshot. And then I generally don't delete that snapshot if there's no pressing reason to do so, like low disk space. And likewise, on the receive side (which is a different machine for you? Or the same machine?), I don't store ZFS filesystems as images; I receive the snapshot into a backup filesystem on the (separate) backup machine. So if the production machine is healthy, I have all the local snapshots available to restore any damaged/deleted files. If the trouble is more severe, I have the entire filesystem (and all of its snapshots) available on the remote backup machine.
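Roughly like this, with made-up dataset, snapshot and host names:

    zfs snapshot tank/data@2024-06-01
    zfs send -R tank/data@2024-06-01 | ssh backuphost zfs receive -u -d backup
    # -R carries the dataset with its existing snapshots; -u leaves the copy unmounted on the backup box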

But yes, I think I understand your methods better now. Thanks!
 