ZFS: dd'ing a 512-byte-sector hard drive to an ashift=12 (4K-sector) ZFS volume

Ok, done with the edits. I was looking straight at it, although I have no idea why this is happening: the sysctl for kern.hostid hangs for some reason. It might be a deadlock, or it might be something else.
I can replicate this: once the zpool import hangs, a simple sysctl kern.hostid also hangs. This is annoying; apparently it doesn't stay contained to ZFS, it goes further "up the chain" somehow.
Anyone know any kernel developers by chance? :D
 
It seems I didn't get any further using lldb, and probably won't with any other debugger either. The process just hangs and the debugger disconnects. It's deadlocking on something outside of ZFS for some odd reason.
Eric A. Borisch You seem to be quite knowledgeable, you wouldn't know any ways to find out what exactly is happening?
 
Maybe this is a dumb question, but why would you want to do this, and why do you think it would work? Wouldn't padding blocks with zeroes just result in bad data when reconstituting information across block boundaries?

Just dd the disk to a file on the zfs volume and mount the file as a block device?
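Something along these lines ought to do it on FreeBSD (the device, dataset path, and names here are made up):

dd if=/dev/ada1 of=/tank/images/olddisk.img bs=1m status=progress
mdconfig -a -t vnode -f /tank/images/olddisk.img   # attaches the image as an md(4) device, prints e.g. md0
zpool import -d /dev/md0                           # then import the pool it finds by name

mdconfig(8) attaches the image file as a vnode-backed block device, so the pool on the image can be imported from the md device without ZFS ever seeing a nested zvol.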
 
Maybe this is a dumb question, but why would you want to do this, and why do you think it would work? Wouldn't padding blocks with zeroes just result in bad data when reconstituting information across block boundaries?

Just dd the disk to a file on the zfs volume and mount the file as a block device?
The whole padding thing was a misunderstanding on my part; it might have gotten lost in all of the text.

I am aware that I can dd the disk to a file and store it on a dataset (not on a volume, or at least not without mounting it as a filesystem). I am, however, inclined to think that this should be possible and that I'm simply missing a very simple workaround or something. Or maybe there's a bug in FreeBSD, in which case it's always best to bring it to the attention of the developers. That's why I'm trying to do my due diligence and find out where the problem is situated in the code.
 
You need to do kernel debugging.
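Before going all the way to a kernel debugger, it may be worth grabbing the in-kernel stack traces of the hung processes with procstat; that alone often shows which lock or subsystem they are sleeping on. Roughly:

procstat -kk $(pgrep -f "zpool import")
procstat -kk $(pgrep -x sysctl)

(The pgrep patterns are just examples; any way of getting the PIDs of the stuck zpool and sysctl processes will do.)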

But I will again point out that the developers clearly knew this may not work (it is labeled with a warning in the sysctl's description), yet chose to ship it in that state for years now.

As such, I doubt it is a simple bug to be fixed, but more likely a fundamental (touches many subsystems) issue with how FreeBSD handles devices (and therefore zvols). There is also a simple work-around (use a file-based image) with minimal drawbacks.

My suggestion: only proceed if you are doing this to learn kernel development and debugging. The cost/reward ratio is high (significant changes for a minor new feature), based on the fact that it's been left in this not-that-useful state. (It was likely left in at all to avoid having even more #ifdef blocks separating Linux/FreeBSD code in OpenZFS.)
 
this should be possible
What is "this" though? "Promote" 512 to 4k without messing with the byte order of all the files on the disk at all? If so, I don't think that's going to work. ZFS clearly relies on the sector size which is going to change internal structures and it shouldn't actually matter to the user anyway because ZFS is going to shuffle things around on disk, so even if it stayed at 512, ZFS might move whatever you're relying to stay at the same absolute position on the disk.

Why does it matter that the byte order of the files remains untouched?
 
What is "this" though? "Promote" 512 to 4k without messing with the byte order of all the files on the disk at all? If so, I don't think that's going to work. ZFS clearly relies on the sector size which is going to change internal structures and it shouldn't actually matter to the user anyway because ZFS is going to shuffle things around on disk, so even if it stayed at 512, ZFS might move whatever you're relying to stay at the same absolute position on the disk.
You must have missed the few times I said this isn't an issue. It was a misunderstanding on my side. ZVOL devices present a 512-byte block size, just like many 4K drives present themselves as 512-byte drives. I was wrong on that account and was trying to fix an issue that did not exist.
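For what it's worth, this can be checked directly with diskinfo (the zvol path here is just an example):

diskinfo -v /dev/zvol/tank/testvol

The sectorsize and stripesize lines show the logical sector size and blocking that the zvol device reports to consumers.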
 
Ok, I sort of see that, and now this thread is about recovery like the other one. Can't you just copy the files off and ignore the zpool import?
Technically this thread is now about recursively mounting ZFS volumes. I know, it's getting confusing. But no, it's not about the files which were on the dataset; it's about the files (and versions thereof) that are kept inside earlier snapshots. But if you do have questions or remarks about that, please go to the corresponding thread, undestroy-destroyed-zfs-dataset-snapshots.89611/, before things get even more confusing.
 
Technically this thread is now about recursively mounting ZFS volumes.
Have you tried creating a zpool and datasets on a zvol by hand, instead of with your dd of the old device? I think that would be an interesting data point and would help rule out whether what is being dd'd is contributing to the problem.
 
Have you tried creating a zpool and datasets on a zvol by hand, instead of with your dd of the old device? I think that would be an interesting data point and would help rule out whether what is being dd'd is contributing to the problem.
Thank you for reminding me, that was indeed one of the things I still had to test. In fact, it'd be great if someone else could run a quick test as well (it's rather easy and fast) to rule out that it's something specific to my local machine, since I compiled from source with optimizations and also use a custom kernel. But first the results: on my system(s) this new pool also freezes at import. During creation the pool "imports" well, but after exporting (or rebooting) I can't import it again without hanging the command. Cautiously, I would say that this behaviour suggests that it's not the recursion in itself that's causing this, but rather a difference in code/behaviour between creation and importing.

In order to replicate, you'll need to do the following as root (a filled-in example with made-up names follows the list):
1. sysctl vfs.zfs.vol.recursive=1: do not omit this, or you will get the unhelpful error message "cannot create '<pool>': no such pool or dataset"
2. zfs create -s -V 1g <pool>/<zvol>
3. zpool create <pool> /dev/zvol/<zvol>
4. zpool export <pool>
BEFORE doing step 5, make sure it's OK to reboot your PC! This command will likely hang and prevent you from using sysctl, and possibly further zfs or zpool operations.
5. zpool import <pool>
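Filled in with made-up names (tank being the already-imported pool, innerpool the new one), the sequence looks roughly like this:

sysctl vfs.zfs.vol.recursive=1
zfs create -s -V 1g tank/testvol
zpool create innerpool /dev/zvol/tank/testvol
zpool export innerpool
zpool import innerpool    # this is the command that hangs here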

I myself was busy testing a different hypothesis (amongst other stuff), which turned out to be false, by the way. I got the idea when I read about dd'ing CDs to an ISO image, which requires a fixed blocksize of 2048. I was thinking along the lines that maybe that same reasoning had messed up the copied pool, since I used a larger blocksize in order to copy faster, but alas: no difference with bs=512, except a much longer copy time. Another hypothesis scrapped.
 
During creation the pool "imports" well, but after exporting (or rebooting) I can't import it again without hanging the command.
Earlier in this thread I think you implied importing readonly would work on your dd'd version. Have you tried that with the hand created version, just as a data point?
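For reference, the read-only import I have in mind is along the lines of:

zpool import -o readonly=on <pool>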
 
What do they call this? An XY problem? Are you trying to recursively mount a ZFS volume, or...
  • Are you trying to move your snapshots to an undamaged volume?
  • Just trying to extract one or more snapshots?
  • Are you having problems because you put the dd image on a ZFS volume instead of attaching it from somewhere else so ZFS doesn't throw a fit about recursion?
 
I think the OP is:
I have a working zpool with datasets, and I create a zvol. On that zvol I create a zpool and zfs datasets, and then on the host I zpool import it read/write. So: a zpool created on a zvol on an initial zpool on the hardware.

The dd aspect was something else; go back to posts 38 and 39: the OP did not do the dd there, just manually created the zpool and new datasets.

I'm not sure if this would be recursion, but at least to me it's like "swap on ZFS file".
 
I'm thinking recursion by looking at step 2 in #39. The manpage for zfs-create(8) doesn't mention specifying pools at all, presumably because of the dangerous setting? Also, since that step should vend a zvol, the <zvol> placeholder in the example for step 3 doesn't specify whether it's the one used in step 2 or the one vended by step 2. Where am I supposed to get the <zpool>/<zvol> parameters for step #2 in this experiment, anyway? It's pretty clear why this is a "dangerous" setting if all of this occurs on the same physical disk.

My process is "zpool create" out of a GPT-labeled freebsd-zfs slice/partition/whatever, and _then_ "zfs create" the dataset inside the pool.
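Roughly, with a made-up GPT label and names:

zpool create tank /dev/gpt/disk0-zfs
zfs create tank/data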

Anyway, since it probably is recursion, that's why I'm asking what for.
 
Anyway, since it probably is recursion, that's why I'm asking what for.

This topic is still about dd'ing a physical disk (which in my case does contain a freebsd-zfs partition) onto a ZFS volume. The entire premise is that, as far as I'm concerned, a ZFS volume is the equivalent of a virtual disk, with all that that entails. Thus anything that is possible on a physical disk should, according to my premise, be possible on a ZVOL as well. However, after my misunderstandings it became obvious it wasn't as simple a task as I had expected.
The idea isn't just academic or anything; it does indeed fit into a story of data recovery (see the other thread). While the recovery is of course perfectly possible with a file-based disk image, or with a physical copy of said image, I don't consider the idea far-fetched: a ZVOL performs better, especially where writes are concerned, and random access will most likely be faster as well. Both are useful when recovering data, which may entail a lot of random access.
Earlier in this thread I think you implied importing readonly would work on your dd'd version. Have you tried that with the hand created version, just as a data point?
I have not, correct. I'm assuming (which is always risky) that the behaviour wouldn't be worse with the manually created zpool than with the dd'd one. This might be wrong, though; I won't know for sure until I test it.
 