Hello, I have a physical FreeBSD server that I installed with ZFS on root. To ease maintenance I'd like to move it into a virtual machine (VMware ESXi), but since virtualizing ZFS is not recommended, I would like to convert it to UFS. Here's my plan so far; can anyone confirm whether I'm on the right track?
1. Create a new UFS disk image:
dd if=/dev/zero of=ufsroot bs=`expr 1024 \* 1024 \* 1024` count=50
mdconfig -f ufsroot -u 0
bsdlabel -w md0 auto
newfs -U md0a
mount /dev/md0a /mnt
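(Aside: I'm assuming the dd step could also be done with a sparse backing file created by truncate(1), so that 50 GB of zeros don't have to be written up front. I haven't verified this end to end; the remaining steps would be unchanged.)
truncate -s 50g ufsroot   # sparse 50 GB backing file instead of dd'ing zeros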
2. Create a ZFS snapshot of / and copy it to the UFS image:
zfs snapshot zroot/ROOT/default@mysnapshot
(cd /.zfs/snapshot/mysnapshot ; tar cf - *) | (cd /mnt; tar xvfp -)
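(Once the copy finishes, I'm planning a rough sanity check by comparing the snapshot against the UFS copy; this is just my own guess at a reasonable check, not something I've seen recommended:)
find /.zfs/snapshot/mysnapshot | wc -l   # file counts should roughly match
find /mnt | wc -l
du -Asx /.zfs/snapshot/mysnapshot /mnt   # -A = apparent size, ignoring ZFS compression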
3. Convert the raw image to VMDK:
umount /mnt
qemu-img convert -f raw -O vmdk ufsroot ufsroot.vmdk
# then import it into a new machine in VMware
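One thing I'm unsure about here (this is an assumption on my part, not something I've tested): whether ESXi will accept the default monolithicSparse VMDK that qemu-img produces, or whether I should request a different subformat, or convert the disk on the ESXi host with vmkfstools, along the lines of:
qemu-img convert -f raw -O vmdk -o subformat=streamOptimized ufsroot ufsroot.vmdk
# or, after uploading the VMDK to a datastore, on the ESXi host:
vmkfstools -i ufsroot.vmdk -d thin ufsroot-esxi.vmdk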
The advantage of this approach is that I will (hopefully) be able to use the exact custom kernel and userland versions of FreeBSD that I have on the physical machine, rather than reinstalling from scratch and copying over the data. Basically, I want the virtual system to be as identical as possible to the physical machine, except on UFS instead of ZFS. I only need the root filesystem, so I can boot it and tinker virtually before deploying. Sounds reasonable, no? Except I'm running into a few problems I don't understand:
To create the UFS image for the ZFS root, I first looked at df -h / and it was about 10 GB, so I used roughly that size. However, it quickly filled up. I learned from this post (Discrepancy in expected vs. actual capacity of ZFS filesystem) that "df could return a value smaller or larger than zfs list since it does not understand how to interpret compression, snapshots, etc.", so I examined the zfs list size instead, and it was a few gigabytes larger, about 15 GB. Not too bad, but...

When I actually copy the data using the commands above, it is much larger. (It also copies very slowly, presumably because I'm storing the image on the same physical disks I'm copying from, so everything has to be read and written back through the disk image layer.) At this point it is still copying, but it is already 24 GB and growing. Can this discrepancy be attributed to additional "slack" in the UFS filesystem versus ZFS? Is it possible to get a better estimate of how big an image I should allocate? Or am I copying the data incorrectly somehow?
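For reference, this is what I'm planning to look at next for a better estimate (my assumption being that logicalused and the apparent size reflect what the uncompressed data would occupy on UFS, give or take metadata and block-size overhead):
zfs get used,logicalused,compressratio zroot/ROOT/default   # logical (uncompressed) size of the dataset
du -Asxh /   # apparent size as seen through the filesystem, ignoring ZFS compression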
Secondly, piping tar with the -p flag preserves permissions, but it seems to choke on sockets and other special files:
tar: var/run/devd.seqpacket.pipe: tar format cannot archive socket
Is there a more accurate tool to copy files from ZFS to UFS? My first choice would be dump(8)/restore(8), but of course that is only for UFS.
cp similarly refuses to copy sockets ("cp: /var/run/devd.seqpacket.pipe is a socket (not copied)."). Fortunately with devfs I don't have to worry about copying /dev device nodes, but there are about 80 socket files that are not archived. Is this a problem, or can I just let the apps that use them recreate them? It looks like I can manually create a socket with nc -lkU /tmp/sock, but this won't copy the ownership/permissions.
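The best I've come up with for recreating one by hand is something like the following (untested sketch; /tmp/sock and the root:wheel/0660 values are placeholders, and I'd read the real owner/mode off the ZFS side first):
nc -lkU /tmp/sock &   # bind a listening unix-domain socket at the path
chown root:wheel /tmp/sock   # placeholder owner, copied from the original socket
chmod 0660 /tmp/sock   # placeholder mode, likewise
kill %1   # stop the placeholder listener; the socket file itself should remain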
Are there other filesystem nodes that would be backed up by the UFS dump/restore tools but wouldn't be by tar/cp?

Finally, does it make sense what I'm trying to do? I searched around for "convert ZFS to UFS" but all the results seem to be about people going in the opposite direction, from UFS to ZFS. Thanks in advance for any advice and suggestions, and I hope this is the appropriate section.