Solved: How to mount a ZFS partition?

Before anyone jumps on me: I did read Thread how-to-mount-a-zfs-partition.61112, but it did not shed any light on my situation...

I have a disk with a single ZFS partition which was in a system which ran pfSense. Now I've attached it to a system running FreeBSD 11.1 and I can't work out how to mount it. How do I find out what zpool exists on the disk?
 
Run zpool import. In most cases this will immediately mount it. If not, follow it with zfs mount. Read the man pages for those commands.
 
Neither command does anything... Presumably I need a pool name, but I don't have one. I just have a disk with a 1TB freebsd-zfs partition and can't figure out how to mount it.
 
Well, I can't be bothered to read up on that other thread and ZFS happens to be my favorite filesystem. So here goes...

Just using zpool import (without any arguments) will make the system check the currently attached storage media; if it finds a valid ZFS signature, the pool name will be listed. This command does not automatically import and mount filesystems; all it does is detect them.
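If a pool is found, the output looks roughly like this (the pool name, id and device below are only illustrative; yours will differ):
Code:
# zpool import
   pool: zroot
     id: 13945989894702559305
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zroot       ONLINE
          da0p1     ONLINE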

After you've got a name you can proceed with the actual import. Let's assume our pool is called zroot (a fairly common name). There are two important things to keep in mind. First: a pool always contains a filesystem; in other words, the pool is also a filesystem on its own, and because of that it'll need a mount point. Second: ZFS is an intelligent filesystem which keeps track of its history, so you may need to force the import because it will otherwise notice that the pool was last used in a different environment and refuse.

Because /mnt is a commonly used mountpoint, this leads us to: # zpool import -fR /mnt zroot.
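If you want to verify that the alternate root took effect, you can check the pool's altroot property (assuming the pool really is called zroot):
Code:
# zpool get altroot zroot
NAME   PROPERTY  VALUE   SOURCE
zroot  altroot   /mnt    local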

After that you should be able to access your filesystem(s) in /mnt. However... there is more to this story:
Code:
peter@zefiris:/home/peter $ zfs get mountpoint,canmount zroot
NAME   PROPERTY    VALUE       SOURCE
zroot  mountpoint  /           local
zroot  canmount    on          default
A normal filesystem doesn't know where it should be mounted in a hierarchy; that is determined by /etc/fstab (or an individual mount command). ZFS, on the other hand, doesn't use /etc/fstab at all: its filesystems keep track of their proper place on a per-filesystem basis, through the properties listed above. That is the main reason why we used -R in the zpool import command: to make sure the system realizes that the filesystem(s) should be made accessible under a different path than their normal one.

Next stop, very important when your ZFS pool was set up automatically by the installer, is the canmount property. The FreeBSD installer sets this to noauto for your root filesystem, and as a result it will not automatically mount after you've imported your ZFS pool. The reason for doing that is to cater to sysutils/beadm, a decision which I personally consider ridiculous and only showcasing a bad design. But that's offtopic here.

As a result it is possible that you won't be able to access your filesystem(s) even after successfully importing a pool. You can check that they're still available by using zfs list; this should list all the available filesystems in the currently imported pool(s).
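A quick way to see both the mount points and the canmount property in one overview (the listing below is illustrative, assuming a default installer layout imported with -R /mnt):
Code:
# zfs list -o name,canmount,mountpoint
NAME                CANMOUNT  MOUNTPOINT
zroot               on        /mnt/zroot
zroot/ROOT          on        none
zroot/ROOT/default  noauto    /mnt
zroot/usr           on        /mnt/usr
zroot/var           on        /mnt/var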

If you need to mount those "hidden" filesystems just use, for example: # zfs mount zroot. Or, to give a proper example for a default setup: # zfs mount zroot/ROOT/default. Just list the available filesystems and you'll soon see what you should use.

You don't have to worry about specifying a mount point or anything, because all filesystems will be mounted under the virtual root (which we set to /mnt in the example above).

And that's how you mount a ZFS filesystem.

It is noteworthy that you can also use the normal mount command, but I would recommend against it. You'd need to know the ZFS filesystem name before you can do so, meaning that you'd be using the zfs command anyway; it seems pointless to suddenly switch to mount.
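For completeness, on FreeBSD such a mount should look something like this (dataset name taken from the examples above; adjust to whatever zfs list shows on your system):
Code:
# mount -t zfs zroot/ROOT/default /mnt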

Hope this can help.
 

Unfortunately not...
and if it finds a valid ZFS signature then the pool name will be listed.
and what if it doesn't?

I guess I'll put it back in my pfSense box and see if it is accessible there. It's been there for about three years and I can't remember what it was used for...
 
and what if it doesn't?
Then my conclusion would be that it doesn't contain any ZFS pools, or that the pool got corrupted somehow.

What does # file -s /dev/<device> tell you?

Example:
Code:
root@zefiris:/home/peter # gpart show ada0
=>       40  312450656  ada0  GPT  (149G)
         40        256     1  freebsd-boot  (128K)
        296  312450400     2  freebsd-zfs  (149G)

root@zefiris:/home/peter # file -s /dev/ada0p2
/dev/ada0p2: data
Although file doesn't recognize the ZFS filesystem, it does recognize others, and this also tells us that something is on there. Just because your partition type is freebsd-zfs doesn't imply that it actually contains a ZFS pool. It should, but I could just as easily run newfs on it.

So I'd also try this trick to see what it says.

(edit)

pfSense? Then my other theory is that the filesystem could be encrypted. That would explain why no valid ZFS pools are found, and it would also explain why you might get the same result as mine above: file reporting data.
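If you want to test the encryption theory, geli(8) can try to read its metadata from the partition. This is just a quick check, assuming your partition shows up as /dev/da0p1 (adjust to match your gpart show output):
Code:
# geli dump /dev/da0p1
If it prints metadata, the partition is GELI-encrypted and you'd need the key or passphrase before any ZFS pool can be found; if it complains that it can't find valid metadata, then encryption probably isn't the explanation.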
 
What does # file -s /dev/<device> tell you?

Code:
/dev/da0p1: Unix Fast File system [v2] (little-endian) last mounted on /mnt/data, last written at Mon Mar 20 09:00:40 2017, clean flag 1, readonly flag 0, number of blocks 244190636, number of data blocks 236521838, number of cylinder groups 1524, block size 32768, fragment size 4096, average file size 16384, average number of files in dir 64, pending blocks to free 0, pending inodes to free 0, system-wide uuid 0, minimum percentage of free blocks 8, TIME optimization
 
So I'm trying to rescue my FreeBSD machine, where I caused a kernel panic on boot by adding an nvidia line to a config file in /etc.

I was able to follow ShelLuser's post with # zpool import -fR /mnt zroot, only my /mnt point on my USB drive is /PCRootDrive. (A memstick image, selecting single-user mode in the boot menu.)

However when I look in /PCRootDrive I only see
home
tmp
usr
var
zroot

and the zroot directory looks empty.

Is there any way to get to my /etc directory to fix my configuration? Yes, I'm learning the hard way to use boot environments.
 
Hi! I ran into this too! Use "zfs list" after "zpool import", then "zfs mount zroot/ROOT/default":
Code:
root@chu25:~ # zfs list
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
ipub                                            408K   899G    96K  /ipub
pub                                            1.44T   325G  1.44T  /pub
zroot                                          20.0G   193G    96K  /zroot
zroot/ROOT                                     20.0G   193G    96K  none
zroot/ROOT/default                             20.0G   193G  20.0G  /
zroot/home                                       96K   193G    96K  /home
zroot/tmp                                       120K   193G   120K  /tmp
zroot/usr                                       288K   193G    96K  /usr
zroot/usr/ports                                  96K   193G    96K  /usr/ports
zroot/usr/src                                    96K   193G    96K  /usr/src
zroot/var                                       840K   193G    96K  /var
zroot/var/audit                                  96K   193G    96K  /var/audit
zroot/var/crash                                  96K   193G    96K  /var/crash
zroot/var/log                                   304K   193G   304K  /var/log
zroot/var/mail                                  152K   193G   152K  /var/mail
zroot/var/tmp                                    96K   193G    96K  /var/tmp
zroot2                                         39.5G   172G    96K  /mnt/zroot/zroot
zroot2/ROOT                                    37.1G   172G    96K  none
zroot2/ROOT/13.1-RELEASE_2023-10-30_012759        8K   172G  8.60G  /mnt/zroot
zroot2/ROOT/13.2-RELEASE-p4_2023-10-30_013143     8K   172G  8.70G  /mnt/zroot
zroot2/ROOT/13.2-RELEASE-p4_2025-01-11_101509     8K   172G  30.8G  /mnt/zroot
zroot2/ROOT/14.2-RELEASE_2025-01-11_102026        8K   172G  30.8G  /mnt/zroot
zroot2/ROOT/14.2-RELEASE_2025-01-11_121827        8K   172G  30.9G  /mnt/zroot
zroot2/ROOT/14.2-RELEASE_2025-01-12_060136        8K   172G  27.3G  /mnt/zroot
zroot2/ROOT/default                            37.1G   172G  8.47G  /mnt/zroot
zroot2/tmp                                      136K   172G   136K  /mnt/zroot/tmp
zroot2/usr                                     2.15G   172G    96K  /mnt/zroot/usr
zroot2/usr/home                                 377M   172G   377M  /mnt/zroot/usr/home
zroot2/usr/ports                                949M   172G   949M  /mnt/zroot/usr/ports
zroot2/usr/src                                  873M   172G   873M  /mnt/zroot/usr/src
zroot2/var                                      269M   172G    96K  /mnt/zroot/var
zroot2/var/audit                                 96K   172G    96K  /mnt/zroot/var/audit
zroot2/var/crash                                 96K   172G    96K  /mnt/zroot/var/crash
zroot2/var/log                                  133M   172G   133M  /mnt/zroot/var/log
zroot2/var/mail                                 136M   172G   136M  /mnt/zroot/var/mail
zroot2/var/tmp                                   96K   172G    96K  /mnt/zroot/var/tmp
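Putting it all together for this rescue scenario, the whole sequence looks something like this (pool and dataset names assumed to match a default zroot layout as in the listing above; the file you edit is whichever one caused the panic):
Code:
# zpool import -fR /mnt zroot
# zfs list
# zfs mount zroot/ROOT/default
# vi /mnt/etc/rc.conf
# zfs unmount zroot/ROOT/default
# zpool export zroot
After the export the disk is back in a clean state and you can reboot into the repaired system.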
 