Solved: bectl commands fail

There are a couple of reasons why libbe_init(const char*) may fail. You can see (or guess) them in /usr/src/lib/libbe/be.c. That said, I don't think this will really help.

What is the result of zpool status?
Also, install and try the vermaden utility sysutils/beadm and see what beadm list gives as output.
 
I never knew that beadm was vermaden's. Hopefully this mention gets his attention so he sees my "thank you for beadm". I've had it work a few times when bectl didn't. Rather than try to troubleshoot, I just installed beadm and used it without problems.

There's a nice tutorial on using beadm by Mike Lucas. Dated, but still quite useful

 
Thanks for all the suggestions. I still get the same error after installing beadm. When I try beadm I get an interesting message:

Code:
# bectl  list
libbe_init("") failed.

[255]# beadm  list
ERROR: This system is not configured for boot environments

[1]# zpool status
  pool: root
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        root        ONLINE       0     0     0
          nda0p4    ONLINE       0     0     0

errors: No known data errors

# freebsd-version -kru
14.1-RELEASE-p5
14.1-RELEASE-p5
14.1-RELEASE-p6

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
root  14.1G  44.0G  14.1G  none

# mount
root on / (zfs, local, nfsv4acls)
devfs on /dev (devfs)

I am running ZFS. What do I need to do to configure it for boot environments?
 
It looks like something has gone wrong with your ZFS installation.

How did you install your root-on-ZFS?
Please post the output of zfs list -o name,canmount,mountpoint

There should be at least a line like:
Code:
NAME                 CANMOUNT   MOUNTPOINT
zroot
[...]
zroot/ROOT/default   noauto     none
The mountpoint in this line may show / instead of none.
This will result in:
Code:
# bectl list
BE       Active  Mountpoint  Space  Created
default  R       /
where the default BE resides in the dataset zroot/ROOT/default, mounted on /.
 
I used the installer and didn't do anything special.

Code:
# zfs list -o name,canmount,mountpoint
NAME  CANMOUNT  MOUNTPOINT
root  on        none
 
Code:
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
root  14.1G  44.0G  14.1G  none
# mount
root on / (zfs, local, nfsv4acls)
devfs on /dev (devfs)


Check this: https://is.gd/BECTL

This is the default ZFS setup with Auto (ZFS) option from FreeBSD installer:

vermaden_2024-11-18_14-54-27.png
 
As root, run zpool history root.
I don't think there is a problem with the pool name "root", but I've never seen that from the installer; I've always seen "zroot" as the pool name.
 
The same error happened to me on a system not originally installed with boot environments in mind, which I later tried to convert to boot environments. I don't remember exactly what I did, but I reorganized my ZFS datasets to match a BE-enabled pool layout.

What I remember is that the boot environment root cannot be the ZFS pool root itself; mounting the pool's top-level dataset directly as / causes this error, so you'd need to create a root/ROOT dataset.
 
Thanks for all the replies. Do I have any options besides a reinstall? Any option to copy files, or to use zfs send/receive instead?

BTW, I used the installer but didn't use auto partitioning, since this laptop dual-boots Windows.
 
Thanks for all the replies. Do I have any options besides a reinstall? Any option to copy files, or to use zfs send/receive instead?

BTW, I used the installer but didn't use auto partitioning, since this laptop dual-boots Windows.

After making a backup ...

Does this work? (Probably not, but worth a try ...)

Code:
# zfs create root/ROOT
# zfs rename -u root root/ROOT/default

Other way to try:

Code:
# zfs snapshot root@default
# zfs create root/ROOT
# zfs send root@default | zfs recv root/ROOT/default
 
I'm a new FreeBSD user trying the commands listed here: https://klarasystems.com/articles/managing-boot-environments/

I get an error when trying any bectl command:

Code:
# bectl list
libbe_init("") failed.

I am using ZFS. Any suggestions on troubleshooting?
The reason for this is that your system is a virgin FreeBSD install, i.e. it has not been updated by freebsd-update yet. I get this same error on my 15-CURRENT systems that I maintain using buildworld/kernel, i.e. there are no boot environments. Run freebsd-update fetch and follow the instructions. Reboot when it tells you to. Then run bectl list again.
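The suggested steps, as commands. This is only a sketch; freebsd-update prints its own instructions, and those take precedence:

```shell
# Sketch of the steps suggested above; follow freebsd-update's
# own on-screen instructions if they differ.
freebsd-update fetch      # download pending updates
freebsd-update install    # apply them
shutdown -r now           # reboot when told to
# after the reboot:
bectl list                # check the boot environments again
```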
 
After making a backup ...

Does this work? (Probably not, but worth a try ...)

Code:
# zfs create root/ROOT
# zfs rename -u root root/ROOT/default

Other way to try:

Code:
# zfs snapshot root@default
# zfs create root/ROOT
# zfs send root@default | zfs recv root/ROOT/default
Thank you, this is what I was looking for. I'll try it on the weekend, so I can reinstall in case things go south.
 
I get this same error on my 15-CURRENT systems that I maintain using buildworld/kernel, i.e. there are no boot environments.
If I understand correctly, as a newbie: if there is ZFS, there may not be a BE? I thought that if there is ZFS, then there MUST be a BE. Don't ZFS and BEs always exist and work together?
 
If I understand correctly, as a newbie: if there is ZFS, there may not be a BE? I thought that if there is ZFS, then there MUST be a BE. Don't ZFS and BEs always exist and work together?
Actually, there are some requirements in order to use BEs. They can vary depending on the tool, but you need a minimum dataset organization; see vermaden's posts.

If you make a default installation, you will meet these requirements. But if you organize your pool by hand, without care, it's likely you won't be able to use BEs.
 
I fixed it by taking a snapshot, sending it to another disk, destroying and recreating the pool correctly, receiving the snapshot, and then rolling back to it. Thanks everyone for your help and suggestions.
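For anyone finding this thread later, the procedure described above could look roughly like this. This is a hypothetical sketch, not the poster's exact commands: the pool, device, and snapshot names are examples, it assumes a second disk holding a scratch pool, and it must be run from live media after a verified backup.

```shell
# HYPOTHETICAL sketch of the fix described above; names are examples.
# Run from live media, with a verified backup, at your own risk.

# 1. Snapshot the broken pool's root and send it to a scratch pool:
zfs snapshot root@migrate
zfs send root@migrate | zfs recv scratch/backup

# 2. Destroy and recreate the pool with a BE-friendly layout:
zpool destroy root
zpool create -O mountpoint=none -O canmount=off zroot nda0p4
zfs create -o mountpoint=none zroot/ROOT

# 3. Receive the saved data as the default BE and make it bootable:
zfs send scratch/backup@migrate | zfs recv zroot/ROOT/default
zfs set mountpoint=/ zroot/ROOT/default
zfs set canmount=noauto zroot/ROOT/default
zpool set bootfs=zroot/ROOT/default zroot
```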
 
Just to be sure: can you post your output of
zfs list -o name,used,referenced,canmount,mountpoint

Any files in the filesystem mounted as / (this should be the filesystem zroot/ROOT/default) are treated as part of the OS and therefore part of a BE.
 
Just to be sure: can you post your output of
zfs list -o name,used,referenced,canmount,mountpoint

Any files in the filesystem mounted as / (this should be the filesystem zroot/ROOT/default) are treated as part of the OS and therefore part of a BE.

Are you talking about separating out the home directory?
Code:
 zfs list -o name,used,referenced,canmount,mountpoint
NAME                USED  REFER  CANMOUNT  MOUNTPOINT
root               15.2G    96K  on        none
root/ROOT          15.2G    96K  on        none
root/ROOT/default  15.2G  14.9G  on        /
 
is now
Code:
zfs list -o name,used,referenced,canmount,mountpoint
NAME                USED  REFER  CANMOUNT  MOUNTPOINT
root               21.7G    96K  on        none
root/ROOT          15.3G    96K  on        none
root/ROOT/default  15.3G  14.9G  on        /
root/home          6.37G  6.37G  on        /home

Just to be sure: can you post your output of
zfs list -o name,used,referenced,canmount,mountpoint

Any files in the filesystem mounted as / (this should be the filesystem zroot/ROOT/default) are treated as part of the OS and therefore part of a BE.
 
Are you talking about separating out the home directory?
Code:
 zfs list -o name,used,referenced,canmount,mountpoint
NAME                USED  REFER  CANMOUNT  MOUNTPOINT
root               15.2G    96K  on        none
root/ROOT          15.2G    96K  on        none
root/ROOT/default  15.2G  14.9G  on        /
This structure is not (by a long shot) how bsdinstall(8), as part of the default auto ZFS-on-root installation, would have installed it. More importantly, this is not how BEs are intended to be structured, and almost certainly not how you'd like the ZFS dataset structure to be laid out.

Taking the structure from the screenshot of vermaden's slide, after zfs list -o name,canmount,mountpoint, for example:
Code:
NAME                 CANMOUNT   MOUNTPOINT
zroot
[...]
zroot/ROOT/default   noauto     none
zroot/usr            off        /usr
zroot/usr/home       on         /usr/home
This kind of structure results in having your files in /usr/home reside in the dataset zroot/usr/home and thereby outside of any BE. This means that when switching from one BE to another, the home directories of users, including your own, stay unchanged: they fall outside of what is considered to be important to a bootable OS (read: the BEs).

If, on the other hand, all files reside in the dataset zroot/ROOT/default, then creating a new BE will include the user home directories (yours too, of course). Any changes in your home directory made in an active/running BE_X are part of BE_X: booting BE_Y, which does not contain those changes, will therefore not show them in your home directory. As a small exercise in your current setup, where you've mapped zroot/ROOT/default fully to /, I suggest you create an extra boot environment and boot into it (for example with the boot-once feature), then make a small change in a file in your home directory. Then boot back into your original BE and verify the status of your small change.

The same applies to more ZFS datasets, for example zroot/var and zroot/var/log: the former, again, is not mounted (but has a mountpoint) and the latter is mounted at /var/log, thereby putting it outside of any BE, so its contents are not affected by any change of BE, as one would like. Also keep in mind: the more data you let reside in a separate dataset (like zroot/var/log), the less it contributes to each and every BE. Each newly created BE initially has a very small size overhead. With the passing of time, BEs get bigger and bigger; this can total up to a sizeable amount, apart from the clutter from a management point of view. When you put everything in the zroot/ROOT/default dataset, the sizes of BEs increase much faster.
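As a concrete sketch of that boot-once exercise (the BE name "demo" is made up for illustration):

```shell
# HYPOTHETICAL walk-through of the exercise above; BE name is an example.
bectl create demo            # clone the running BE into a new one
bectl activate -t demo       # -t = temporary: boot it once, then fall back
shutdown -r now              # reboot into "demo"

# ... now running in "demo": make a small change ...
echo "hello from demo" >> ~/marker.txt

# reboot again; the system returns to the original BE
shutdown -r now
# Back in the original BE: with /home inside zroot/ROOT/default,
# ~/marker.txt does not exist here; with a separate home dataset,
# the file would persist across BEs.
```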



Usually, new users follow a path something like:
  1. Choose the default auto-ZFS-on-root installation by the installer.
    - You get the ZFS file systems (=dataset) structure and its mappings created for you!
  2. Discover and learn about BEs; experiment with them.
  3. Use them in practical situations.
  4. Understand the meaning of the structure of ZFS datasets, their flexibility and other options.
  5. Discover and experiment with more advanced applications of BEs.
    - For example, "pre-installing" a complete upgrade of FreeBSD without any reboot: pick the time of the reboot and thereby actuate the upgrade with one single reboot of the system.
You have chosen the reverse (apart from the last item I suppose), and thereby, you could say that you have thrown yourself into the deep end. As can be viewed in Thread how-to-manually-setup-freebsd-on-zfs-root.94626, creating a ZFS-on-root structure manually isn't trivial.
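The advanced application mentioned in the last step, preparing an upgrade inside an inactive BE, can be sketched as follows. This is a hedged example, not a complete procedure: the BE name and target release are made up, and freebsd-update's own instructions (including any repeated install runs it asks for) take precedence.

```shell
# Sketch: upgrade an inactive BE, then switch with a single reboot.
# BE name and release are EXAMPLES; follow freebsd-update's prompts.
bectl create 14.2-upgrade                        # new BE from the running one
bectl mount 14.2-upgrade /mnt                    # mount it for modification
freebsd-update -b /mnt -r 14.2-RELEASE upgrade   # fetch the upgrade into it
freebsd-update -b /mnt install                   # install inside the BE
bectl umount 14.2-upgrade
bectl activate 14.2-upgrade                      # becomes active on next boot
shutdown -r now                                  # the one reboot
```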

In my list at BEs & snapshots is Allan Jude's presentation of ZFS Powered Magic Upgrades. Below the list there, I've hinted at some aspects of the flexibility of BEs and their structures. Allan Jude explains the existence of extra ZFS datasets and their mappings, and why, for example, the filesystem (=ZFS dataset) zroot/usr is not mounted but has mountpoint /usr for inheritance purposes, as shown here:
Code:
NAME                 CANMOUNT   MOUNTPOINT
zroot
[...]
zroot/usr            off        /usr
zroot/usr/home       on         /usr/home
and why zroot/usr/home is mounted. Alternatively, you can download the complete presentation from FOSDEM (and not be distracted by YouTube commercials).
I would say the presentation as a whole is an advanced look at what potential BEs have. However, Allan's explanation of how these ZFS dataset structures are set up, how they work, and what options you have is the best I've seen. This particular explanation runs from ca. 5:00 to about 8:46, where he continues with the slide "Going Further". The ZFS file system structure is in view at 6:43. I do suggest, however, that you also listen carefully to the part leading up to it.

Unfortunately, you now have seen three different ZFS file systems structures:
  1. the structure as displayed by vermaden's slide earlier in this thread
  2. the structure as displayed by Allan Jude in his presentation
  3. the structure as displayed by bectl(8) - (actually two structures)
#1 and #2 are structures as bsdinstall once created them. To the best of my knowledge, none of these three match the structure created by the current version of bsdinstall(8). bectl(8) even describes an alternative structure of so-called "deep boot environments". All this can be rather confusing, as BEs are very flexible.

If you want the structure that bsdinstall from your current FreeBSD release would have created, I suggest you let bsdinstall work its magic on a separate disk with the default auto ZFS-on-root. Then you can study the result (make use of zfs list -o name,used,referenced,canmount,mountpoint) and recreate the structure onto your intended partition. Installing FreeBSD Root on ZFS using GPT may be helpful.

Perhaps you can reinstall your release version onto your (dual-booted) internal disk partition; however, I do not have any experience with that.
 