bectl clarifications

jbo@

Developer
I have some questions regarding bectl(8) after reading the corresponding manual.

1. The section for bectl create tells us that the -r flag creates a recursive boot environment. I'm not sure whether I understand what this exactly implies. So far I've only used bectl create without any additional flags. Is this analogous to creating a recursive ZFS snapshot? What are use cases for this?

2. The section for bectl destroy tells us that the -o flag will destroy the origin as well. What exactly is the origin? I assume that this would be the previous boot environment? That raises these questions:
  • If I have multiple preceding BEs, will the -o flag only delete the "closest" previous BE or all preceding ones?
  • What happens if I have preceding BEs and I destroy one without the -o flag? Is this analogous to destroying a ZFS snapshot, where previous and later snapshots will be unaffected?
In general, before updating a FreeBSD system, I run these commands:
Code:
bectl create 20220312
bectl activate 20220312
shutdown -r now
# perform actual update (eg. freebsd-update or update from source)
shutdown -r now
Is this the way to go? Reboots are of course not always necessary - I'm more asking about the overall process involving the use of bectl(8).
 
( My Opinions, my understanding of zfs, snapshots, clones, etc as a user not developer. )

First:
The commands you are using to upgrade are not a bad way. If your intent is "the BE that I just created is where I want the update", then you need to reboot after the activate so you are in the correct BE. The reboot after the actual update is required so you are running the updated BE. I've linked to vermaden's material about upgrading into a chroot before (I think it is good for going across versions, like 12.x to 13.x), but there are a couple of issues related to the recording of package updates (ask grahamperrin about that).
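To make the two intents concrete, here is a sketch of both command sequences (the BE name 20220312 is just the example from the question; the ordering is the point, not the name):
Code:
# Intent A: perform the update inside the new BE.
bectl create 20220312
bectl activate 20220312
shutdown -r now            # boot into 20220312 first
# perform actual update (eg. freebsd-update or update from source)
shutdown -r now            # boot the updated BE

# Intent B: keep the new BE as a pristine fallback; update the running BE.
bectl create 20220312
# perform actual update in the currently running BE
shutdown -r now
# if the update went wrong: select 20220312 at the loader
# (or bectl activate 20220312) and reboot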

Now bectl:
Technically a Boot Environment is a ZFS clone, clones always come from snapshots, so think of clones as "writeable snapshots".
After you do a bectl create, if you do zfs list -t snap you should see a new snapshot that your clone (BE) is created from.
The -o on bectl destroy will delete the BE (clone) and the snapshot that was used to create it.
Since ZFS is copy-on-write, snapshots are merely "block pointers", and clones hold the changes. Deleting the clone and the snapshot it was created from simply means that you've deleted the new BE completely. Snapshots can wind up holding data as things are deleted/changed in the datasets.
The beadm command has the "-o" behavior by default.
Doing bectl create and destroy with zfs list -t snap in between can be enlightening.
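For example, the cycle might look like this on a hypothetical pool named zroot (names and output are illustrative, not copied from a real system):
Code:
% bectl create test
% zfs list -t snap
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
zroot/ROOT/default@2022-03-12-10:00:00-0      0      -   4.1G  -
% bectl destroy test       # destroys the clone; the snapshot above remains
% # with bectl destroy -o, the origin snapshot would be destroyed as well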

bectl create -r: I am not sure, I've never used it, but since a BE starts with a snapshot, I think your guess of "is it like a recursive snapshot" is correct. I have no idea what the implications are for a Boot Environment.
 
Thank you for your reply - good to know that I'm not completely off here :D

So just to get this straight: bectl destroy -o has the effect that the underlying ZFS clone gets destroyed as well. If one omits the -o flag the BE itself gets destroyed but not the underlying ZFS clone?
In what situation would one want to destroy the BE itself but not the underlying ZFS clone? As I understood, boot environments are just ZFS clones (as you stated yourself). Therefore, I have a hard time thinking of such a situation.
 
The BE is the clone (there is no separate underlying clone to look for).
Without the -o flag the BE/clone is destroyed, meaning any changes are deleted, but the snapshot remains.
Now don't forget that when the snapshot was created, it represents the state of the dataset at that time. If the dataset does not change, the snapshot size is basically minimal. If the dataset changes, the snapshot grows in size. That's the "space used" issue with snapshots.
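This space behaviour can be observed on any dataset, not just a BE. A sketch, with illustrative dataset names:
Code:
% zfs snapshot zroot/ROOT/default@before
% zfs list -t snap zroot/ROOT/default
# USED is near zero right after creation
# ... delete or overwrite files in the dataset ...
% zfs list -t snap zroot/ROOT/default
# USED grows: the snapshot now holds the only copy of the old blocks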

Now as to your question, I agree. If you created a BE, then realized it's broken or bad and deleted it, you'd want everything gone. To me that means the BE (clone) and the underlying snapshot. If you look at the FreeBSD mailing lists, I think there has been a lot of discussion around the correct behavior. Opinion seems about 50/50 on whether bectl destroy should delete both the clone and the snapshot it was created from.

One can always zfs list -t snapshot and then manually delete them later on, but that's extra work you shouldn't have to do.
 
… If one omits the -o flag the BE itself gets destroyed but not the underlying ZFS clone? …

It might help to think of o for origin.
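The origin of each clone can be inspected directly; for a BE it points at the snapshot the BE was created from. A sketch, with illustrative pool and BE names:
Code:
% zfs get -r origin zroot/ROOT
NAME                 PROPERTY  VALUE                                     SOURCE
zroot/ROOT/default   origin    -                                         -
zroot/ROOT/20220312  origin    zroot/ROOT/default@2022-03-12-10:00:00-0  -
Here, bectl destroy -o 20220312 would remove that origin snapshot along with the clone; without -o, the snapshot stays behind.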

Then, instead of this:
[attached screenshot]

– maybe what's suggested at <{link removed}>
  • keyword if
 
Snapshots

[attached screenshot]


zfs destroy n250511-5f73b3338ee-d@2022-01-14-06:59:29-0 (zfs-destroy(8)) did not work (and I did not realise why until after I posted, oops).

bectl destroy n250511-5f73b3338ee-d@2022-01-14-06:59:29-0 did work.

Code:
% bectl list -c creation
BE                    Active Mountpoint Space Created
n250511-5f73b3338ee-d -      -          4.94G 2021-11-13 15:43
n252381-75d20a5e386-b -      -          6.80G 2022-01-12 23:23
n252450-5efa7281a79-a -      -          6.49G 2022-01-14 19:27
n252483-c8f8299a230-b -      -          4.84G 2022-01-17 14:24
n252505-cc68614da82-a -      -          4.90G 2022-01-18 14:26
n252531-0ce7909cd0b-h -      -          5.71G 2022-02-06 12:24
n252997-b6724f7004c-c -      -          6.17G 2022-02-11 23:07
n253116-39a36707bd3-e -      -          5.66G 2022-02-20 07:03
n253343-9835900cb95-c -      -          1.54G 2022-02-27 14:58
n253627-25375b1415f-b -      -          3.10M 2022-03-06 04:57
n253627-25375b1415f-c -      -          209M  2022-03-07 03:48
n253627-25375b1415f-d NR     /          148G  2022-03-10 10:04
% bectl list -s -c creation
BE/Dataset/Snapshot                                         Active Mountpoint Space Created

n250511-5f73b3338ee-d
  august/ROOT/n250511-5f73b3338ee-d                         -      -          9.21M 2021-11-13 15:43
    august/ROOT/n253627-25375b1415f-d@2021-11-14-00:24:29-0 -      -          4.93G 2021-11-14 00:24
  n250511-5f73b3338ee-d@2022-01-14-06:59:29-0               -      -          3.28M 2022-01-14 06:59

n252381-75d20a5e386-b
  august/ROOT/n252381-75d20a5e386-b                         -      -          1.05M 2022-01-12 23:23
    august/ROOT/n253627-25375b1415f-d@2022-01-14-19:27:21-0 -      -          6.80G 2022-01-14 19:27

n252450-5efa7281a79-a
  august/ROOT/n252450-5efa7281a79-a                         -      -          3.27M 2022-01-14 19:27
    august/ROOT/n253627-25375b1415f-d@2022-01-17-04:49:36-0 -      -          6.49G 2022-01-17 04:49

n252483-c8f8299a230-b
  august/ROOT/n252483-c8f8299a230-b                         -      -          3.64M 2022-01-17 14:24
    august/ROOT/n253627-25375b1415f-d@2022-01-18-14:26:02-0 -      -          4.83G 2022-01-18 14:26

n252505-cc68614da82-a
  august/ROOT/n252505-cc68614da82-a                         -      -          3.20M 2022-01-18 14:26
    august/ROOT/n253627-25375b1415f-d@2022-01-19-16:22:12-0 -      -          4.90G 2022-01-19 16:22

n252531-0ce7909cd0b-h
  august/ROOT/n252531-0ce7909cd0b-h                         -      -          1.90M 2022-02-06 12:24
    august/ROOT/n253627-25375b1415f-d@2022-02-07-11:25:41-0 -      -          5.71G 2022-02-07 11:25

n252997-b6724f7004c-c
  august/ROOT/n252997-b6724f7004c-c                         -      -          1.87M 2022-02-11 23:07
    august/ROOT/n253627-25375b1415f-d@2022-02-12-17:19:08-0 -      -          6.17G 2022-02-12 17:19

n253116-39a36707bd3-e
  august/ROOT/n253116-39a36707bd3-e                         -      -          4.83M 2022-02-20 07:03
    august/ROOT/n253627-25375b1415f-d@2022-02-23-00:42:44-0 -      -          5.65G 2022-02-23 00:42

n253343-9835900cb95-c
  august/ROOT/n253343-9835900cb95-c                         -      -          2.89M 2022-02-27 14:58
    august/ROOT/n253627-25375b1415f-d@2022-03-05-15:47:28-0 -      -          1.54G 2022-03-05 15:47

n253627-25375b1415f-b
  august/ROOT/n253627-25375b1415f-b                         -      -          3.10M 2022-03-06 04:57
    august/ROOT/n253627-25375b1415f-d@2022-03-07-03:48:49-0 -      -          0     2022-03-07 03:48

n253627-25375b1415f-c
  august/ROOT/n253627-25375b1415f-c                         -      -          3.14M 2022-03-07 03:48
    august/ROOT/n253627-25375b1415f-d@2022-03-10-10:04:31-0 -      -          206M  2022-03-10 10:04

n253627-25375b1415f-d
  august/ROOT/n253627-25375b1415f-d                         NR     /          148G  2022-03-10 10:04
  n253627-25375b1415f-d@2021-07-10-04:31:39-0               -      -          13.8G 2021-07-10 04:31
  n253627-25375b1415f-d@2021-11-13-15:43:33-0               -      -          4.94G 2021-11-13 15:43
  n253627-25375b1415f-d@2021-11-14-00:24:29-0               -      -          4.93G 2021-11-14 00:24
  n253627-25375b1415f-d@2022-01-14-19:27:21-0               -      -          6.80G 2022-01-14 19:27
  n253627-25375b1415f-d@2022-01-17-04:49:36-0               -      -          6.49G 2022-01-17 04:49
  n253627-25375b1415f-d@2022-01-18-14:26:02-0               -      -          4.83G 2022-01-18 14:26
  n253627-25375b1415f-d@2022-01-19-16:22:12-0               -      -          4.90G 2022-01-19 16:22
  n253627-25375b1415f-d@2022-02-07-11:25:41-0               -      -          5.71G 2022-02-07 11:25
  n253627-25375b1415f-d@2022-02-12-17:19:08-0               -      -          6.17G 2022-02-12 17:19
  n253627-25375b1415f-d@2022-02-23-00:42:44-0               -      -          5.65G 2022-02-23 00:42
  n253627-25375b1415f-d@2022-03-05-15:47:28-0               -      -          1.54G 2022-03-05 15:47
  n253627-25375b1415f-d@2022-03-07-03:48:38-0               -      -          0     2022-03-07 03:48
  n253627-25375b1415f-d@2022-03-07-03:48:49-0               -      -          0     2022-03-07 03:48
  n253627-25375b1415f-d@2022-03-10-10:04:31-0               -      -          206M  2022-03-10 10:04
% su -
Password:
root@mowa219-gjp4-8570p-freebsd:~ # bectl destroy n250511-5f73b3338ee-d@2022-01-14-06:59:29-0
root@mowa219-gjp4-8570p-freebsd:~ # exit
logout
% bectl list -c creation
BE                    Active Mountpoint Space Created
n250511-5f73b3338ee-d -      -          4.94G 2021-11-13 15:43
n252381-75d20a5e386-b -      -          6.80G 2022-01-12 23:23
n252450-5efa7281a79-a -      -          6.49G 2022-01-14 19:27
n252483-c8f8299a230-b -      -          4.84G 2022-01-17 14:24
n252505-cc68614da82-a -      -          4.90G 2022-01-18 14:26
n252531-0ce7909cd0b-h -      -          5.71G 2022-02-06 12:24
n252997-b6724f7004c-c -      -          6.17G 2022-02-11 23:07
n253116-39a36707bd3-e -      -          5.66G 2022-02-20 07:03
n253343-9835900cb95-c -      -          1.54G 2022-02-27 14:58
n253627-25375b1415f-b -      -          3.10M 2022-03-06 04:57
n253627-25375b1415f-c -      -          209M  2022-03-07 03:48
n253627-25375b1415f-d NR     /          148G  2022-03-10 10:04
% bectl list -s -c creation
BE/Dataset/Snapshot                                         Active Mountpoint Space Created

n250511-5f73b3338ee-d
  august/ROOT/n250511-5f73b3338ee-d                         -      -          5.93M 2021-11-13 15:43
    august/ROOT/n253627-25375b1415f-d@2021-11-14-00:24:29-0 -      -          4.93G 2021-11-14 00:24

n252381-75d20a5e386-b
  august/ROOT/n252381-75d20a5e386-b                         -      -          1.05M 2022-01-12 23:23
    august/ROOT/n253627-25375b1415f-d@2022-01-14-19:27:21-0 -      -          6.80G 2022-01-14 19:27

n252450-5efa7281a79-a
  august/ROOT/n252450-5efa7281a79-a                         -      -          3.27M 2022-01-14 19:27
    august/ROOT/n253627-25375b1415f-d@2022-01-17-04:49:36-0 -      -          6.49G 2022-01-17 04:49

n252483-c8f8299a230-b
  august/ROOT/n252483-c8f8299a230-b                         -      -          3.64M 2022-01-17 14:24
    august/ROOT/n253627-25375b1415f-d@2022-01-18-14:26:02-0 -      -          4.83G 2022-01-18 14:26

n252505-cc68614da82-a
  august/ROOT/n252505-cc68614da82-a                         -      -          3.20M 2022-01-18 14:26
    august/ROOT/n253627-25375b1415f-d@2022-01-19-16:22:12-0 -      -          4.90G 2022-01-19 16:22

n252531-0ce7909cd0b-h
  august/ROOT/n252531-0ce7909cd0b-h                         -      -          1.90M 2022-02-06 12:24
    august/ROOT/n253627-25375b1415f-d@2022-02-07-11:25:41-0 -      -          5.71G 2022-02-07 11:25

n252997-b6724f7004c-c
  august/ROOT/n252997-b6724f7004c-c                         -      -          1.87M 2022-02-11 23:07
    august/ROOT/n253627-25375b1415f-d@2022-02-12-17:19:08-0 -      -          6.17G 2022-02-12 17:19

n253116-39a36707bd3-e
  august/ROOT/n253116-39a36707bd3-e                         -      -          4.83M 2022-02-20 07:03
    august/ROOT/n253627-25375b1415f-d@2022-02-23-00:42:44-0 -      -          5.65G 2022-02-23 00:42

n253343-9835900cb95-c
  august/ROOT/n253343-9835900cb95-c                         -      -          2.89M 2022-02-27 14:58
    august/ROOT/n253627-25375b1415f-d@2022-03-05-15:47:28-0 -      -          1.54G 2022-03-05 15:47

n253627-25375b1415f-b
  august/ROOT/n253627-25375b1415f-b                         -      -          3.10M 2022-03-06 04:57
    august/ROOT/n253627-25375b1415f-d@2022-03-07-03:48:49-0 -      -          0     2022-03-07 03:48

n253627-25375b1415f-c
  august/ROOT/n253627-25375b1415f-c                         -      -          3.14M 2022-03-07 03:48
    august/ROOT/n253627-25375b1415f-d@2022-03-10-10:04:31-0 -      -          206M  2022-03-10 10:04

n253627-25375b1415f-d
  august/ROOT/n253627-25375b1415f-d                         NR     /          148G  2022-03-10 10:04
  n253627-25375b1415f-d@2021-07-10-04:31:39-0               -      -          13.8G 2021-07-10 04:31
  n253627-25375b1415f-d@2021-11-13-15:43:33-0               -      -          4.94G 2021-11-13 15:43
  n253627-25375b1415f-d@2021-11-14-00:24:29-0               -      -          4.93G 2021-11-14 00:24
  n253627-25375b1415f-d@2022-01-14-19:27:21-0               -      -          6.80G 2022-01-14 19:27
  n253627-25375b1415f-d@2022-01-17-04:49:36-0               -      -          6.49G 2022-01-17 04:49
  n253627-25375b1415f-d@2022-01-18-14:26:02-0               -      -          4.83G 2022-01-18 14:26
  n253627-25375b1415f-d@2022-01-19-16:22:12-0               -      -          4.90G 2022-01-19 16:22
  n253627-25375b1415f-d@2022-02-07-11:25:41-0               -      -          5.71G 2022-02-07 11:25
  n253627-25375b1415f-d@2022-02-12-17:19:08-0               -      -          6.17G 2022-02-12 17:19
  n253627-25375b1415f-d@2022-02-23-00:42:44-0               -      -          5.65G 2022-02-23 00:42
  n253627-25375b1415f-d@2022-03-05-15:47:28-0               -      -          1.54G 2022-03-05 15:47
  n253627-25375b1415f-d@2022-03-07-03:48:38-0               -      -          0     2022-03-07 03:48
  n253627-25375b1415f-d@2022-03-07-03:48:49-0               -      -          0     2022-03-07 03:48
  n253627-25375b1415f-d@2022-03-10-10:04:31-0               -      -          206M  2022-03-10 10:04
%
 
… I would not have created this topic ;)

I'm glad that you did, because it sort of helped me to solidify my thoughts, a little.

I still find much of it confusing, partly because of wording in the manual (the words "that" and "if" have quite different meanings) and consequently learning something wrong, taking so long to un-learn the wrongness.

Then I confused myself by copying the string of a boot environment into a zfs-destroy(8) command … now corrected in the post above.
 
… In general, before updating a FreeBSD system, I run these commands:
Code:
bectl create 20220312
bectl activate 20220312
shutdown -r now
# perform actual update (eg. freebsd-update or update from source)
shutdown -r now
Is this the way to go? Reboots are of course not always necessary - I'm more asking about the overall process involving the use of bectl(8).
I don't understand why to reboot.
You are by default on the default BE.
If the update fails, you can roll back by choosing to boot from the backed-up clone (in this case the BE 20220312), destroy the default BE and then rename the 20220312 BE to default.
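Sketched as commands (assuming the current BE is named default, as in a stock install):
Code:
# the update in 'default' went wrong; boot the backup BE
bectl activate 20220312
shutdown -r now
# now running from 20220312:
bectl destroy default
bectl rename 20220312 default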

bectl -r: I am not sure, I've never used it, but since a BE starts with a snapshot, I think your guess on "is it like a recursive snapshot" is correct. I have no idea what the implications are for a Boot Environment.
This makes me think about whether it is good to name BEs with dates that by themselves do not explain anything.
Maybe it would be better to name them like 13-RELEASE, 13-RELEASE-pX etc.
This would make it easier to understand how a tree of BEs with different versions works/is structured.
Say, 12, 13, 14 with all their variants and patchlevels.

Edit: I would absolutely like it if the loader were a bit more comfortable: instead of pressing 8 and cycling through a (possibly long) list of BEs with 2, just display a tree of the BEs so you can move the cursor to the desired BE and boot.
 
… if the loader would be a bit more comfortable, instead of pressing 8 and cycling …

If I recall correctly, in the past, environments were listed with PC-BSD.

<https://forums.freebsd.org/posts/535243> there's a 2017 shot of a list in the loader menu of OmniOS CE …

… and so on.

… Maybe it would be better to name them like 13-RELEASE, 13-RELEASE-pX etc. …

At any one patch level, any number of upgrades might occur. So you might have environment names such as:
  • 13.0-RELEASE-p7-a
  • 13.0-RELEASE-p7-b
  • 13.0-RELEASE-p7-x
  • 13.0-RELEASE-p7-y
  • 13.0-RELEASE-p7-z



The naming scheme that I chose incorporates part of the version level of the release of the operating system as reported by uname(1).

Code:
% uname -v
FreeBSD 14.0-CURRENT #5 main-n253627-25375b1415f-dirty: Sat Mar 5 14:21:40 GMT 2022     root@mowa219-gjp4-8570p-freebsd:/usr/obj/usr/src/amd64.amd64/sys/GENERIC-NODEBUG
%

I have upgraded software maybe four times since the first use of n253627-25375b1415f:

Code:
% bectl list -c creation | grep n253627-25375b1415f
n253627-25375b1415f-b -      -          3.10M 2022-03-06 04:57
n253627-25375b1415f-c -      -          233M  2022-03-07 03:48
n253627-25375b1415f-d -      -          204M  2022-03-10 10:04
n253627-25375b1415f-e NR     /          149G  2022-03-12 18:20
%
  • n253627-25375b1415f-a was probably destroyed a few days ago
  • now, I'll happily destroy -b and -c
  • eventually, some time after an OS update, I'll probably destroy the penultimate boot environment of the previous version of the OS, so I'm left with just one environment per outdated version of the OS as pictured above.



I don't understand why to reboot. …

For me, the habit was learnt with PC-BSD.

Sometimes, a reboot is required.
 
… This makes me think about whether it is good to name BEs with dates that by themselves do not explain anything.
Maybe it would be better to name them like 13-RELEASE, 13-RELEASE-pX etc. …
I get your point. In my case I'm updating from source (tracking stable/13). Therefore, the date is somewhat reasonable although one might argue that the commit hash would be more telling...
 
"bectl rename origBEName newBEName" lets you name them however you want. The timestamp is simply the default naming convention, so choose something that makes sense to you and stick with it.
 
Opened yesterday: {link removed}

Summary:

Discuss the standard type of layout, as well as the "deep" BE layout, and some of the properties of both. Point the various -r flags at this new section, to help users understand which they're working with and what the -r flag is actually doing. Note that we may just deprecate the -r flag in future versions, but the flag will be recognized as a NOP at that point.
 
… I think I heard that bectl destroy may wind up automatically doing the "-o" to give the same behavior as beadm destroy. …

I don't recall hearing (or reading) that anywhere.

I do vaguely recall sometimes reading about a future possibility of recursion by default (or words to that effect). These memories may have been confused by a lack of understanding.

The memories were before the manual page mentioned the future possibility of deprecating the various -r flags.

<{link removed}>

… Why would I not WANT the snapshots/clone (origin) deleted at the same time? …

… In what situation would one want to destroy the BE itself but not the underlying ZFS clone? …

If the environment is deep, and includes the home directories of all users, then destruction of snapshots of home directories may be premature.

I imagine other cases, but home directories are (to me) most obvious.



FreeBSD bug {link removed}.

Instead, for now, please see the Boot Environment Structures subsection of the manual page for the utility in FreeBSD 14.0-CURRENT:

<{link removed}>
 
If the environment is deep, and includes the home directories of all users, then destruction of snapshots of home directories may be premature.
A Boot Environment does not (should not?) include any home directories. To me, it would make no sense to. If BEs did contain home directories, then every time you rolled back you would change the users' view of their work, which is pretty much the definition of what a sysadmin does not want to do. Think of all the disk space that would get used if user home directories were included in a BE. How many BEs do you currently have on your system? How often do things in a home directory change on a daily basis?

My statement was strictly for Boot Environments, not snapshots/clones in general.

ETA:
Taking a look at the section of the manpage linked, it shows that /usr/home will not be part of a boot environment.
The example shown under "deep" is something that Michael Lucas talks about in his second ZFS book. Why is it important?
Applications like MySQL or other databases may have their default location under, say, /usr/local/mysql. In a shallow/traditional BE, that specific directory winds up in the BE, so as you roll a BE forward or back, you are also rolling the database. This is a bad thing.
The solution is either to move the db or simply to create a mountable dataset for the database, like zroot/usr/local/mysql. But in order to get the separation you need to create zroot/usr/local with canmount=off. That preserves the BE structure under /usr/local and moves the mysql dataset out of the BE, which is a good thing.
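A sketch of that layout (the pool name zroot and the mysql dataset are the examples from the paragraph above; on a stock install zroot/usr already exists with canmount=off):
Code:
# parent exists only to preserve the /usr/local path; it is never mounted,
# so the BE's own /usr/local directory stays in place
zfs create -o canmount=off zroot/usr/local
# the database dataset mounts at /usr/local/mysql, outside the BE,
# and so survives BE rollbacks
zfs create zroot/usr/local/mysql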
 