Solved: question about zfs snapshot in FreeBSD 14.2

Dear all:
1. I have run the below command as root:
zfs snapshot zroot/ROOT/default@2025xxx
I think this command only backs up my FreeBSD 14.2 system, not any other content, right? Thanks.
 
Yes, and no... it doesn't actually back anything up; it merely freezes the state of your filesystem, and any upcoming changes are recorded relative to that frozen state. Try zfs list -rt all zroot/ROOT and you'll see what I mean.

The reason why I don't consider this a 'real' backup is that it doesn't provide full data protection. If the file system got damaged somehow, you'd lose access to both the file system itself and the snapshot(s), so you'd essentially lose access to all your data.

That's not to say it doesn't provide any protection at all, of course... if files get (accidentally) deleted then you can always retrieve them from the snapshot (see the hidden .zfs folder at the dataset's mountpoint).

Hope this can clear stuff up a bit.
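For example, a minimal sketch of pulling a single deleted file back out of a snapshot. This assumes zroot/ROOT/default is mounted at /, and the snapshot name and file path are made up for the example:
Code:
# Snapshot contents are exposed read-only under the hidden .zfs directory
# at the dataset's mountpoint (assumption: zroot/ROOT/default is mounted at /).
cp /.zfs/snapshot/2025xxx/etc/rc.conf /etc/rc.conf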
 
Dear shelLuser:
Thanks for your help; I'm a bit clearer about zfs snapshots now. Could you tell me how to make a full backup of my system to somewhere else? What is the best way? Thanks.
 
No idea if this is the best way, but I use sysutils/zfsnap and sysutils/zxfer as described by SixFeetUp, sending backups to another FreeBSD system. I keep the 2nd system off-line except when I'm doing the zxfer, so that bit is done by hand rather than a cron job.
Dear putney:
A snapshot only keeps part of the files, so if the system crashes the snapshot will not work and you can't find any data. Right?
 
I'm not sure I understand.

A ZFS snapshot preserves the whole dataset - all of its files, all of each of those files. In turn, zfsnap snapshots all the system's datasets, so even if a h/w failure entirely destroyed the system, the latest snapshots that you will have sent to the 2nd machine will have everything from the state of the system at the time those snapshots were taken. If you don't have a 2nd machine, you can always do the zxfer to USB disk(s).
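If it helps, here is a minimal sketch of that last step using plain zfs send/receive rather than zxfer; the pool name usbbackup and the snapshot name are made up for the example:
Code:
# Assumes a pool named usbbackup already exists on the USB disk.
zfs snapshot -r zroot@offsite1
# -R replicates all descendant datasets and their snapshots;
# -F rolls back the target if needed, -u keeps the received datasets unmounted.
zfs send -R zroot@offsite1 | zfs receive -Fu usbbackup/zroot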
 
Could you tell me how to make a full backup of my system to somewhere else? What is the best way? Thanks.
"Best" really is in the eye of the beholder so to speak; it depends on your needs and resources.

Generally speaking a backup should always try to get data "off site"; so away from the server. A good way to do this is using zfs-send(8) (and receive): this can send a whole ZFS filesystem (or snapshot!) into a datastream which you could then store somewhere else. The only caveat is that you can only do a full restoration, so nothing on a per-file basis (of course you could restore the data into another location, then grab individual files from there).
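As a minimal sketch of that idea (the paths and the restore dataset name are placeholders, not a recommendation):
Code:
# Dump the snapshot into a single stream file on some other storage.
zfs send zroot/ROOT/default@2025xxx > /mnt/offsite/default-2025xxx.zfs
# Later: restore the whole stream into a new dataset (it must not exist yet),
# then grab individual files from that dataset if needed.
zfs receive zroot/restore < /mnt/offsite/default-2025xxx.zfs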
 
Dear putney:
Thanks for your reply. Could you tell me whether a snapshot is just like a copy? When I take a snapshot of some datasets, is that just like copying all of the datasets' data? Thanks.
 
"Best" really is in the eye of the beholder so to speak; it depends on your needs and resources.

Generally speaking a backup should always try to get data "off site"; so away from the server. A good way to do this is using zfs-send(8) (and receive): this can send a whole ZFS filesystem (or snapshot!) into a datastream which you could then store somewhere else. The only caveat is that you can only do a full restoration, so nothing on a per-file basis (of course you could restore the data into another location, then grab individual files from there).
Dear shelLuser:
Thanks for your help. When I used zfs send with ssh, I ran into a problem: ssh can't log in as root, so zfs receive can't finish that step. What do you do at that stage?
1. Machine A uses a normal user to ssh into machine B.
2. On machine A:
zfs send zroot/ROOT/default@2024y6m19d | ssh normaluser@192.168.122.1 \
zfs receive systempool/backup2

This command fails. If I don't want to allow the root user to log in over ssh on machine B, how do I send a snapshot from A to B? Thanks.
 
You haven't specified a user that has been delegated the needed ZFS administration permissions to ssh in as. Example: ssh fff2024g@192.168.200.3
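For example, a minimal sketch of what the delegation on the receiving machine might look like (the user name fff2024g and the dataset names are placeholders):
Code:
# On the receiving machine, as root: delegate the permissions an unprivileged
# user needs to receive a stream under systempool/backup2.  Depending on what
# the stream carries, further property permissions may be needed (zfs-allow(8)).
zfs allow -u fff2024g receive,create,mount systempool/backup2

# Then, from the sender, pipe the stream over ssh as that user
# (-u keeps the received dataset from being mounted).
zfs send zroot/ROOT/default@2024y6m19d | \
    ssh fff2024g@192.168.200.3 zfs receive -u systempool/backup2/default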
Dear T-Daemon:
I have found the reason. We need to do the following:
2. On the sender machine:
# zfs allow -u someuser send,snapshot,hold systempool/xxx/xxx

The sender must also have the hold permission; then it works. But see the FreeBSD Handbook, chapter 22.4.7.2, "Sending Encrypted Backups over SSH", under "On the receiving system:"; the guide doesn't show us this. That is a bug. Thanks.
 
Dear all:
I have finished the zfs send through ssh and zfs recv... but one question: why can't I see any progress status?
% zfs snapshot -r mypool/home@monday
% zfs send -R mypool/home@monday | ssh someuser@backuphost zfs recv -dvu recvpool/backup

Using -v shows more details about the transfer, including the elapsed time and the amount of data transferred.
I have used -v, but nothing shows. How do I display the progress status? Someone suggested using the pv tool. What do you use? Thanks.
 
You can get a progress report showing percentage completion. I have not tried cstream(1), but I got the example of using sysutils/pv from FreeBSD installation scripts. Here is an edited excerpt of how I create my offsite backups (and monitor percentage of completion):
Code:
TAB=$(echo | tr '\n' '\t')    # a literal tab character for the sed pattern below
zfs snapshot -r tank@replica
# Dry-run send (-n) with parsable output (-P) to get the estimated stream size.
size=$(zfs send -nP -R tank@replica | grep "^size" | sed -e "s/size[ $TAB]*//")
# Feed the estimate to pv so it can display percentage, rate, and ETA.
zfs send -R tank@replica | pv -s $size -ptebarT | ... | zfs receive -Fdu ...
Edit: I see that there is a name space clash with pv(1) and sysutils/pv. I'm referring to the latter, which installs from the package pv-1.9.31.
 
Dear all:
A new question has come up...
When I zfs send a snapshot to the remote machine with the command "zfs send -vR systempool/usr/src@2025y4m14d | ssh normaluser@192.168.122.90 zfs recv -Fdu zroot/backup2025", the job finishes.
1. My systempool/usr/src@2025y4m14d shows USED 0B and REFER 853M. What is the real size of my snapshot?
zfs list -t snapshot
NAME                            USED  AVAIL  REFER  MOUNTPOINT
systempool/usr/src@2025y4m14d     0B      -   853M  -

2. When I transfer this snapshot to the remote machine, it says it needs to transfer 1.4 GB. Why does it need to transfer 1.4 GB? What content is added during this transfer process?
zfs send -vR systempool/usr/src@2025y4m14d | ssh zhou@192.168.122.90 zfs recv -Fdu zroot/backup2025
full send of systempool/usr/src@2025y4m14d estimated size is 1.40G
total estimated size is 1.40G
TIME SENT SNAPSHOT systempool/usr/src@2025y4m14d
(zhou@192.168.122.90) Password for zhou@zfssend:
11:57:59   64.6K   systempool/usr/src@2025y4m14d

11:58:00 144M systempool/usr/src@2025y4m14d
11:58:01 308M systempool/usr/src@2025y4m14d
11:58:02 447M systempool/usr/src@2025y4m14d
11:58:03 576M systempool/usr/src@2025y4m14d
11:58:04 753M systempool/usr/src@2025y4m14d
11:58:05 892M systempool/usr/src@2025y4m14d
11:58:06 1.04G systempool/usr/src@2025y4m14d
11:58:07 1.17G systempool/usr/src@2025y4m14d
11:58:08 1.33G systempool/usr/src@2025y4m14d
11:58:09 1.44G systempool/usr/src@2025y4m14d
cannot receive snapdir property on zroot/backup2025/usr/src: permission denied   ### note: what happened here? Please let me know.

3. On the remote machine I find the information below:
NAME                       USED  AVAIL  REFER  MOUNTPOINT
zroot/backup2025          2.00G  4.49G   384K  /zroot/backup2025
zroot/backup2025/usr      2.00G  4.49G   384K  /zroot/backup2025/usr
zroot/backup2025/usr/src  2.00G  4.49G  2.00G  /zroot/backup2025/usr/src

It looks like the snapshot has been mounted, but why does it use 2.0 GB?
When I cd to /zroot/backup2025/usr/.zfs/snapshot/:
root@zfssend:/zroot/backup2025/usr/.zfs/snapshot # zfs list -t snapshot
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
zroot/backup2025/usr/src@2025y4m14d     0B      -  2.00G  -
root@zfssend:/zroot/backup2025/usr/.zfs/snapshot # ls
root@zfssend:/zroot/backup2025/usr/.zfs/snapshot #

I can't find anything there. Please let me know where my files are. Thanks, and have a good day.
 
1. My systempool/usr/src@2025y4m14d shows USED 0B and REFER 853M. What is the real size of my snapshot?
zfs list -t snapshot
NAME                            USED  AVAIL  REFER  MOUNTPOINT
systempool/usr/src@2025y4m14d     0B      -   853M  -
If this is a default menu-guided Root-on-ZFS installation, then the file system has the compression=lz4 dataset property enabled; check with zfs get compress (see zfsprops(7)).

To display compressed and apparent uncompressed size of the snapshot:

compressed
Code:
# du -sh /usr/src/.zfs/snapshot/2025y4m14d

uncompressed
Code:
# du -shA /usr/src/.zfs/snapshot/2025y4m14d

Example:
Code:
% du -sh /usr/src-main/.zfs/snapshot/250413
2.7G    /usr/src-main/.zfs/snapshot/250413

% du -shA /usr/src-main/.zfs/snapshot/250413
3.3G    /usr/src-main/.zfs/snapshot/250413



2. When I transfer this snapshot to the remote machine, it says it needs to transfer 1.4 GB. Why does it need to transfer 1.4 GB? What content is added during this transfer process?
The value of 1.4 GB results from the data stream being sent decompressed from a compressed file system. This is the default behavior when it is not overridden.

zfs-send(8)
Code:
       -c, --compressed
           Generate a more compact stream by using compressed WRITE records
           for blocks which are compressed on disk and in memory (see the
           compression property for details).  If the lz4_compress feature is
           active on the sending system, then the receiving system must have
           that feature enabled as well.  If the large_blocks feature is
           enabled on the sending system but the -L option is not supplied in
           conjunction with -c, then the data will be decompressed before
           sending so it can be split into smaller block sizes.  Streams sent
           with -c will not have their data recompressed on the receiver side
           using -o compress=value.  The data will stay compressed as it was
           from the sender.  The new compression property will be set for
           future data.
Code:
% zpool get feature@large_blocks
NAME        PROPERTY              VALUE                 SOURCE
zroot       feature@large_blocks  enabled               local

Use the -c option to send a compressed data stream.
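For example, a sketch based on the command from earlier in the thread:
Code:
# -c keeps the blocks compressed in the stream, so the estimated and
# transferred sizes should be closer to the 853M the snapshot refers to.
zfs send -vcR systempool/usr/src@2025y4m14d | \
    ssh zhou@192.168.122.90 zfs recv -Fdu zroot/backup2025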


cannot receive snapdir property on zroot/backup2025/usr/src: permission denied   ### note: what happened here? Please let me know.
I suspect the snapdir property (zfsprops(7)) from the sending system can't be received because the user on the receiving system who holds the zfs-allow(8) administration permissions to receive the data stream on the zroot/backup2025 dataset doesn't have the "snapdir" permission.
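For example, a minimal sketch of adding that permission on the receiving machine, assuming zhou is the delegated user there (as in your transfer command):
Code:
# On the receiving machine, as root: allow the delegated user to set the
# snapdir property that arrives with the -R stream.
zfs allow -u zhou snapdir zroot/backup2025
# Show what is currently delegated on the dataset:
zfs allow zroot/backup2025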



3. On the remote machine I find the information below:
NAME                       USED  AVAIL  REFER  MOUNTPOINT
zroot/backup2025          2.00G  4.49G   384K  /zroot/backup2025
zroot/backup2025/usr      2.00G  4.49G   384K  /zroot/backup2025/usr
zroot/backup2025/usr/src  2.00G  4.49G  2.00G  /zroot/backup2025/usr/src

It looks like the snapshot has been mounted, but why does it use 2.0 GB?
What does zfs list systempool/usr/src on the sending system show?
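As a quick comparison you could run as root on both machines (dataset names as in your output):
Code:
# On the sender:
zfs get used,referenced,compression,compressratio systempool/usr/src
# On the receiver:
zfs get used,referenced,compression,compressratio zroot/backup2025/usr/src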
 