Am I correct in assuming you partitioned the USB drive and dd'd the image into a partition that the BIOS/UEFI supports booting from, instead of writing the dd output to the drive directly?
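To illustrate the distinction being asked about (the device name and the image file name are only assumptions for the example):

```
# write the image over the whole device, wiping any existing partition table
dd if=/tmp/FreeBSD-14.1-RELEASE-amd64-memstick.img of=/dev/da0 bs=1m

# vs. write the image into a single partition, leaving the rest of the stick untouched
dd if=/tmp/FreeBSD-14.1-RELEASE-amd64-memstick.img of=/dev/da0p1 bs=1m
```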
Right now there is only a cron job that runs an sh script once a day with this routine: “check for a new FreeBSD release -> if a new release exists, download it to /tmp -> rewrite the whole USB pen drive with dd from the downloaded image/archive”.
So I need to extend the procedure (rewrite the sh/bash script or Ansible role and set up some toolset for writing ZFS snapshots) to do the following (rough sketch at the end of this post):
- once a day, at a certain time, check for a new FreeBSD release and renew only the boot partition on the USB pen drive (not erasing the whole pen drive, because a ZFS snapshot of the current system may already exist on the second partition);
- once a day, at a certain time, write a ZFS snapshot of the whole current FreeBSD system with as little impact as possible on the server's overall performance (consuming as little CPU, memory, interrupts, system calls, bus bandwidth, etc. as possible).
In this particular use case it does not matter how long the whole operation takes, within a reasonable time frame: 10 min, 30 min, 1 h, 2 h.
MUCH MORE IMPORTANT:
- get a guaranteed, successful result;
- create the minimal possible impact on the system's load.
Of course, a useful addition to this procedure would be to prevent other tasks from interrupting it (a shutdown or reboot triggered by a SysAdmin, an Ansible role, manual intervention, etc.), but that is another step.
I will return to it a bit later.
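Here is a rough sketch of how the extended daily script could look. Everything in it is an assumption for illustration only: the stick shows up as /dev/da0 with the boot partition at da0p1, the system pool is zroot, the pool on the stick's second partition is called usbbackup, and yesterday's snapshot is assumed to exist as the incremental source.

```
#!/bin/sh
# Sketch only, not a tested script.

IMG="/tmp/FreeBSD-14.1-RELEASE-amd64-memstick.img"   # image the existing cron job downloads
BOOTPART="/dev/da0p1"                                # assumed boot partition on the stick
USBPOOL="usbbackup"                                  # assumed pool on the stick's 2nd partition
TODAY=$(date +%Y-%m-%d)
PREV=$(date -v-1d +%Y-%m-%d)                         # yesterday's snapshot name (FreeBSD date syntax)

# 1) refresh only the boot partition, leaving the rest of the stick intact
[ -f "$IMG" ] && dd if="$IMG" of="$BOOTPART" bs=1m

# 2) snapshot the whole system pool and send it incrementally to the stick;
#    the cron entry that starts this script can wrap it in idprio/nice to
#    keep the impact on the running server low
zfs snapshot -r "zroot@$TODAY"
zfs send -R -i "zroot@$PREV" "zroot@$TODAY" | zfs receive -Fdu "$USBPOOL"
```

The receive flags are a design choice: -u keeps the received datasets unmounted so they do not shadow the live system, and -F rolls the pool on the stick back to the last common snapshot before receiving.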
The 2.0 just means the drive is likely slower to read/write. Depending on the drive it may be fine, or it may have low durability when regular writes are happening. The slower writes may also help slow down the reads from the main pool, which you said was under heavy I/O already.
Agreed. This is another benefit of using a USB 2.0-only pen drive.
Whatever tool you use, you will want to be doing incremental transfers to minimize the I/O.
You need to decide whether you want to store the snapshot as a file within a filesystem on the drive or receive it into a pool on the drive. Storing it as a file opens up better compression ratios by piping it through non-ZFS compression, which spends CPU/RAM to lower the writes and the space used on the USB drive, but you will not be able to work with its contents until it is restored to a ZFS pool. Alternatively, ZFS compression can be used so that the data remains directly readable while still taking less space on the drive. The percentage of space saved is approximately equivalent to having a drive that is that percentage faster, unless you are bottlenecked by CPU on the compress or decompress step.
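A minimal sketch of the two options, assuming an existing pool named usbbackup on the stick and made-up snapshot names; zstd(1) stands in here for "non-ZFS compression" and could be any compressor:

```
# Option A: store the incremental stream as a compressed file on an ordinary
# filesystem mounted on the stick; cheapest on writes, but the data can only
# be browsed after a later `zfs receive`.
zfs send -i zroot@prev zroot@today | zstd -3 > /mnt/usb/zroot-today.zfs.zst

# Option B: receive into a pool on the stick with ZFS compression enabled;
# the snapshot contents stay directly readable from the stick.
zfs set compression=lz4 usbbackup
zfs send -i zroot@prev zroot@today | zfs receive -du usbbackup
```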
Thank you for the explanation!
Of course, from a SysAdmin's and hardware engineer's point of view
it is much better to have direct access to the ZFS snapshot data without needing to restore it to a ZFS pool first.
If I understand you correctly, in this case no separate compression step is used either, so we
also have much less impact on the server's resources (even though we already have AES-NI in the CPU/chipset or an Intel QAT card in PCIe).
I need to point out the importance of keeping ~20% of the whole USB pen drive free:
when ~20% of the total capacity is free, write speed does not degrade.
This is true for most USB pen drives, no matter which standard (2.0/3.0/3.1/3.2) is used, because of how this type of drive works internally.
So, the TOTAL USB DRIVE SIZE must be >= [BOOT PARTITION SIZE] + [PARTITION FOR ZFS SNAPSHOT] + [20% OF THE SUM OF THE ABOVE TWO].
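Plugging some made-up partition sizes into that rule, just to show the arithmetic:

```
# hypothetical layout: 8 GB boot partition + 40 GB snapshot partition
#   headroom: 20% of (8 + 40) = 9.6 GB
#   minimum stick size: 8 + 40 + 9.6 = 57.6 GB -> a 64 GB stick is the smallest that fits
echo $(( (8 + 40) * 120 / 100 ))   # prints 57 (GB, integer arithmetic)
```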
For now, 64/128 GB would be a perfect choice in terms of the size of the distribution and room for the system to grow.
And maybe no compression is needed at all. Am I wrong about this?
I've only managed ZFS backups manually so far. sysutils/zfsnap(2) are /bin/sh scripts that manage creation and deletion of snapshots, but you are on your own to then transfer them. sysutils/zap is an sh script that also manages the send/recv steps; I'm not sure if it has options, or requires modifications, to control compression. Deciding when to run the script based on load is likely left up to you.
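One way to handle the "when to run it" part with base tools only (the script path and lock file below are made up): let cron start the job at a quiet hour, run it at idle CPU priority with idprio(1), and wrap it in lockf(1) so a still-running previous transfer is never overlapped.

```
# hypothetical root crontab entry
30 3 * * * /usr/sbin/idprio 31 /usr/bin/lockf -t 0 /var/run/usb-backup.lock /usr/local/sbin/usb-backup.sh
```

lockf -t 0 gives up immediately if the lock is still held, so an overlapping run is skipped rather than queued.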
The overall load is constant, 24/7/365.
Unless the system's use is never going to grow, you should be considering routes to reduce the heavy I/O if it is bottlenecking system operations: more RAM, faster disks, downtime to restore from backup if the layout has become bad/slow performing, possibly adding caching or other device types to the array for speed, and reviewing various ZFS parameters, including whether turning up compression would have a positive impact instead of a negative one.
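A few read-only checks that can help locate the heavy I/O before changing anything (the pool name zroot is just an assumption):

```
zfs get compression,compressratio zroot   # is compression already enabled / paying off?
zpool iostat -v 5                         # per-vdev load, sampled every 5 seconds
top -m io                                 # which processes are generating the I/O
```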
Thank you for the detailed suggestions!
I will try to make all the measurements:
- on a system without any load at all;
- on a real production server under real load, when all the parameters you mention are optimized for production use.
And only after that will I “try to play” with each of these factors.