Why use ZFS when UFS will do?

As mer pointed out in post #7, ZFS saves a LOT of headaches and planning. With UFS (and just about any other file system), you have to plan ahead, do some math, make a firm decision at the start of the installation process, and stick with it. With ZFS, you can fine-tune a lot of details any time after installation and adjust that configuration to fit the situation at hand. That flexibility, the very fact that such a robust feature set is available on FreeBSD, and how well it's integrated into FreeBSD - that was my tipping point for sticking with FreeBSD and leaving Linux behind. With UFS, if I want to change something later, I can't make the change on a running system. Either I'm stuck with the decisions I made when installing FreeBSD, or I'm faced with redoing the whole installation from the ground up. Using ZFS means there's one less reason to reinstall the whole system.
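As a concrete illustration of that flexibility (the dataset name zroot/usr/home below is just a placeholder, not a recommendation), ZFS lets you change things like compression and quotas on a live, mounted dataset with no reformat and no reboot:

  # enable compression on an existing, mounted dataset
  zfs set compression=lz4 zroot/usr/home
  # put a 50 GB cap on it, and lift the cap again later if it gets in the way
  zfs set quota=50G zroot/usr/home
  zfs set quota=none zroot/usr/home

With UFS, the closest equivalents generally mean re-partitioning or re-creating the filesystem.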

I have a desktop FreeBSD 14.2 host with 16 GB of RAM, and recently switched to ZFS.
My windoze-10 VM guest needs 8 GB of RAM. This all worked reliably under UFS.
Now the VM keeps pausing. It seems this failure is caused by ZFS using up all the memory.
Is changing back to the UFS filesystem the best solution?
If you can afford it, I'd recommend upgrading the RAM, maybe even maxing out what your mobo can support. I have a laptop with 16 GB of DDR5 RAM, and VirtualBox was kind of slow on that hardware. 10th-gen Intel processors have been reported to be rather slow under FreeBSD, too.
 
Thanks for the replies.
Before reverting to UFS, I'll have a go at some ZFS tuning as suggested (along the lines sketched below), and hope for a positive outcome.
As for RAM, I'm not rushing to buy outdated DDR3 memory modules for a desktop that is perilously close to its use-by date (BIOS dated 2014).
So I'm likely stuck with 16 GB for a while.
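The usual first knob for that tuning is capping the ARC so it leaves room for the VM. A minimal sketch, assuming a 6 GiB cap is what you want (the value is only an example; vfs.zfs.arc_max should also be adjustable at runtime on FreeBSD 14, but verify on your system):

  # cap the ARC at roughly 6 GiB (value in bytes)
  sysctl vfs.zfs.arc_max=6442450944
  # make the cap persistent across reboots
  echo 'vfs.zfs.arc_max="6442450944"' >> /boot/loader.conf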
 
Just to note that these DDR3 modules are pretty affordable on eBay these days.

Also, if you have a free SATA interface, I'd advise adding an L2ARC cache to the pool in the form of a small SSD. This can improve read speed (maybe not for all use cases, but it's worth a try).
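If anyone wants to try that, attaching a cache device is a one-liner. A minimal sketch, assuming the pool is called tank and the spare SSD shows up as ada2 (both names are placeholders; double-check the device node before running this):

  # add the spare SSD as an L2ARC (cache) device to an existing pool
  zpool add tank cache ada2
  # confirm it shows up under the "cache" section
  zpool status tank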
 
Yes, I do have spare small SSDs, and an unused SATA3 interface.
BUT as the whole ZFS pool is already on a larger SSD, can L2ARC on an SSD bring any benefit?
The L2ARC cache and the main storage would be operating at the same speed, just adding unnecessary overhead to RAM.
Or am I missing something fundamental here?
 
My opinions, based on what I've learned/read about. Others may have different opinions.

Depending on the workload for the specific zpool, L2ARC may help.
On a root dataset/zpool? Probably not.
On a dataset/zpool that holds data files like audio or video that are primarily READ access? Good chance it will help.
ZFS ARC is in memory and has a max size it can grow to.
Servicing a read request from memory is quicker than servicing it from the physical device.
Items/blocks that get read over and over (think multiple users streaming 101 Dalmatians for the grandkids) wind up in ARC; if they age out of ARC, they fall to the L2ARC device.
Now, is servicing from the L2ARC device faster than servicing from the original physical device if they have the same read speed? Maybe, maybe not. I think in general it's more "maybe yes", because the L2ARC device is dedicated to one use, whereas the zpool/dataset will likely serve more than one use.

That's where the package zfs-stats comes into play. Use it to figure out your current workload/ARC efficiency. If your efficiency is low, that means most read requests are going back to the physical device; in that case I think L2ARC would help.
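Roughly how that check looks in practice (zfs-stats comes from the sysutils/zfs-stats port; the exact flags may differ between versions, so see its man page):

  # install the reporting tool
  pkg install zfs-stats
  # print ARC statistics, including hit/miss ratios
  zfs-stats -a
  # the raw counters are also available directly
  sysctl kstat.zfs.misc.arcstats | grep -E 'hits|misses'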
 
Sometimes L2ARC has a benefit even when the main drive is an SSD. It is hard to measure the exact performance, but just thinking about it - there is an extra interface involved, and that may introduce some additional parallelism.

Yes, according to the manuals, the L2ARC needs some extra RAM for its indexes. In the end it is all about experimenting, and it probably also depends on the actual hardware.

Personally I have noticed a speed gain even when the main pool is on an SSD. Just a wild guess - maybe in some cases the system can read data faster from the L2ARC than from the main pool on the SSD. You can give it a try, and if it doesn't help, just remove the L2ARC drive. This is completely safe.
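In other words, the experiment is cheap to undo. Assuming the same placeholder names as above (pool tank, cache device ada2):

  # watch whether the cache device is actually serving reads
  zpool iostat -v tank 5
  # if it isn't helping, detach it; the data in the pool is untouched
  zpool remove tank ada2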
 
Agree that whether the L2ARC beats the main device is a maybe, and real answers come out of actual experimentation. Also, SSDs are not all equal.
 
A subjective opinion, based on reading many threads dedicated to ZFS.
1. You need more than one or two disks, for redundancy and for building an array that can cope with unforeseen situations.
2. The disks should be fast and, if you use encryption, should handle it quickly. They should not be from the "for home use" series.
3. A lot of RAM. If you run virtual machines, then 32 GB and up, and of very high quality!
4. A certified, high-quality power supply.
5. High-quality cooling for all of this.
6. And how much electricity will such a tower consume... do you have money to spare?
Otherwise you are simply using ZFS like a hamster, at your own risk.
If you reread the threads on this forum about how many problems arose from a power surge, or from disk degradation and the inability to "simply, quickly and easily" extract IMPORTANT data from a crumbling array, then I would think hard about whether it's worth it...
 
I have a desktop FreeBSD 14.2 host with 16 GB of RAM, and recently switched to ZFS.
My windoze-10 VM guest needs 8 GB of RAM. This all worked reliably under UFS.
Now the VM keeps pausing. It seems this failure is caused by ZFS using up all the memory.
Is changing back to the UFS filesystem the best solution? (All the memory slots are full.)
This humble desktop daily-driver probably doesn't need all the nice fancy features of ZFS.
Thanks for any tips or clues.

Do you have a top(1) output of the state where you run into problems?
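In case it helps, something along these lines captures the relevant state, since top(1) on FreeBSD shows the ARC and Wired memory in its header:

  # one batch-mode snapshot, sorted by resident memory
  top -b -o res | head -n 20
  # current ARC size in bytes
  sysctl kstat.zfs.misc.arcstats.size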
 