UFS vs ZFS

Encountered the same problem on a few different systems. I believe it has to do with devices at end of life getting into a power-up/shutdown loop, leaving the zpool process unresponsive and impossible to kill. The result is that zpool can no longer be used until the system reboots. Apparently it's not possible to return to a working state without rebooting the machine.
A bad SATA or SAS disk shouldn't be able to cause this. That doesn't happen with other filesystems either.
Currently trying out gstripe and gmirror. Less user-friendly and unified, but it looks promising.
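For reference, what one would normally try before the pool wedges completely, just to take the suspect disk out of rotation (the pool name tank and device ada3 are placeholders):

zpool status -x            # report only pools that have problems
zpool offline tank ada3    # stop issuing I/O to the suspect disk
zpool clear tank           # reset the pool's error counters afterwards

In this case even those stop responding, which is the frustrating part.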
I've not used gstripe but have had lots of positive experience with gmirror and gjournal. I found both to be very reliable. In my case I found the geom framework simpler and more straightforward to work with than ZFS, but I'm probably biased by starting with geom/UFS2 first and only later learning ZFS.
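For anyone curious, a minimal sketch of a two-disk gmirror with gjournal on top (the device names ada1/ada2 and the mount point are placeholders):

gmirror load
gmirror label -v gm0 /dev/ada1 /dev/ada2        # two-way mirror named gm0
gjournal load
gjournal label /dev/mirror/gm0                  # adds a /dev/mirror/gm0.journal provider
newfs -J /dev/mirror/gm0.journal                # UFS2 with journaling enabled
mount -o async /dev/mirror/gm0.journal /mnt

Plus geom_mirror_load="YES" and geom_journal_load="YES" in /boot/loader.conf so both survive a reboot.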
 
I would replace any and all faulty, end-of-life components ASAFP. In my experience, trying to work around faulty hardware in software always ends in tears. IMHO having ZFS fail loudly on flaky hardware is a feature, not a bug.

I had the exact opposite experience. I have a system that has both a gmirror boot volume, and a large ZFS array. The latter was much easier to set up.
 
Flaky hardware? Well, neither ZFS nor the kernel is going to tell me that without explaining the problem in detail. I think it has to do with incomplete or outdated technical specifications of controllers, intentional or not.
 
I would not trust any such hardware with my data. My college advisor had a poster on her wall that said "the squeaky wheel gets replaced." I've learned to live by that maxim.
 
Not buying that. A squeaky wheel is caused by friction from wear or poor maintenance; it doesn't require replacing the wheel without asking any questions.
This problem might exist because a reliable storage solution built on economically discarded hardware is a serious threat to the corporate storage market.
 
With various Linux distros in 2024, GRUB has frequently thrown some scary messages about sectors and writing data improperly or out of bounds (something like that); I've seen it on a few computers, with different filesystems, and even different drives. SMART tests were good, and neither FreeBSD (UFS and ZFS) nor Windows gave any indication of failure.

Software can apparently throw messages that don't reflect the actual hardware condition, I guess? (Or GRUB got impressively broken and everyone is just rolling with it :p) I basically like the idea of ZFS etc. being loud about any failures/issues so that I can investigate them.
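For completeness, SMART checks of that sort on FreeBSD are roughly this (smartmontools from packages; /dev/ada0 is a placeholder device):

pkg install smartmontools
smartctl -t short /dev/ada0    # queue a short self-test
smartctl -a /dev/ada0          # full report: self-test log, reallocated and pending sectors, etc.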
 
I use ZFS on Solaris, which is pretty neat for growing, replacing, and updating the OS, and has other advantages for ASM and databases, but on FreeBSD it was a failure. I had power problems at home several times with my desktop computer, and lost files all the time. Mostly because of the in-memory caching and COW thing. I do not recommend ZFS for personal use, it's pointless, but for production servers, go for it.
 
That's unfortunate. I have two desktop machines with FreeBSD at home, both on ZFS, and they have never lost a single file. Well, maybe it's because I have a good UPS from APC? I use ZFS everywhere, even on my personal laptops, and I swear by it. And yes, I give the entire disk to ZFS from the get-go.
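Giving the whole disk to ZFS is just a matter of pointing zpool create at the raw devices; a minimal sketch with placeholder pool and device names:

zpool create -O compression=lz4 tank mirror ada1 ada2
zpool status tank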
 
Well, now that this thread got revived... (note: I have no idea if I previously commented) ...

If you ask me (= a former SunOS fanboy), then (Open!)ZFS is simply superior by design. Back in the day (before 2022) there was a consensus that using UFS raised fewer overhead concerns because you're in full control of everything. You don't "just" stripe, you don't "just" fsck, stuff like that.

But these days?

At the time of writing I have 2 FreeBSD virtual machines running under Hyper-V (Windows 11): my 14.2 'soon to be upgraded' setup runs on ZFS, whereas my 13.x uses UFS. Thing is... I can't really notice the overhead which I used to be able to spot. Yeah, and that's while 13.x runs Wayland & KDE whereas my 14+ is console only.

Fair comment => I can't fully rule out mistakes on my end because... I only started using Hyper-V right after I decided that I wanted to dive back into FreeBSD, so two weeks ago ;) However... PowerShell is pretty precise with its stats.

My point being: a lot has changed over time, which should be taken into account as well. But these days? I don't want to work without ZFS anymore if I can help it. Even though I love working with UFS as well.
 
I use ZFS on Solaris, which is pretty neat for growing, replacing, and updating the OS, and has other advantages for ASM and databases, but on FreeBSD it was a failure. I had power problems at home several times with my desktop computer, and lost files all the time. Mostly because of the in-memory caching and COW thing. I do not recommend ZFS for personal use, it's pointless, but for production servers, go for it.

ZFS commits things to disk in such a way that a sudden power-off should leave the filesystem intact. Of course you can lose files that are being written at that moment.

Are you saying that you lost other files (not being written at the time of the power fail)? If so, can you describe the hardware in play?
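After a power loss I'd also run a scrub so ZFS re-verifies every block against its checksums (the pool name tank is a placeholder):

zpool scrub tank
zpool status -v tank    # shows scrub progress and lists any files with unrecoverable errors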
 
I had power problems at home several times with my desktop computer, and lost files all the time. Mostly because of the in-memory caching and COW thing. I do not recommend ZFS for personal use, it's pointless, but for production servers, go for it.
I have been running a FreeBSD ZFS server at home for about 8.5 years.

Three of my original five WD Reds (CMR) have died along the way. Two are still running (now the slowcoaches in a 4-mirror stripe).

For most of the time the server lived in a farm house where the temperatures ranged from 0C to 40C, and the power was extremely unreliable. The server crashed pretty often.

I eventually added a UPS, some enterprise class SSDs, some enterprise class spinning disks, redundant SAS/SATA controllers, a better motherboard, and a case with superior cooling. Things got more settled.

I don't wish to tempt fate, but I have never lost any data.

Snapshots, headroom aggregation, RAID expansion, data integrity features, easy off-site backups with zfs-send, ...
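For the curious, the routine parts of that are short. Replacing a dead disk and taking the off-site backup look roughly like this (pool, device, host, and snapshot names are all placeholders):

zpool replace tank ada2 ada6          # swap the dead disk, then watch the resilver in zpool status
zfs snapshot -r tank@2025-06-01       # recursive snapshot of the whole pool
zfs send -R tank@2025-06-01 | ssh backuphost zfs receive -du backup
zfs send -R -i tank@2025-05-01 tank@2025-06-01 | ssh backuphost zfs receive -du backup   # later runs are incremental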
 
I use ZFS on Solaris, which is pretty neat for growing, replacing, and updating the OS, and has other advantages for ASM and databases, but on FreeBSD it was a failure. I had power problems at home several times with my desktop computer, and lost files all the time. Mostly because of the in-memory caching and COW thing. I do not recommend ZFS for personal use, it's pointless, but for production servers, go for it.
I have a backup server at home with 4x4GB disks in RAID1+0 and I haven't lost any data yet. All my systems use ZFS too; my laptops and desktops all run ZFS. :/ Perhaps cascading disk failures occurred on your system.
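That layout (a stripe of mirrors) is a one-liner at pool creation time; a sketch with placeholder names:

zpool create backup mirror da0 da1 mirror da2 da3
zpool status backup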
 