UFS: Why I am using UFS in 2021.

Please RTFM ...
Radically Too Far Mentioned
In relation to:

I'm not so sure
Rewards Taken From Manual
I distance myself from this statement:

Maybe subscribe to <freebsd-fs> and/or <freebsd-hackers> & kindly ask for the wizards' hints on this topic?
Reasoning Taken Fustian Magniloquence
To be taken seriously.

I have only used ZFS once, briefly, and it seemed far more trouble than it was worth. I've got years of experience using UFS, have had countless power outages and a laptop slammed to the floor while running, and never lost a file or had a problem during that time.

Good thing for me I didn't have to RTFM:
jitte@obake:/ $ man ufs
No manual entry for ufs


A MAN() does not a Sorcerer make.
 
That's not true. Please RTFM the -i, -h, -m flags of restore(8). You can restore single files from a UFS dump, and if the dump is saved on a random-access device, it can be done quickly. Of course, with a sequential-access medium it'll take more than seconds. Naturally, that also applies to ZFS streams on a tape.
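For example, a minimal sketch of pulling one file back with restore(8)'s interactive mode (the dump file path and the file name are made up here):

# restore -i -f /backup/home.dump
restore > cd some/dir
restore > add lostfile.txt
restore > extract
restore > quit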
Apologies; I wasn’t clear enough; I should have said “dump on tape” not “dump/tape” and more explicitly focused on the time it takes.

Having restored directories from incremental backups over a set of tapes in the past, it’s not.... fast. (Which is absolutely fine for a secondary backup; tapes have a place in the backup environment, to be sure.)

Thanks for the insights; I was trying to provide a similar insight, for example regarding ZFS's stance on forward-compatible stream formats.
 
This is not true. Search the internet for HDD sector ECC, URE, and RAID.

Example message from a hardware RAID's periodic surface scan (the equivalent of what you call ZFS file-system integrity checking, zpool scrub):
I don't think all drives have ECC. There's a reason that /proc statistic for mismatched blocks exists. Even if the drives support ECC, there's still no guarantee that it's infallible.

RAID 1, by definition, is a mirror. If you have two copies of the data and they differ, how do you know which one is correct? With certain drives maybe it's a non-issue, but with a software-level checksum you at least know which copy is correct. Otherwise you'd need a 3-drive RAID 1 to have a majority consensus on which one is right.
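A toy way to see the point from the command line (the file names and the separately stored checksum are made up purely for illustration):

# sha256 -q /mnt/copy_from_disk0 /mnt/copy_from_disk1   # two different digests: the mirror alone can't say which is right
# cat /backup/known-good.sha256                          # a digest recorded elsewhere is what settles it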
 
You didn't search it right? Here let me help you with some info from "The International Disk Drive Equipment and Materials Association"
 
sed s%Certain%most. Esp. one of the most regarded features of zfs(8) is instant snapshots, so updates are super safe when using boot environments. With UFS, you should seriously consider either having two root filesystem partitions to switch between, or even better: use a 3-way mirror and take one disk out of it to perform the update on.
While ZFS boot environments are nice, you can do perfectly without them. I regularly use nextboot(8) (on UFS-based machines) to test new kernels and automatically fall back to the previous kernel if the new one fails for some reason. This is very convenient, even when upgrading remote machines without console access. Apart from that, I always make a backup before updating a machine, of course.
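A minimal sketch of that nextboot(8) workflow (the kernel directory name is made up):

# nextboot -k kernel.test
# shutdown -r now

If kernel.test panics or fails to come up, a power-cycle boots the default /boot/kernel again, because the loader disables the nextboot setting on the first boot attempt.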

You may want to subscribe to the mailing lists <freebsd-fs> & <freebsd-geom> to follow current issues... yes, still after that stuff has matured for so many years.
That’s because that “stuff” constantly gets new features. UFS is not a dead end. For example, quite recently, Kirk added metadata checksums to UFS.

BTW, I don’t think it is really necessary to subscribe to the -fs or -geom mailing lists. Serious issues are rather rare, and if one occurs, it is usually posted to the -current and/or -stable mailing lists – you should subscribe to these if you run -current or -stable, of course. If you don’t, then you really needn’t worry. I can’t remember a release that had serious UFS breakage … if there was one, it must have been in the previous century.
 
UFS is absolutely great - it's very stable and doesn't need as much memory as ZFS does! I am using UFS almost everywhere on cloud & bare-metal servers, with only one exception - corporate file servers. There ZFS is king, with any type of RAID you may want to construct, snapshots (you can easily configure Samba to offer your colleagues access to snapshotted files), file compression on the fly, de-duplication, and the ability to create datasets that store extra copies of every file (yes, sometimes you need more than one copy of a particular file) - all of it out of the box, but for a price - you need a lot of memory, sometimes astounding amounts if you turn on de-duplication on the fly. Choosing the right file system is always a hard decision - ZFS requires memory and lets you make hard things really easy, UFS just works perfectly in every case... Ask yourself what you actually want to store, how it will be used and which abilities of the file system you need to utilize. FreeBSD has very good file systems to choose from - just pick the one that suits you best!
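A few of those features as one-liners, just to make it concrete (pool and dataset names are made up):

# zfs set compression=lz4 tank/share          # transparent on-the-fly compression
# zfs set copies=2 tank/share/important       # store extra copies of blocks written to this dataset
# zfs snapshot tank/share@before-cleanup      # instant snapshot, the kind Samba can expose to colleagues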
 
While ZFS boot environments are nice, you can do perfectly without them. I regularly use nextboot(8) (on UFS-based machines) to test new kernels and automatically fall back to the previous kernel if the new one fails for some reason. This is very convenient, even when upgrading remote machines without console access. Apart from that, I always make a backup before updating a machine, of course.
I was referring to freebsd-update(8) to a new release or security-fix patch level, i.e. including some changes in world; of course, for testing only a new kernel without a new world, the nextboot(8) method is perfectly enough. Also remember that you as a developer might have a better grip on your system than us mere mortals.
 
There are many reasons why a server with ECC memory, SAS hard drives with ECC buffers, and ECC at the storage level would still get a bit error. No matter what kind of protections we add at the hardware level, a defect or just plain bad luck can result in the wrong data getting written. It's a statistical thing.

My cousin had a situation where his Intel 100Gb SAN card was flipping a bit every few petabytes written. It turned out to be a driver issue. The datagram CRCs were fine; ZFS only complained after a scrub.
 
de-duplication on the fly. Choosing the right file system is always a hard decision - ZFS requires memory and lets you make hard things really easy, UFS just works perfectly in every case... Ask yourself what you actually want to store, how it will be used and which abilities of the file system you need to utilize. FreeBSD has very good file systems to choose from - just pick the one that suits you best!
And, if it is not broken, do not fix it!
 
I've never really thought about it (UFS vs ZFS) as I run a simple setup and, now that I think about it, UFS here is like XFS - they both offer everything I need and are rock solid in my experience.

I'm in no way comparing technical differences between them, just a casual thought from me, who doesn't think about it ;)
 
Bad luck, wrong name.
man 7 ffs

How did I ever get by without having seen that page? I've never used fstab or any of the other options mentioned on that page, not even mkdir. I do that through x11-fm/xfe: su to become root in that terminal and summon xfe from there if need be.

FreeBSD is the most user-friendly, desktop-oriented OS I have ever used since my fingers first touched an Apple II in '93. There are so many different ways to do the same thing and make it work for you.

I'd even go as far as to say FreeBSD is the ultimate desktop OS. For me.
 
You didn't search it right? Here let me help you with some info from "The International Disk Drive Equipment and Materials Association"
Just because it's in the spec doesn't mean corruption can't happen.

It may also happen at the interface level. And the cases where I saw it were with pre-4K-sector drives. Maybe it's less of an issue with newer ones.

Either way, it's probably not an accident that ZFS ensures data integrity. If what you were describing worked perfectly, there would be no need. Why bother with the zpool scrub command at all, then?
 
Is ext4 much better than UFS?
I have used ext2, ext3, and ext4 for many years with no data loss. Ext2 lacks journaling; ext3 and ext4 I would say are about equivalent to UFS. It is nice, though, that UFS has the option to put the journal on a separate drive and is also able to utilize many other nice GEOM features. I initially thought that gjournal was old-fashioned; then I realized it is actually very similar to the ZIL in ZFS. With an appropriate configuration, in an environment that is not doing continuous hard drive reads or writes, gjournal can actually speed up disk writes.
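A rough sketch of that separate-journal setup with gjournal(8) (device names are made up, and labeling destroys data on the providers, so treat this purely as an illustration):

# gjournal load                               # loads geom_journal.ko
# gjournal label /dev/ada1p1 /dev/ada2p1      # data provider first, dedicated journal provider second
# newfs -J /dev/ada1p1.journal                # create UFS with gjournal support
# mount -o async /dev/ada1p1.journal /data    # async is fine here; the journal provides consistency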
 
I never knew before about gsched and nextboot. Thanks for those suggestions. I think it is great that FreeBSD offers the choice of two great file systems. If your data is so vital that you cannot tolerate even a single uncorrectable read error per petabyte read, then ZFS is a great option for you. Of course ZFS also has other fine attributes beyond validating the bits it reads. If you are willing to tolerate one uncorrectable read error per petabyte read, because you can restore the file from a backup (possibly on a tape), and it makes you happy to save energy by using less RAM, then UFS is a great choice. Of course UFS also has other fine attributes beyond using a minimal amount of RAM, such as minimizing fragmentation and the venerable dump/restore.
 
This has been a useful discussion for me, thank you all. I'm still using UFS (and hardware RAID) but keep reading how I really should be using ZFS, and I dip my toe in every now and then.

But then I read lots and lots and lots of questions about ZFS and think maybe it needs a lot more careful R&D and experimentation by me before I try to use it in earnest - just in case I encounter any of the issues others describe. And ZFS does seem to have some very good features that I can see would be valuable.

But procrastination is so much easier, and if UFS works for me now (and it looks like it does for many other people too), there's no hurry.
 
UFS and ZFS both have their justification, and both are tier-1 file systems in FreeBSD, and both are actively maintained and constantly improved. My only advice is: Learn about both of them, their respective advantages and disadvantages, and then make an educated decision which one is better for your use cases.

Personally I still use UFS in quite a lot of cases. In part that has got to do with the fact that I am “old school”, and I am very familiar with UFS, up to the point that I am able to fix problems with a hex editor on the raw device of a partition. I wouldn’t be able to do that with ZFS. Because of that, using UFS gives me a better feeling, because I have better control. This may be just a psychological effect, though.

For example, some years ago, I had a long processing job running for several days, when I accidentally removed its output file. It wasn’t in any backup at that time. The process was still running and held the file open, so the data was still on the disk, but there was no directory entry anymore.
So I did this: First I suspended the process (SIGSTOP) so it wouldn't terminate and release the file. Then I typed sync, waited 30 seconds (sysctl kern.filedelay), and switched the file system to synchronous mode (mount -u -o sync); I think that latter step was not required, but I didn't want to take chances. Then I located the inode of the file and examined it with a hex editor on the raw device of the file system. As I expected, the link count was zero. I edited it to be 1, then I made up a directory entry in the root directory of the file system that pointed to that inode. And then – I hit the power switch (I mean physical power, i.e. no clean shutdown). When the system booted, fsck fixed the remaining details, and when I logged in, the file was there. All of that took me like 15 minutes. Losing the file would have cost me several days.
I wouldn’t have been able to do all of that if the file was on ZFS.
 
This has been a useful discussion for me, thank you all. I'm still using UFS (and hardware RAID) but keep reading how I really should be using ZFS, and I dip my toe in every now and then.
I do not really understand why people would want ZFS on a laptop or such. The boot-environments might be the only useful feature there, but these also make the boot process more complicated.
Otherwise ZFS is fun when you want to use
* some number of disks
* a fine grained mount structure with many distinct filesystems (and maybe quotas)
* applications that benefit from copy-on-write (e.g. postgres)
* non-filesystem volumes
and a few other things, none of which usually applies to normal desktop/laptop usage.
 
I do not really understand why people would want ZFS on a laptop or such. The boot-environments might be the only useful feature there,
?
but these also make the boot process more complicated.
??? Please elaborate.
Otherwise ZFS is fun when you want to use
* some number of disks
* a fine grained mount structure with many distinct filesystems (and maybe quotas)
* applications that benefit from copy-on-write (e.g. postgres)
* non-filesystem volumes
and a few other things, none of which usually applies to normal desktop/laptop usage.
Last but not least:
  • nearly unlimited flexibility
  • data integrity through strong checksums
  • built-in compression
  • instant snapshots
  • super easy backup/sync with zfs send/receive, e.g. between two laptops (see the sketch below)
  • per-dataset encryption coming with OpenZFS (the alternative pefs(8) on UFS or old ZFS is fine, though)
All of that applies very well to desktop/laptop use.
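A minimal sketch of that laptop-to-laptop sync (dataset, snapshot and host names are made up, and it assumes an earlier full send already created backup/home on the other machine):

# zfs snapshot zroot/usr/home@friday
# zfs send -i zroot/usr/home@monday zroot/usr/home@friday | ssh otherlaptop zfs receive -F backup/home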
 
?

??? Please elaborate.
If you boot from ZFS, you need something that properly understands ZFS before you even load a kernel.
And as this is separate from the kernel, it adds up arithmetically (or even geometrically, as it might also accidentally damage things) with all the other things that can potentially go wrong.

Now it all depends on what you do when things do go wrong. What I do is try to fix them - and in order to fix them, I need a running unix, because that is the toolset necessary to fix things. And a running unix appears as soon as I am able to mount root, /usr and /var.
Then, one can have more or less entertainment while trying to get that far (entertainment in this sense may encompass CD-ROM disks a couple of years old and no longer readable, USB sticks that may or may not boot depending on unintelligible BIOS options, etc.).
Obviously, approaches may vary, but what I do is keep three UFS filesystems, very small, very handy, easy to copy or move around. And that is all that is needed - because these can basically be fixed (as olli@ described above), and they can also be replaced, or a duplicate kept. With ZFS you can do all these things within ZFS - but if that fails, then it all fails, and the "otherwise please reinstall the pool from backup" message is very common with ZFS (which is also fine, but when it concerns the boot volume, this translates to bare-metal recovery - which is obviously again a nice bit of entertainment and should be trained occasionally).
 
I do not really understand why people would want ZFS on a laptop or such. The boot-environments might be the only useful feature there, but these also make the boot process more complicated.
Otherwise ZFS is fun when you want to use
Right now, I am writing this message on a laptop with ZFS. Snapshots are one thing I am using; the other is the ease of cloning the hard drive and growing the size when needed.

But after all, it may be just a matter of taste.
 
Why not UFS for the system and ZFS for data files (/home, /var)?

It depends. My consideration was, why would one bother with ZFS on a laptop at all, where many people seem to not even create separate filesystems. And specifically, if one would have to go and learn ZFS first.

But, using mixed UFS+ZFS is again a different thing. You need to think through the failure/recovery schemes and the encryption schemes (if applicable) for both parts individually. This might be rather something for people who already know precisely what they are doing.

OTOH, starting with ZFS for a limited group of mount points (e.g. some specific application or an external disk) while keeping everything else on UFS is a nice way of learning ZFS and getting accustomed to it. That's what I did when I found that postgres would benefit from it - and then the administration was so nice that I expanded it.
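As an illustration of that kind of limited start (pool and dataset names are made up; recordsize=8k matching PostgreSQL's default page size is a common tuning suggestion, not something from this thread):

# zfs create -o recordsize=8k -o compression=lz4 tank/pgdata
# zfs set mountpoint=/var/db/postgres tank/pgdata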
 
My consideration was, why would one bother with ZFS on a laptop at all,
I can only speak for myself, but
  • I found UFS SU+J unreliable when briefly testing it on 11.0 (fsck(8) repeatedly found serious issues), while ZFS seems to do a good job for data integrity even without redundancy in place
  • I sometimes profit from quick and easy snapshots and clones even on that "client machine" (which of course depends a lot on your use cases)
  • Adding to that: zvols are nice on a laptop as well, for backing any virtual machines you might want to have (see the sketch below)
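A minimal sketch of that zvol idea (pool, path, and size are made up):

# zfs create -V 20G -o volmode=dev zroot/vm/test0
# ls -l /dev/zvol/zroot/vm/test0      # hand this block device to bhyve or VirtualBox as a disk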
 