ZFS: How well does OpenZFS run on FreeBSD?

So, as some on here know, I have been using FreeBSD for a long time now, and I was an early adopter of ZFS.

For me the ZFS experience has generally been very positive on FreeBSD.

However, I also started using it on Linux with Proxmox a couple of years ago, and whilst I haven't had any crisis with it, there are noticeable performance regressions, especially when running a scrub.

If I run a scrub on "any" of my FreeBSD servers, the impact is just background noise; the servers carry on doing what they do. But I have noticed on my Proxmox machines that a scrub practically takes the machine down with extreme I/O latency; not only is scrub much more intrusive, it also takes much longer. I believe this is due to the removal of the throttling that used to slow scrubs down when other I/O activity was detected. All my machines are spindle-based, for reference.
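
If anyone wants to compare, the scrub/scan tunables can at least be inspected side by side on both platforms. Treat this as a rough sketch rather than a recipe - the exact tunable names vary between OpenZFS releases, and the pool name in the last command is just an example:

Code:
# FreeBSD: list the scrub/scan related ZFS sysctls (names differ between releases)
sysctl vfs.zfs | grep -Ei 'scrub|scan'
# Linux (Proxmox): the equivalent knobs are exposed as kernel module parameters
grep . /sys/module/zfs/parameters/*scrub* /sys/module/zfs/parameters/*scan*
# watch per-vdev I/O while a scrub is running (pool name "tank" is an example)
zpool iostat -v tank 5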

I don't know the reasons leading to FreeBSD migrating to OpenZFS; however, reading the TrueNAS statement on multiple-OS support, I speculate they were involved in the decision.

My fear is that, now OpenZFS is in FreeBSD from 13 onwards, it inherits these regressions. Or are people finding it runs as well as before?
 
I'm on 13-RELEASE, playing with ZFS and related stuff - and I can say that SSD's are a better option than platter-based stuff. I remember reading somewhere (can't remember where exactly, though, or else I would provide a link) that ZFS was designed with SSD's in mind to begin with. Or maybe it was a conclusion I came to after reading ZFS docs. Yeah, some of the SSD's will overheat if you scrub your pools too much, so stick with reliable brands. But beyond that, SSD's are MUCH cheaper these days than they were in 2017, like by a factor of 3. I ditched platters back in 2012.
 
… from 13 onwards, …

I can't recall when I switched from 12.0-CURRENT to 13.0-CURRENT (re: <https://www.freebsd.org/news/newsflash/#2019-04-19:1>). I might have occasionally tested it in the early days.

I have never encountered a significant problem with OpenZFS on 13.0-CURRENT or 14.0-CURRENT.

… not only is scrub much more intrusive, it also takes much longer. … All my machines are spindle-based, for reference. …

I haven't noticed problems when working during scrubs. There's (naturally) some impact, but it's not excessive.

Code:
root@mowa219-gjp4-8570p-freebsd:~ # geom disk list ada0
Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e4
   descr: HGST HTS721010A9E630
   lunid: 5000cca8c8f669d2
   ident: JR1000D33VPSBE
   rotationrate: 7200
   fwsectors: 63
   fwheads: 16

root@mowa219-gjp4-8570p-freebsd:~ # geom disk list da2
Geom name: da2
Providers:
1. Name: da2
   Mediasize: 500107862016 (466G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3
   descr: StoreJet Transcend
   lunid: 5000000000000001
   ident: X3E1SAKRS
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

root@mowa219-gjp4-8570p-freebsd:~ # zpool status -v
  pool: Transcend
 state: ONLINE
  scan: scrub repaired 0B in 04:01:49 with 0 errors on Thu Sep 16 07:05:21 2021
config:

        NAME                   STATE     READ WRITE CKSUM
        Transcend              ONLINE       0     0     0
          gpt/Transcend        ONLINE       0     0     0
        cache
          gpt/cache-transcend  ONLINE       0     0     0

errors: No known data errors

  pool: august
 state: ONLINE
  scan: scrub repaired 0B in 02:45:23 with 0 errors on Thu Sep 16 05:48:48 2021
config:

        NAME                STATE     READ WRITE CKSUM
        august              ONLINE       0     0     0
          ada0p3.eli        ONLINE       0     0     0
        cache
          gpt/cache-august  ONLINE       0     0     0
          gpt/duracell      ONLINE       0     0     0

errors: No known data errors
root@mowa219-gjp4-8570p-freebsd:~ # date ; uname -aKU
Sat Sep 18 14:54:06 BST 2021
FreeBSD mowa219-gjp4-8570p-freebsd 14.0-CURRENT FreeBSD 14.0-CURRENT #109 main-n249408-ff33e5c83fa: Thu Sep 16 01:11:04  2021     root@mowa219-gjp4-8570p-freebsd:/usr/obj/usr/src/amd64.amd64/sys/GENERIC-NODEBUG  amd64 1400033 1400033
root@mowa219-gjp4-8570p-freebsd:~ # freebsd-version -kru
14.0-CURRENT
14.0-CURRENT
14.0-CURRENT
root@mowa219-gjp4-8570p-freebsd:~ # zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Transcend   464G   427G  37.2G        -         -    54%    91%  1.00x    ONLINE  -
august      912G   233G   679G        -         -     4%    25%  1.00x    ONLINE  -
root@mowa219-gjp4-8570p-freebsd:~ #
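
If scrub impact ever does become a nuisance, a scrub can also be paused and resumed at will; a minimal sketch, using the august pool from the output above:

Code:
# pause a running scrub ("scrub paused" then shows under scan: in zpool status)
zpool scrub -p august
# issue scrub again later to resume from where it left off
zpool scrub august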
 
… that ZFS was designed with SSD's in mind to begin with. Or maybe it was a conclusion I came to after reading ZFS docs.
I'd imagine it was your conclusion. ZFS was designed before SSD's were commonly available, and to look after huge quantities of data.
Back in 2008 while at Sun, I recall some local ZFS engineers preferring to play with a Sun Fire "Thumper" (which contained 48 spinning hard drives by design) because the system had been designed with ZFS in mind and didn't have any hardware RAID built in.
 
I'd imagine it was your conclusion. ZFS was designed before SSD's were commonly available, and to look after huge quantities of data.
Back in 2008 while at Sun, I recall some local ZFS engineers preferring to play with a Sun Fire "Thumper" (which contained 48 spinning hard drives by design) because the system had been designed with ZFS in mind and didn't have any hardware RAID built in.
Sometimes you can't even tell what inspired what, exactly, which way the inspiration went. Based on that quote, one could imagine that SSD's were influenced by the fact that there's this newfangled filesystem like ZFS around that seems to be made with SSD's in mind, rather than the other way around.

Nowadays, those 48 drives can be replaced by SSD's on a whim, and the source code to drive that contraption wouldn't change, not much, anyway. ;)
 
… ZFS was designed before SSD's were commonly available, and to look after huge quantities of data.
Back in 2008 while at Sun, …

Flashback:

[attached image: 1632013757875.png]


Hint: the racked pool in the foreground is not connected to the racked equipment in the background.

How to Save the World with ZFS and 12 USB sticks: 4th Anniversary Video Re-Release Edition

A Wayback Machine capture of the 2007 blog post: <https://web.archive.org/web/2013051...e.com/constantin/entry/csi_munich_how_to_save>
 
I'm on 13-RELEASE, playing with ZFS and related stuff - and I can say that SSD's are a better option than platter-based stuff. I remember reading somewhere (can't remember where exactly, though, or else I would provide a link) that ZFS was designed with SSD's in mind to begin with. Or maybe it was a conclusion I came to after reading ZFS docs. Yeah, some of the SSD's will overheat if you scrub your pools too much, so stick with reliable brands. But beyond that, SSD's are MUCH cheaper these days than they were in 2017, like by a factor of 3. I ditched platters back in 2012.

Sure, if you have unlimited funds, go for it. 8 TB SSDs are around $1,000 each (looking at Newegg) - these are not Enterprise-grade SSDs. Add another 70% if you want Enterprise SSDs. That would be $20,000 / $34,000 for the capacity of my main storage server. Using Enterprise-grade HDDs (WD Ultrastar), I paid a bit under $5,000 for the same capacity. Speed is still more than sufficient for all the purposes I have. SSDs have their place where high speed is paramount. If you need cheap, reliable mass storage, they are still the wrong choice.
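
Roughly, the arithmetic behind those figures (taking twenty 8 TB drives, i.e. ~160 TB raw, as the example capacity):

Code:
# back-of-the-envelope, assuming twenty 8 TB drives (~160 TB raw)
echo $(( 20 * 1000 ))    # consumer SSDs at ~$1,000 each   -> $20,000
echo $(( 20 * 1700 ))    # enterprise SSDs, +70% per unit  -> $34,000
# the ~$5,000 HDD figure then corresponds to e.g. ten 16 TB Ultrastars at ~$500 each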

Generally, "SSDs are so much better" comments are simply uninformed.

BTW: if your SSDs overheat, your setup is deficient. Use a better ventilated case and SSDs with coolers, if need be.
 
Sure, if you have unlimited funds, go for it. 8 TB SSDs are around $1,000 each (looking at Newegg) - these are not Enterprise-grade SSDs. Add another 70% if you want Enterprise SSDs. That would be $20,000 / $34,000 for the capacity of my main storage server. Using Enterprise-grade HDDs (WD Ultrastar), I paid a bit under $5,000 for the same capacity. Speed is still more than sufficient for all the purposes I have. SSDs have their place where high speed is paramount. If you need cheap, reliable mass storage, they are still the wrong choice.

Generally, "SSDs are so much better" comments are simply uninformed.

BTW: if your SSDs overheat, your setup is deficient. Use a better ventilated case and SSDs with coolers, if need be.
I would frankly stick by what I said. Even consumer grade SSD's run cooler, and are lighter than equivalent capacity HDD's. My usage scenarios don't involve that much data, so I don't see the need to blow that much $. If I were providing infrastructure for stuff like a vaccine study and data analysis, then sure, I'd invest the money it takes to buy the appropriate equipment that is up to the task. But at the moment, I have just barely a couple TB of animes that I like to curate and watch. :beer:
 
I would frankly stick by what I said. Even consumer grade SSD's run cooler, and are lighter than equivalent capacity HDD's. My usage scenarios don't involve that much data, so I don't see the need to blow that much $. If I were providing infrastructure for stuff like a vaccine study and data analysis, then sure, I'd invest the money it takes to buy the appropriate equipment that is up to the task. But at the moment, I have just barely a couple TB of animes that I like to curate and watch. :beer:
Ah, just having "barely a couple of TB" on some SSDs qualifies you to give general advice about storage media?

That is exactly what I meant by "uninformed". Different use cases call for different media - in many cases, that means HDDs.

BTW: you mentioned that your SSDs are overheating. I commented that this indicates your build is wrong. I have up to 45 HDDs in a JBOD, and none of them runs hot. "Hot" meaning outside of recommended specs at any time and under any operational circumstances - whether we are talking SSDs or HDDs.
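
For what it's worth, "hot" is easy to keep an eye on with sysutils/smartmontools, whatever the media; a quick sketch, with the device names being examples only:

Code:
# SATA/SAS drive (HDD or SSD): look at the temperature attributes
smartctl -a /dev/ada0 | grep -i temperature
# NVMe SSDs report their temperature in the SMART/health log as well
smartctl -a /dev/nvme0 | grep -i temperature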
 
Just consumer-grade stuff. If you look around, a surprising number of people buy laptops on eBay and at pawn shops, and take whatever the machine came with. They care more about the convenience of 'cloud storage' than about shopping around for quality stuff.

I did not say anything about my own SSD's overheating. Yeah, my remark is based on personal experience from 2005, when I first tried to format a 16GB SSD. The first few times it was OK, but after that the SSD started getting warm due to repeated formatting. SSD's have come a long way since then.
 
I did not say anything about my own SSD's overheating. Yeah, my remark is based on personal experience from 2005, when I first tried to format a 16GB SSD. The first few times it was OK, but after that the SSD started getting warm due to repeated formatting. SSD's have come a long way since then.
ZFS does not need any formatting.
 
I don't know the reasons leading to FreeBSD migrating to OpenZFS; however, reading the TrueNAS statement on multiple-OS support, I speculate they were involved in the decision.
It was basically the fear of becoming an island. FreeBSD's implementation progressed much more slowly than OpenZFS in terms of new features and innovations. So it was either playing catch-up with OpenZFS, or just adopting it.
 
Formatting is just writing to the HDD/SSD the minimum data structures a file system requires in order to work. What you mean by "formatting" is in reality the bad-sector check/zeroing of all sectors, which is an HDD issue, because SSDs have built-in wear leveling and deal with such things automatically.

Formatting can be rather fast, depending on the file system. XFS and ZFS are known to be pretty fast there; ext4, for example, writes a lot more structure to the disk and therefore takes longer.
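
To illustrate the ZFS side of that: there is no separate mkfs/format step at all - creating the pool immediately gives you a mounted file system, and further file systems are just datasets. A minimal sketch (disk and pool names are examples only, and this will destroy whatever is on that disk):

Code:
# create a pool on a spare disk; the top-level dataset is mounted at /tank right away
zpool create tank /dev/ada1
# additional file systems are datasets, created in about a second
zfs create tank/data
zfs set compression=lz4 tank/data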
 