FreeBSD uses a large amount of RAM

With ZFS and without a GUI, FreeBSD 14.0 uses ~350 MB of RAM (idle).
With UFS and without a GUI, FreeBSD 14.0 uses ~200 MB of RAM (idle).

AntiX GNU/Linux uses ~150 MB of RAM (idle), even though it has a GUI (the filesystem I used is ext4) and a Conky widget.

Why does it use that much RAM?! I created a fork of FreeBSD (LiteBSD) to fix it.
 
By definition, ZFS is likely to use more RAM than UFS.
That's certainly true. ZFS likes memory, a lot of memory.

Also keep in mind that Linux and FreeBSD have a different definition of "free" memory. You cannot compare the two just by looking at some number.
 
Also keep in mind that Linux and FreeBSD have a different definition of "free" memory. You cannot compare the two just by looking at some number.
Very true.
It's always an argument in the embedded world. Some people want to keep some level of resources in reserve "just because"; others say, "if it's not being used, why have it?" I think the reality lies in between, with reserves of, say, 5% to 10%, so that even at max design load (say, the number of simultaneous calls for telephony) you can handle a little bit of overload without crashing.
 
There are different kinds of memory:
sysctl vm.stats.vm.v_wire_count
sysctl vm.stats.vm.v_active_count
sysctl vm.stats.vm.v_laundry_count
sysctl vm.stats.vm.v_inactive_count
sysctl vm.stats.vm.v_cache_count
sysctl vm.stats.vm.v_free_count
sysctl vm.stats.vm.v_page_count
My PC has 32 GB, so I don't care too much.
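Those counters are in pages, not bytes, so a little arithmetic is needed to compare them with the numbers above. A minimal sh sketch (assuming the page size reported by hw.pagesize, usually 4096 bytes):

#!/bin/sh
# Convert the VM page counters to megabytes.
pagesize=$(sysctl -n hw.pagesize)
for c in wire active laundry inactive free; do
    pages=$(sysctl -n vm.stats.vm.v_${c}_count)
    echo "${c}: $((pages * pagesize / 1048576)) MB"
done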
 
I don't know if it's true or right for every system configuration, but I once read a
rule of thumb for getting the best performance out of ZFS:
1 GB of RAM for every 1 TB in the zpool.
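If it's the ZFS ARC that's eating the memory, you can check its current size and cap it. A hedged sketch (the sysctl names are from stock FreeBSD; the 4 GB cap is just an example value, not a recommendation):

# Current ARC size in bytes:
sysctl kstat.zfs.misc.arcstats.size
# To cap the ARC, add a line like this to /boot/loader.conf (value in bytes):
vfs.zfs.arc_max="4294967296"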
 
I don't know if it's true or right for every system configuration, but I once read a
rule of thumb for getting the best performance out of ZFS:
1 GB of RAM for every 1 TB in the zpool.
A rule that is certainly obsolete, with modern access patterns. Storage is becoming more archival, with a smaller fraction of objects (files, directories) being accessed. Therefore they don't have to be in cache.

On the OP's question: The correct amount of free memory is zero. The correct amount of memory that can be quickly freed and immediately reused when a program needs it is as much as possible.
 
With ZFS and without a GUI, FreeBSD 14.0 uses ~350 MB of RAM (idle).
With UFS and without a GUI, FreeBSD 14.0 uses ~200 MB of RAM (idle).

AntiX GNU/Linux uses ~150 MB of RAM (idle), even though it has a GUI (the filesystem I used is ext4) and a Conky widget.

Why does it use that much RAM?! I created a fork of FreeBSD (LiteBSD) to fix it.
Are you referring to this? https://github.com/sergev/LiteBSD
Because as far as I understand, this project is based on 4.4BSD, not FreeBSD; are you talking about something personal and not publicly available?

Moreover, and excuse me if my question sounds stupid: you claim you "fixed" the problem by creating a fork, and yet you have no idea why FreeBSD has such memory usage? How were you able to achieve lower memory usage, then?
 
"Memory usage" is pretty vague and not a standard measurement across operating systems.

It'd be better to measure how much swap space is being used, and when, as that could actually highlight an issue. Realistically, if it ain't broke, don't fix it. Or just somehow frig the measurement to make your preferred operating system look more fabulous.
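On FreeBSD, a couple of stock tools give that picture; for example:

# Swap devices and how much of each is in use (human-readable):
swapinfo -h
# Free pages plus page-in/page-out rates, sampled every 5 seconds:
vmstat 5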

Also for perspective, my Windows 11 machine has been grinding to a halt lately and paging madly, timing out on key processes. 16GB of DDR5 isn't cutting it anymore.
 
My context is different. Speaking specifically about embedded systems: leaving some resources (memory, disk, CPU, whatever) free at maximum designed load is often a good thing. Why? Because the difference between max load and overload isn't very much; leaving a little bit of headroom can be the difference between crashing and recovering. In the telephony world the goal is called "5-nines" (99.999% uptime).

It's always about understanding your specific workload and optimizing for that.
What is "best" for me may not me "adequate" for you.
Also for perspective, my Windows 11 machine has been grinding to a halt lately and paging madly, timing out on key processes. 16GB of DDR5 isn't cutting it anymore.
Windows: expand to use everything, then claim it's not enough. :)
 
I'm about to go into a philosophical and OS design rat hole:

Embedded systems tend not to have file systems; or if they do, they don't do much file IO. Therefore they don't have a buffer cache. General-purpose computers are different: they do file IO. And clean pages in the buffer cache are the best way to "use" memory. Why? Keeping data cached means there is a chance it will be reused. And if a process wants to allocate memory (sbrk and all those system calls), the kernel can give up clean pages instantaneously (I'll give extra details about that in a moment). So if the measurement of "free" memory does not include clean cache pages, then it should be (near) zero.

And to be 100% clear: This only applies to *CLEAN* pages in the buffer cache (or file system cache), those where an up-to-date copy exists on disk. It does not apply to *DIRTY* pages, which are currently in memory but still have to be written to disk: those can't be instantaneously handed to a process who wants to allocate memory.

Here is a tiny exception: In a multi-threaded or SMP environment, releasing a memory page from clean buffer cache will probably require locking. And thread entry/exit and lock management may require a very small amount of memory. So reserving one extra page (4KB) per thread or per core to guarantee forward progress may be a good idea or even necessary. But that is a tiny amount compared to modern multi-GB machines.

This leaves the (OS-internal) problem of knowing when to write dirty pages back to disk. This is a very difficult compromise. Writing them back too late means too much memory is tied up and can't be used before doing disk IO, which can lead to really bad performance. Writing them back too early means smaller IOs, and needlessly writing the same block multiple times, and can also lead to really bad performance. The art (not science!) of system design is finding the perfect middle ground here, which is quite workload specific.
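For the curious: FreeBSD exposes some of these writeback thresholds as sysctls, so you can at least see where the compromise was set on your machine (names from a stock kernel; the defaults scale with RAM):

# Dirty-buffer watermarks that trigger background/aggressive flushing:
sysctl vfs.lodirtybuffers vfs.hidirtybuffers
# How many buffers are currently dirty:
sysctl vfs.numdirtybuffers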

taiwan740 said:
Also for perspective, my Windows 11 machine has been grinding to a halt lately and paging madly, timing out on key processes. 16GB of DDR5 isn't cutting it anymore.
My FreeBSD server has been running for ~10 or 15 years with 4 GB of memory, using ZFS, and running all the usual network/web services, without ever having any memory constraints. Actually, for much of its life it was only able to use 3 GB (because of i386 architecture). The big difference to your Windows machine? No GUI.
 
Are you referring to this? https://github.com/sergev/LiteBSD
Because as far as I understand, this project is based on 4.4BSD, not FreeBSD; are you talking about something personal and not publicly available?

Moreover, and excuse me if my question sounds stupid: you claim you "fixed" the problem by creating a fork, and yet you have no idea why FreeBSD has such memory usage? How were you able to achieve lower memory usage, then?
No, my own LiteBSD; I didn't know that a LiteBSD already existed. I'm GNUAn on GitHub and Codeberg. LiteBSD is not done yet.
 
With ZFS and without a GUI, FreeBSD 14.0 uses ~350 MB of RAM (idle).
With UFS and without a GUI, FreeBSD 14.0 uses ~200 MB of RAM (idle).

AntiX GNU/Linux uses ~150 MB of RAM (idle), even though it has a GUI (the filesystem I used is ext4) and a Conky widget.

Why does it use that much RAM?! I created a fork of FreeBSD (LiteBSD) to fix it.
Even UFS without a GUI uses 200 MB of RAM; AntiX uses ~100 MB, but with a GUI and a bunch of bloatware.
 
My FreeBSD server has been running for ~10 or 15 years with 4 GB of memory, using ZFS, and running all the usual network/web services, without ever having any memory constraints. Actually, for much of its life it was only able to use 3 GB (because of i386 architecture). The big difference to your Windows machine? No GUI.
I don't use Windows.
 