ZFS likes memory, and lots of it. If you're struggling with memory issues you can limit the amount of ARC ZFS uses by setting vfs.zfs.arc_max in /etc/sysctl.conf.
CPU: 0.1% user, 0.0% nice, 0.1% system, 0.0% interrupt, 99.7% idle
Mem: 20G Active, 83G Inact, 140M Laundry, 20G Wired, 1572M Buf, 1626M Free
ARC: 15G Total, 76K MFU, 15G MRU, 16K Anon, 27M Header, 397K Other
15G Compressed, 15G Uncompressed, 1.00:1 Ratio
Swap: 128G Total, 128G Free
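For what it's worth, a minimal sketch of that suggestion, assuming you wanted to cap the ARC at 16 GB (the limit is a plain byte count, 16 * 1024^3 here; pick whatever fits your machine):

# /etc/sysctl.conf
vfs.zfs.arc_max=17179869184

On reasonably recent releases vfs.zfs.arc_max is also writable at runtime, so the same value can be applied without a reboot: sysctl vfs.zfs.arc_max=17179869184 (as root). If your release still treats it as a boot-time tunable, put it in /boot/loader.conf instead.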
What do you mean? Wired memory is physical memory reserved by the kernel, not virtual memory. Have a look at https://wiki.freebsd.org/Memory.

I see, but is it normal for ZFS to use a large amount of wired memory and take a while before it releases it?
As SirDice has already stated, ZFS likes memory, and it is _normal_ for ZFS to allocate pages of memory that it doesn't want swapped to disk (hence "wired").
It's also _normal_, in the case of ZFS, that these pages of memory stay in use for as long as ZFS needs them.
There's no sense in "premature deallocation" of pages that ZFS might need to use in the future.
No, of course it's not normal: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594.
In my (desktop) experience, without setting vfs.zfs.arc_max to some specific limit ZFS gobbles memory until there is nothing left for running actual applications. I fail to see what's "normal" about that.
By default the maximum amount of memory used for the ARC is (IIRC) all physical memory minus 1 GB; however, it is also dynamic, which means that if something else needs memory currently used for the ARC, ZFS gives it back (though this does not work well for everyone).
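You can watch both numbers yourself before deciding; a quick check, assuming the usual FreeBSD sysctl names (vfs.zfs.arc_max for the configured cap, kstat.zfs.misc.arcstats.size for the bytes currently held by the ARC):

% sysctl vfs.zfs.arc_max kstat.zfs.misc.arcstats.size

If the ARC size tracks close to the cap while applications are being starved of memory, lowering the cap is the obvious next step.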
Will adjusting vfs.zfs.arc_max help?
What's "best" is different for different people. My suggestion is ignoring it altogether unless you have a specific problem to solve. ZFS using as much memory as possible is not a problem unless something else is effected.
The advice provided by shkhln to set vfs.zfs.arc_max is also fine.
https://wiki.freebsd.org/ZFSTuningGuide
If you are truly concerned about ZFS's memory usage, that page will give you pointers for clamping down on the ARC.
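As a concrete (hypothetical) example of the kind of clamping the guide describes, a boot-time limit of 4 GB would look something like this in /boot/loader.conf; the loader accepts the "G" shorthand and the kernel converts it to bytes:

# /boot/loader.conf
vfs.zfs.arc_max="4G"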
You are giving us very little to work with here. Could you please give us the machine's physical specs, the purpose of the hardware, and the size of the ZFS pools? One of my main file servers, with multiple ZFS pools totaling 250 TB, is rock stable with 128 GB of RAM. I have no less than 50 NFS clients leeching on that thing at any given moment.

Ok, thank you. The whole system is really affected: when wired memory reaches 52 GB the system is already frozen and I have to reboot it.
I would be suspicious of hardware issues.

I'm going to use it just as a data storage server, but I couldn't start with it since the server locks up when it reaches 52 GB of wired memory.
last pid: 96541; load averages: 0.70, 0.58, 0.44 up 13+22:55:53 14:25:39
46 processes: 1 running, 45 sleeping
CPU: 0.0% user, 0.0% nice, 2.3% system, 0.0% interrupt, 97.7% idle
Mem: 8728M Active, 24G Inact, 58G Wired, 3631M Free
ARC: 32G Total, 2632M MFU, 27G MRU, 18M Anon, 1843M Header, 432M Other
30G Compressed, 63G Uncompressed, 2.13:1 Ratio
Swap: 8192M Total, 8192M Free
dice@hosaka:~ % zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
stor10k 1.09T 26.0G 1.06T - 4% 2% 1.00x ONLINE -
zroot 145G 48.4G 96.6G - 70% 33% 1.00x ONLINE -
dice@hosaka:~ % sudo vm list
Password:
NAME DATASTORE LOADER CPU MEMORY VNC AUTOSTART STATE
case default bhyveload 2 2048M - Yes [2] Running (1460)
freebsd11-img default uefi 1 512M - No Stopped
jenkins default bhyveload 4 16384M - Yes [5] Running (1898)
kdc default uefi 2 2048M 0.0.0.0:5900 Yes [1] Running (1271)
lady3jane default uefi 2 4096M - No Stopped
sdgame01 default grub 2 4096M - No Stopped
tessierashpool default bhyveload 4 8192M - Yes [4] Running (69648)
build11 stor10k bhyveload 4 8192M - No Running (46143)
plex stor10k bhyveload 4 8192M - Yes [6] Running (42492)
wintermute stor10k bhyveload 4 8192M - Yes [3] Running (24523)
Then set vfs.zfs.arc_max to, say, 40 GB to have a safe margin. Note that you have to set the byte count, like 40000000000; sysctl is too stupid to understand things like "40G".
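For reference, 40 GB as an exact byte count is 40 * 1024^3 = 42949672960, so (purely as an illustration, size the margin to your own RAM) the /etc/sysctl.conf entry would be:

vfs.zfs.arc_max=42949672960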
... you can limit the amount of ARC ZFS uses by setting vfs.zfs.arc_max in /etc/sysctl.conf.
vfs.zfs.arc_max="4G"
vfs.zfs.arc_max 4294967296