My opinions only (feel free to disagree):
Memory should be used. Why have 32GB total and leave 16GB sitting free? In some use cases you may want to set an upper limit on total usage so root can still get in and do stuff if needed, but keeping roughly 10-15% free is plenty; on your 32GB, maybe try to use no more than 24-28GB.
BUT (here's the big caveat):
It depends on what the system is being used for.
On a general-purpose workstation with a graphical environment and interactive users, I would bias towards "user experience", which means leaving more memory free for applications.
Servers with no users? Let the system use all the free memory to buffer files.
How does this apply to ZFS?
By default, ZFS wants to use all your free memory for caching. That's the ARC. When something reads a file, ZFS does read-ahead and keeps the data in RAM (the ARC plays a role similar to a traditional filesystem buffer cache). Why do that? Because it's a heck of a lot faster to pull data from RAM than from a device (hard disk, SSD, NVMe).
But what stays cached in ARC depends on the usage patterns (hence my distinction between servers and user workstations).
So what is Alain De Vos doing with those sysctls? Limiting the size of the ARC for ZFS; telling the OS: use at least this much (arc_min) but no more than that much (arc_max). The rest of the RAM is left available for applications, which is exactly what you want on a user workstation.
For me, on systems that are primarily user workstations I set arc_max to somewhere around 4G, simply because the access patterns of files on the disks are intermittent.
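As a concrete illustration, a cap like that could look something like the snippet below in /boot/loader.conf. The values here (4G max, 1G min) are just examples, not recommendations for your workload; the underscore tunable names are the ones I've seen historically, and newer FreeBSD/OpenZFS releases spell them vfs.zfs.arc.max / vfs.zfs.arc.min, so check what your release uses.

```
# /boot/loader.conf -- cap the ZFS ARC; values are in bytes.
# 4G = 4 * 1024 * 1024 * 1024 = 4294967296
vfs.zfs.arc_max="4294967296"
# Optionally also set a floor (1G shown here):
# vfs.zfs.arc_min="1073741824"
```

A reboot (or setting the equivalent sysctl at runtime, where supported) is needed for loader tunables to take effect.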
On systems that serve files (say something that is the backing store for streamed video files) I don't set anything and let the system figure it out. Streaming files from RAM is faster than streaming from physical device.
BTW: I believe those values should be set in /boot/loader.conf; I could be wrong, but that's where I've always set them.
pkg install zfs-stats and get familiar with the info it provides.
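Once it's installed, a couple of places to start looking (flag and sysctl names may vary slightly by release, so treat these as pointers rather than gospel):

```
zfs-stats -A                           # ARC summary: size, target, min/max
sysctl kstat.zfs.misc.arcstats.size    # raw current ARC size in bytes
```

Watching those while the system is under its normal workload tells you whether the ARC is actually bumping against its limit before you bother tuning anything.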