I used iocage export to transfer a jail from one host to another. The jail's export files were transferred to the destination using rsync. However, I cannot get iocage to successfully complete the import on the destination. This is apparently caused by running out of swap space:
Code:
Dec 13 12:20:11 vhost03 kernel: swap_pager: out of swap space
Dec 13 12:20:11 vhost03 kernel: swp_pager_getswapspace(16): failed
Dec 13 12:20:13 vhost03 kernel: pid 26142 (python3.9), jid 0, uid 0, was killed: failed to reclaim memory
Dec 13 16:00:20 vhost03 kernel: swap_pager: out of swap space
Dec 13 16:00:20 vhost03 kernel: swp_pager_getswapspace(24): failed
Dec 13 16:00:23 vhost03 kernel: pid 29796 (python3.9), jid 0, uid 0, was killed: failed to reclaim memory
Dec 13 16:00:24 vhost03 kernel: pid 10070 (sshd), jid 0, uid 0, was killed: failed to reclaim memory
Dec 13 16:00:24 vhost03 kernel: swap_pager: out of swap space
Dec 13 16:00:24 vhost03 kernel: swp_pager_getswapspace(20): failed
Dec 13 16:00:25 vhost03 kernel: swp_pager_getswapspace(17): failed
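For reference, the transfer sequence was roughly the following; the jail name and images path here are placeholders rather than the actual values from my hosts:

Code:
# on the source host, with the jail stopped
iocage export myjail

# copy the resulting export files to the destination's images directory
rsync -av /iocage/images/ vhost03:/iocage/images/

# on the destination host (vhost03)
iocage import myjail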
The system is a remote hot spare, and the jail is used only to run an isolated domain name service. The memory allocation when the system is otherwise unloaded looks like this:
Code:
last pid: 69321; load averages: 0.12, 0.11, 0.13 up 20+01:24:01 16:16:57
17 processes: 1 running, 16 sleeping
CPU: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
Mem: 8776K Active, 67M Inact, 40K Laundry, 399M Wired, 104K Buf, 3436M Free
ARC: 189M Total, 27M MFU, 149M MRU, 132K Anon, 1330K Header, 11M Other
149M Compressed, 216M Uncompressed, 1.45:1 Ratio
Swap: 2048M Total, 9456K Used, 2039M Free
As this is iocage, the jail has a ZFS dataset, and I speculate that zfs send/receive is buffering to the point of memory exhaustion. Is this what is actually causing the swap problem? If so, is there some way to limit the amount of memory iocage import uses?
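If the import itself cannot be constrained, a workaround I am considering is skipping the staged export file and streaming the dataset directly between the hosts; a rough sketch, with the pool name, dataset path, and snapshot name as assumptions rather than my actual layout:

Code:
# on the source host: snapshot the stopped jail's dataset recursively
zfs snapshot -r zroot/iocage/jails/myjail@migrate

# pipe the replication stream straight to the destination, no intermediate file
zfs send -R zroot/iocage/jails/myjail@migrate | ssh vhost03 zfs receive -u zroot/iocage/jails/myjail

Because the stream is piped rather than staged and re-read, I would expect its memory footprint to stay small, which should at least show whether the buffering happens inside iocage's import path.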