Unexplained Memory Usage

I've been intermittently attempting to solve (or at least explain) this for about a week, and I'm running out of ideas, so I was hoping to get some help.

The problem: upon a fresh boot, the system uses about 50 MB of memory (presumably allocated by the kernel). After two hours or so, memory usage jumps to somewhere between 180 MB and 230 MB, but no processes seem to be using this memory.

I'll provide as much detail as I can. I've disabled most services to try to hunt down the issue.

System Information
FreeBSD 10.2-RELEASE-p9
CPU: 2x Virtual CPU e7da7129d3ee (2398.86-MHz K8-class CPU) (amd64)
Real Mem: 2048 MB
Avail Mem: 1955 MB
Hypervisor: KVM

/etc/rc.conf
Code:
# Hostname
hostname="*redacted*"

# Networking
ifconfig_vtnet0="dhcp"
ifconfig_vtnet0_ipv6="inet6 accept_rtadv"
ipv6_activate_all_interfaces=YES
route_linklocal="-net 169.254.0.0/16 -interface vtnet0"
rtsold_enable=YES
rtsold_flags="-aF"
static_routes=linklocal

# Enabled Services
sshd_enable="YES"

# Disabled Services
sendmail_enable="NO"
sendmail_msp_queue_enable="NO"
sendmail_outbound_enable="NO"
sendmail_submit_enable="NO"
I have also emptied the contents of /etc/sysctl.conf and /boot/loader.conf.

Memory Usage (Various Sources)
Code:
# top
Mem: 2900K Active, 212M Inact, 183M Wired, 1984K Cache, 212M Buf, 1572M Free
Code:
# freecolor -m -o
             total       used       free     shared    buffers     cached
Mem:          1971        185       1786          0          0          1
Code:
# perl freebsd-memory.pl
SYSTEM MEMORY INFORMATION:
mem_wire:         191602688 (    182MB) [  9%] Wired: disabled for paging out
mem_active:  +      3657728 (      3MB) [  0%] Active: recently referenced
mem_inactive:+    226435072 (    215MB) [ 10%] Inactive: recently not referenced
mem_cache:   +      2031616 (      1MB) [  0%] Cached: almost avail. for allocation
mem_free:    +   1643261952 (   1567MB) [ 79%] Free: fully available for allocation
mem_gap_vm:  +         4096 (      0MB) [  0%] Memory gap: UNKNOWN
-------------- ------------ ----------- ------
mem_all:     =   2066993152 (   1971MB) [100%] Total real memory managed
mem_gap_sys: +     55013376 (     52MB)        Memory gap: Kernel?!
-------------- ------------ -----------
mem_phys:    =   2122006528 (   2023MB)        Total real memory available
mem_gap_hw:  +     25477120 (     24MB)        Memory gap: Segment Mappings?!
-------------- ------------ -----------
mem_hw:      =   2147483648 (   2048MB)        Total real memory installed

SYSTEM MEMORY SUMMARY:
mem_used:         275755008 (    262MB) [ 12%] Logically used memory
mem_avail:   +   1871728640 (   1785MB) [ 87%] Logically available memory
-------------- ------------ ----------- ------
mem_total:   =   2147483648 (   2048MB) [100%] Logically total memory
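For what it's worth, these tools all appear to read the same vm.stats.vm.* sysctl counters, so here is a quick sketch that recomputes the figures directly (assuming the counter names present on 10.x):
Code:
# Page counts * page size, straight from sysctl.
# Assumes the vm.stats.vm.* counter names on FreeBSD 10.x.
pagesize=$(sysctl -n vm.stats.vm.v_page_size)
for counter in wire active inactive cache free; do
    pages=$(sysctl -n vm.stats.vm.v_${counter}_count)
    echo "${counter}: $(( pages * pagesize / 1048576 )) MB"
done
Since this is the same source data, it should line up with the figures above - the point being that the memory is counted as Wired/Inactive rather than charged to any process.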
Process Information
Code:
# ps -duxm
USER  PID  %CPU %MEM   VSZ  RSS TT  STAT STARTED       TIME COMMAND
root    0   0.0  0.0     0  208  -  DLs  Sat11AM    0:00.45 [kernel]
root    1   0.0  0.0  9476  852  -  ILs  Sat11AM    0:00.02 - /sbin/init --
root  364   0.0  0.2 13624 4712  -  Is   Sat11AM    0:00.18 |-- /sbin/devd
root  646   0.0  0.3 61224 6572  -  Is   Sat11AM    0:00.01 |-- /usr/sbin/sshd
root 8021   0.0  0.3 86492 7164  -  Ss    5:41PM    0:00.19 | `-- sshd: root@pts/0 (sshd)
root 8023   0.0  0.2 23592 3364  0  Ss    5:41PM    0:00.05 |   `-- -tcsh (tcsh)
root 8129   0.0  0.1 18760 2168  0  R+    5:51PM    0:00.00 |     `-- ps -duxm
root  458   0.0  0.1 14512 2100  -  Ss   Sat11AM    0:00.77 |-- /usr/sbin/syslogd -s
root  650   0.0  0.1 16612 2228  -  Ss   Sat11AM    0:01.20 |-- /usr/sbin/cron -s
root  413   0.0  0.1 14456 1856  -  Is   Sat11AM    0:00.01 |-- /usr/sbin/rtsold -aF
root  696   0.0  0.1 47744 2792 v0  Is   Sat11AM    0:00.05 `-- login [pam] (login)
root 7968   0.0  0.2 23592 3368 v0  I+    5:40PM    0:00.02   `-- -tcsh (tcsh)
root    2   0.0  0.0     0   32  -  DL   Sat11AM    0:00.85 - [cam]
root    3   0.0  0.0     0   16  -  DL   Sat11AM    0:00.00 - [sctp_iterator]
root    4   0.0  0.0     0   32  -  DL   Sat11AM    0:01.52 - [pagedaemon]
root    5   0.0  0.0     0   16  -  DL   Sat11AM    0:00.00 - [vmdaemon]
root    6   0.0  0.0     0   16  -  DL   Sat11AM    0:00.01 - [pagezero]
root    7   0.0  0.0     0   32  -  DL   Sat11AM    0:03.61 - [bufdaemon]
root    8   0.0  0.0     0   16  -  DL   Sat11AM    0:01.04 - [vnlru]
root    9   0.0  0.0     0   16  -  DL   Sat11AM    1:27.97 - [syncer]
root   10   0.0  0.0     0   16  -  DL   Sat11AM    0:00.00 - [audit]
root   11 199.2  0.0     0   32  -  RL   Sat11AM 6542:13.09 - [idle]
root   12   0.0  0.0     0  304  -  WL   Sat11AM    1:36.44 - [intr]
root   13   0.0  0.0     0   48  -  DL   Sat11AM    0:00.01 - [geom]
root   14   0.0  0.0     0   16  -  DL   Sat11AM    0:30.97 - [rand_harvestq]
root   15   0.0  0.0     0   64  -  DL   Sat11AM    0:01.80 - [usb]
root  375   0.0  0.0     0   16  -  DL   Sat11AM    0:41.28 - [pf purge]
root  627   0.0  0.0     0   16  -  DL   Sat11AM    0:00.00 - [md99]
Things I've Tried
  • Rebooting
  • Scanning for rootkits with rkhunter
  • Disabling most services (including rtsold, syslogd, cron, and sshd)
  • Accounting for the memory usage (rough attempt sketched below)
  • Disabling networking
  • Sifting through log files for signs of malicious logins
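The accounting attempt was along these lines - a rough sketch that sums resident set sizes and compares them against top's figures:
Code:
# Total RSS (KB) of every process; compare against the Active/Wired lines
# from top. The processes listed above account for only a few tens of MB,
# nowhere near the ~200 MB that shows up as Wired/Inactive.
ps axo rss | awk 'NR > 1 { sum += $1 } END { printf "%.1f MB resident in processes\n", sum / 1024 }'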
I'd greatly appreciate any help that can be provided.
 
You say that, but I don't experience similar memory usage on other OSes in the same vm container (I've tried OpenBSD and Debian).

Thanks for the quick response though =]
 
You say that, but I don't experience similar memory usage on other OSes in the same vm container (I've tried OpenBSD and Debian).

They have different development histories and origins; you are comparing apples (no pun intended :p) and oranges. You need to take a very close look at what vmstat -z (or possibly vmstat -m) outputs and decide whether there is indeed a memory leak, or whether the odd numbers are just an artifact of how the bookkeeping is done.
 
They have different development histories and origins; you are comparing apples (no pun intended :p) and oranges. You need to take a very close look at what vmstat -z (or possibly vmstat -m) outputs and decide whether there is indeed a memory leak, or whether the odd numbers are just an artifact of how the bookkeeping is done.

I'm aware of the marked differences, but I was referring purely to what the VM container was doing, not the underlying OS. Obviously I can't compare the memory usage of two different OSes.

Thanks for vmstat - don't know why I hadn't thought of that. Here's the output, which only accounts for about 13MB. I need to read up on vmstat -z, and I'll get back to you on that when I understand it.
vmstat -m | sort -n -k3 | awk '{ SUM += $3 } END { print SUM/1024 }' -> ~13 MB (multi-word type names like "CAM SIM" shift the columns, so this is only approximate)
Code:
# vmstat -m | sort -n -k3   # entries < 2K excluded from paste
          MCA    10     2K       -       10  128
         cdev     5     2K       -        5  256
         pgrp    13     2K       -      478  128
        lockf    14     2K       -     7901  64,128
       Unitno    26     2K       -    29193  32,64
       isadev     9     2K       -        9  128
      acpidev    21     2K       -       21  64
      io_apic     1     2K       -        1  2048
      md_disk     1     2K       -        1  2048
      session    11     2K       -       75  128
       USBdev    14     3K       -       14  32,64,128,256,512
      CAM SIM     3     1K       -        3  256
      acpisem    24     3K       -       24  128
      lltable     8     3K       -       20  256,512
      uidinfo     3     3K       -       63  128,2048
     pf_ifnet     5     3K       -       13  256,2048
    in6_multi    25     3K       -       25  32,256
  ether_multi    40     3K       -       52  16,32,64
         cred    24     4K       -   378686  64,256
       select    27     4K       -       27  128
       sglist     3     4K       -        3  32,1024,2048
      CAM DEV     4     8K       -       12  2048
      memdesc     1     4K       -        1  4096
    taskqueue    21     4K       -       21  16,32,256
   CAM periph     4     1K       -       18  16,32,64,128,256
        ifnet     3     5K       -        3  128,2048
     CAM path     5     1K       -       42  32
    pfs_nodes    21     6K       -       21  256
audit_evclass   187     6K       -      229  32
     routetbl    40     7K       -      241  32,64,128,256,512
    CAM queue     7     3K       -       32  16,32,512
        shmfd     1     8K       -        1  8192
    bmsafemap     1     8K       -    52385  256,8192
      UMAHash     4     9K       -       12  512,1024,2048,4096
      devstat     4     9K       -        4  32,4096
eventhandler   107     9K       -      107  64,128
         GEOM    59    11K       -      391  16,32,64,128,256,512,1024,2048,8192,16384
         kenv    75    11K       -       79  16,32,64,128,8192
         rman    88    11K       -      489  32,128
      pf_rule    11    11K       -       11  1024
       KTRACE   100    13K       -      100  128
       ifaddr    33    13K       -       34  32,64,128,256,512,2048,4096
      ithread    77    13K       -       77  32,128,256
    ufs_mount     3    13K       -        3  512,4096,8192
          rpc     2    16K       -        2  8192
         proc     2    16K       -        2  8192
      tidhash     1    16K       -        1  16384
     acpitask     1    16K       -        1  16384
     mtx_pool     2    16K       -        2  8192
          BPF    10    18K       -       10  128,512,4096
          tty    18    18K       -       19  1024
      CAM XPT    18     2K       -       34  32,64,128,512,1024,2048
       kbdmux     6    22K       -        6  16,512,1024,2048,16384
       DEVFS3    91    23K       -      105  256
      kdtrace   126    26K       -    46414  64,256
   DEVFS_RULE    55    26K       -       55  64,512
          msg     4    30K       -        4  2048,4096,8192,16384
         umtx   234    30K       -      234  128
          shm     1    32K       -        1  32768
    hostcache     1    32K       -        1  32768
         vmem     3    38K       -        7  1024,2048,4096,8192,32768
       DEVFS1    75    38K       -       79  512
       module   486    61K       -      486  128
     filedesc    31    62K       -    43644  2048
     syncache     1    64K       -        1  65536
      entropy  1027    65K       -     1986  32,64,4096
  ufs_dirhash   243    75K       -      357  16,32,64,128,256,512,1024,2048,8192,16384
          bus  1011    82K       -     3181  16,32,64,128,256,1024
       acpica   892    96K       -    27510  16,32,64,128,256,512,1024,2048,8192
    sysctloid  1916    96K       -     1983  16,32,64,128
       bus-sc    38   101K       -     1636  16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536
          USB    14   106K       -       15  16,128,512,4096,32768,65536
          sem     4   106K       -        4  2048,4096
      pf_osfp  1130   115K       -     1130  64,128
      pagedep     1   128K       -    17144  256
      subproc    86   152K       -    43699  512,4096
       linker   181   177K       -      194  16,32,64,128,256,512,1024,2048,4096,8192,16384
    vm_pgdata     2   257K       -        2  128
         SWAP     2   277K       -        2  64
          pcb    20   407K       -      141  16,32,128,1024,2048,8192
     inodedep     1   512K       -   100792  512
     vfs_hash     1   512K       -        1
    ufs_quota     1   512K       -        1
       newblk     1  1024K       -    63213  256
     vfscache     1  1024K       -        1
       devbuf  1236  1074K       -     1309  16,32,64,128,256,512,1024,2048,4096,8192,16384,65536
         kobj   330  1320K       -      464  4096
      callout     3  1672K       -        3
      pf_hash     3  2880K       -        3
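In the meantime, the same kind of summing against vmstat -z should look something like this (a sketch that assumes the usual comma-separated ITEM, SIZE, LIMIT, USED, FREE columns; UMA zone memory, including items cached as FREE, is held by the kernel and counted as Wired):
Code:
# SIZE * (USED + FREE) per zone, summed; FREE items are cached in the zone
# but still belong to the kernel. Column layout assumed as on 10.x.
vmstat -z | awk -F '[:,]' 'NR > 2 && $2 > 0 { total += $2 * ($4 + $5) } END { printf "%.1f MB in UMA zones\n", total / 1048576 }'
I'll get back to you once I've run that and made sense of the larger zones.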
 
Two reasons: (1) I'm curious! I've had trouble finding much information about how FreeBSD's kernel allocates memory. (2) The memory usage seems to ramp up over time - after a week or so, it will climb another 100 MB. I'd like to fix the problem so that I can be assured the server is stable.

I'm not sure how netstat helps - as I mentioned, this problem persists with networking disabled. Regardless:
Code:
# netstat -m
258/1527/1785 mbufs in use (current/cache/total)
256/764/1020/126160 mbuf clusters in use (current/cache/total/max)
256/756 mbuf+clusters out of packet secondary zone in use (current/cache)
0/37/37/63079 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/18690 9k jumbo clusters in use (current/cache/total/max)
0/0/0/10513 16k jumbo clusters in use (current/cache/total/max)
576K/2057K/2634K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
The "Max" column on mbufs seems possibly related - assuming this output is in KB, 126,160KB is on the same order of magnitude of the behavior I'm seeing.
 
Kiiski - I'll go through that, thanks.

I'd appreciate any other tips on how to approach debugging this problem - I still haven't found the cause.
 