We have a FreeBSD-12.0-p3 host that originally started life as version 10.2. It is used to host a number of bhyve VMs, all but one of which are also running FreeBSD-12.0. INET09 is supposed to be a CentOS-6.9 system but will not boot into the installer. SAMBA-01 is a FreeBSD-10.2 system which we are not touching until the alternate DC, SAMBA-02, is configured and joined to the domain.
Code:
vm list
NAME      DATASTORE  LOADER     CPU  MEMORY  VNC  AUTOSTART  STATE
inet09    default    grub       2    4G      -    No         Locked (vhost03.hamilton.harte-lyne.ca)
inet13    default    bhyveload  2    4G      -    Yes [1]    Running (2018)
inet14    default    bhyveload  2    4G      -    Yes [2]    Running (2027)
inet16    default    bhyveload  2    4G      -    Yes [3]    Running (2863)
inet17    default    bhyveload  2    4G      -    Yes [4]    Running (80363)
inet18    default    bhyveload  2    4G      -    Yes [5]    Running (3129)
inet19    default    bhyveload  2    4G      -    Yes [6]    Running (3179)
samba-01  default    bhyveload  2    4G      -    Yes [7]    Running (3156)
samba-02  default    bhyveload  2    4G      -    Yes [8]    Running (45944)
The majority of our VMs were created using this template:
Code:
loader="bhyveload"
cpu=2
memory=4G
utctime=yes
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0"
disk0_dev="sparse-zvol"
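For completeness, new guests are built from that template with vm-bhyve's usual create/install commands. The guest name, disk size, and ISO below are placeholders rather than one of the machines listed above, and I am assuming the template is saved as default.conf:
Code:
# create a guest from the template with a 40G sparse zvol as disk0
vm create -t default -s 40G inet99
# boot it from an installer image previously fetched with 'vm iso'
vm install inet99 FreeBSD-12.0-RELEASE-amd64-disc1.iso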
Recently our ZFS health scans have reported that we are at 80% utilisation. These are the current storage statistics:
Code:
zpool list
NAME      SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
bootpool  1.98G  283M   1.71G  -        -         15%   13%  1.00x  ONLINE  -
zroot     10.6T  8.52T  2.11T  -        -         55%   80%  1.00x  ONLINE  -
zfs list
NAME                     USED   AVAIL  REFER  MOUNTPOINT
bootpool                 283M   1.58G  280M   /bootpool
zroot                    4.33T  674G   140K   /zroot
zroot/ROOT               30.1G  674G   140K   none
zroot/ROOT/default       30.1G  674G   16.1G  /
zroot/tmp                10.6M  674G   215K   /tmp
zroot/usr                901M   674G   140K   /usr
zroot/usr/home           140K   674G   140K   /usr/home
zroot/usr/ports          900M   674G   900M   /usr/ports
zroot/usr/src            140K   674G   140K   /usr/src
zroot/var                154M   674G   140K   /var
zroot/var/audit          140K   674G   140K   /var/audit
zroot/var/crash          140K   674G   140K   /var/crash
zroot/var/log            94.3M  674G   9.54M  /var/log
zroot/var/mail           2.62M  674G   174K   /var/mail
zroot/var/tmp            56.6M  674G   56.1M  /var/tmp
zroot/vm                 4.28T  674G   10.7G  /zroot/vm
zroot/vm/inet09          157K   674G   157K   /zroot/vm/inet09
zroot/vm/inet13          617G   674G   169K   /zroot/vm/inet13
zroot/vm/inet13/disk0    617G   674G   185G   -
zroot/vm/inet14          487G   674G   151K   /zroot/vm/inet14
zroot/vm/inet14/disk0    487G   674G   99.7G  -
zroot/vm/inet16          266G   674G   169K   /zroot/vm/inet16
zroot/vm/inet16/disk0    266G   674G   65.4G  -
zroot/vm/inet17          1.39T  674G   169K   /zroot/vm/inet17
zroot/vm/inet17/disk0    1.39T  674G   391G   -
zroot/vm/inet18          607G   674G   157K   /zroot/vm/inet18
zroot/vm/inet18/disk0    607G   674G   113G   -
zroot/vm/inet19          507G   674G   169K   /zroot/vm/inet19
zroot/vm/inet19/disk0    507G   674G   209G   -
zroot/vm/samba-01        198G   674G   169K   /zroot/vm/samba-01
zroot/vm/samba-01/disk0  198G   674G   82.2G  -
zroot/vm/samba-02        48.6G  674G   169K   /zroot/vm/samba-02
zroot/vm/samba-02/disk0  48.6G  674G   25.9G  -
zroot/vm/samba_dc01.img  210G   880G   3.34G
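To see where that 80% actually goes (live data, snapshots, or the zvol refreservations), I have been looking at the standard ZFS space-accounting views; nothing below is specific to our configuration:
Code:
# break USED down into snapshots, dataset data, refreservation and children
zfs list -o space -r zroot/vm
# list any snapshots under the vm datasets with their own usage
zfs list -t snapshot -r zroot/vm -o name,used,referenced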
Also, quite recently, inet17, which hosts our cyrus-imap service, has hung on several occasions. In every case there is nothing in the various logs (messages/maillog/auth.log) on the VM, nor anything in the vm log or in /var/log/messages on the host. I am casting about for possible contributing factors and my concern is that ZFS might be one.
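The next time it hangs I intend to look at the guest from the host side before restarting it. These are only generic host-side checks, not anything our logs have pointed to so far:
Code:
# how vm-bhyve sees the guests at that moment
vm list
# per-vdev I/O on the pool, sampled every 5 seconds
zpool iostat -v zroot 5
# CPU state of the bhyve process and its vCPU threads
top -SH
# ARC size and free memory on the host, in case of memory pressure
sysctl kstat.zfs.misc.arcstats.size vm.stats.vm.v_free_count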
The host was originally installed using the guided ZFS-on-root option. I can no longer recall how I configured bhyve when it was installed. The zpool arrangement is:
Code:
zpool status
  pool: bootpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 0 days 00:00:09 with 0 errors on Thu Mar 28 16:22:10 2019
config:

        NAME        STATE     READ WRITE CKSUM
        bootpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p2  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0
            ada2p2  ONLINE       0     0     0
            ada3p2  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub in progress since Sat May 4 02:45:00 2019
        7.33T scanned at 24.7M/s, 7.29T issued at 24.6M/s, 8.54T total
        0 repaired, 85.38% done, 0 days 14:47:41 to go
config:

        NAME            STATE     READ WRITE CKSUM
        zroot           ONLINE       0     0     0
          raidz2-0      ONLINE       0     0     0
            ada0p4.eli  ONLINE       0     0     0
            ada1p4.eli  ONLINE       0     0     0
            ada2p4.eli  ONLINE       0     0     0
            ada3p4.eli  ONLINE       0     0     0
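Both pools also report that supported features are not enabled. My understanding, which may be wrong, is that checking and upgrading looks roughly like the commands below, and that on a BIOS/GPT ZFS-on-root system the boot blocks must be rewritten afterwards (the partition index is an assumption; gpart show would confirm it):
Code:
# list pools whose on-disk format is behind what the running system supports
zpool upgrade
# show the features an upgrade would enable
zpool upgrade -v
# after 'zpool upgrade zroot', refresh the boot code on each boot disk
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0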
The host system was set up as encrypted ZFS-on-root raidz2 with four (4) 2 TB HDDs and has four (4) additional 2 TB drives in the chassis (8 x 2 TB total).
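One option I have been weighing is to bring those four spare drives into zroot as a second raidz2 vdev. The sketch below is untested: the device names ada4 through ada7 are assumptions, the single freebsd-zfs partition is simpler than the layout the installer put on the existing disks, and once a raidz2 vdev has been added it cannot be removed from the pool again.
Code:
# for each spare drive (ada4 shown; repeat for ada5-ada7)
gpart create -s gpt ada4
gpart add -t freebsd-zfs -a 1m ada4
# encrypt the new partition to match the existing .eli members
geli init -e AES-XTS -l 256 ada4p1
geli attach ada4p1
# then add all four as a second raidz2 vdev
zpool add zroot raidz2 ada4p1.eli ada5p1.eli ada6p1.eli ada7p1.eli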
What is the suggested course of action?