UFS, gstat and disk I/O

Hi,
I'm going crazy over a performance problem on a FreeBSD 10.3 system running in a Proxmox VPS. How can I debug this kind of problem and find out which process is writing so much? The VPS is incredibly slow and unresponsive.

Code:
dT: 1.003s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      0      0      0    0.0      0      0    0.0    0.0| cd0
   41     10      0      0    0.0     10   1276   2090  121.4| ada0
    0      0      0      0    0.0      0      0    0.0    0.0| ada0p1
   41     10      0      0    0.0     10   1276   2090  121.4| ada0p2
    0      0      0      0    0.0      0      0    0.0    0.0| ada0p3
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/490ad9ec-fab0-11e6-a987-ed324707a782

I tried "top + m" (top in I/O mode) without success:

Code:
last pid: 11213;  load averages:  1.51,  0.91,  0.76                                                          up 90+09:51:28  22:50:12
52 processes:  1 running, 51 sleeping
CPU:  0.0% user,  0.0% nice,  0.4% system,  0.2% interrupt, 99.4% idle
Mem: 83M Active, 111M Inact, 505M Wired, 18M Cache, 835M Buf, 3224M Free
Swap: 1638M Total, 518M Used, 1120M Free, 31% Inuse

  PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
11211 root            0      0      0      0      0      0   0.00% php
30546 redis          19      0      0      0      0      0   0.00% redis-server
32963 mysql           9      0      0      0      0      0   0.00% mysqld
20856 nobody         21      0      0      0      0      0   0.00% memcached
45084 root            0      0      0      0      0      0   0.00% supervise
45082 root            2      0      0      0      0      0   0.00% svscan
29992 nobody         21      0      0      0      0      0   0.00% memcached
30673 root            2      0      0      0      0      0   0.00% php-fpm
30089 nobody         21      0      0      0      0      0   0.00% memcached
25434 www             4      0      0      0      0      0   0.00% nginx
 2002 root            0      0      0      0      0      0   0.00% cron
64645 root            0      0      0      0      0      0   0.00% syslog-ng
25435 www             0      0      0      0      0      0   0.00% nginx
30122 nobody          0      0      0      0      0      0   0.00% node
 1998 root            0      0      0      0      0      0   0.00% sshd
87900 root            1      0      0      0      0      0   0.00% bash
  315 root            0      0      0      0      0      0   0.00% devd
30157 nobody          0      0      0      0      0      0   0.00% node
10436 www             0      0      0      0      0      0   0.00% php-fpm
10667 www             0      0      0      0      0      0   0.00% php-fpm
10666 www             0      0      0      0      0      0   0.00% php-fpm
30205 nobody          0      0      0      0      0      0   0.00% node
45083 root            0      0      0      0      0      0   0.00% readproctitle
 9016 andrea          2      0      0      0      0      0   0.00% sshd
 9246 root            0      0      0      0      0      0   0.00% csh
87834 root            0      0      0      0      0      0   0.00% <csh>
 8888 root            0      0      0      0      0      0   0.00% sshd
11185 root            2      0      0      0      0      0   0.00% top
32847 mysql           0      0      0      0      0      0   0.00% <sh>
 9125 root            0      0      0      0      0      0   0.00% sudo
 
Have you looked at the Proxmox side of things to see whether this is a host-side issue (ie, all VMs are slow) and not just a VM-side issue?

Are you using virtio drivers in the FreeBSD VM for storage (and network)? Or emulating a SATA/SCSI/IDE device?
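
One quick way to check that from inside the guest (a minimal sketch; the exact pciconf description strings depend on the hypervisor configuration):

Code:
# A virtio disk appears as vtbd0 (vtblk driver); an emulated SATA/AHCI disk
# appears as ada0 and an emulated SCSI disk as da0.
pciconf -lv | grep -B 3 -i virtio
sysctl kern.disks

The ada0 device in the gstat output above already suggests an emulated AHCI/SATA disk rather than virtio.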
 
Have you looked at the Proxmox side of things to see whether this is a host-side issue (ie, all VMs are slow) and not just a VM-side issue?

Are you using virtio drivers in the FreeBSD VM for storage (and network)? Or emulating a SATA/SCSI/IDE device?

I'm not the administrator of Proxmox; I can only connect to the VPS via SSH. Anyway, I attach the disk performance graph that was sent to me by the sysadmin of the Proxmox system. On Thursday we did not make any changes, and on their side they say the same thing.

"Have you looked at the Proxmox side of things to see whether this is a host-side issue (ie, all VMs are slow) and not just a VM-side issue?"

I'll ask the Proxmox sysadmin, but is there a way to tell from the VM side whether there is a process that is using the disk heavily?

Sorry for my bad English; thanks in advance for your support.
 

Attachments

  • Schermata 2017-09-26 alle 09.47.57(1).png (disk I/O graph, 159.1 KB)
Looking at the disk statistics your ada0p2 partition is 121% busy, which means it's constantly thrashing. Assuming a standard install ada0p2 is probably your swap. So it looks like your VPS is swapping a lot at certain times. Swap usage in and of itself is not a problem; excessive swapping, however, will make everything very slow.
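
If in doubt about which partition holds what, the layout can be confirmed from inside the VM (a minimal sketch; the device name is taken from the gstat output above):

Code:
gpart show ada0     # partition table: index, type and size of each ada0pN
mount               # which partition is mounted where
swapinfo            # which device is configured as swap, and how much is in use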
 
Looking at the disk statistics your ada0p2 partition is 121% busy, which means it's constantly thrashing. Assuming a standard install ada0p2 is probably your swap. So it looks like your VPS is swapping a lot at certain times. Swap usage in and of itself is not a problem; excessive swapping, however, will make everything very slow.

This is my /etc/fstab

Code:
root@localhost:/usr/home/ # cat /etc/fstab
# Device        Mountpoint      FStype  Options Dump    Pass#
/dev/ada0p2     /               ufs     rw      1       1
/dev/ada0p3     none            swap    sw      0       0
fdescfs /dev/fd         fdescfs         rw,late 0       0

So, ada0p2 is my / and ada0p3 is my swap.

I've found a lot of these entries in the dmesg log:

Code:
pid 64599 (influxd), uid 907, was killed: out of swap space
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 1566, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 392536, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 392507, size: 16384
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 392934, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 392976, size: 40960
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 332487, size: 8192
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 393171, size: 20480
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 393510, size: 8192
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 393578, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 110088, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 110120, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 216094, size: 8192
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 110184, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 131098, size: 8192
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 16479, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 144290, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 106706, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 405882, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 406263, size: 8192
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 149424, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 7739, size: 20480
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 11248, size: 16384
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 406446, size: 28672
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 144644, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 144652, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 144978, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 1303, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 214508, size: 36864
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 214533, size: 36864
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 232169, size: 28672
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 233928, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 235306, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 202644, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 206176, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 326848, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 12535, size: 36864
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 13138, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 13568, size: 40960
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 12750, size: 36864
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 8981, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 211501, size: 16384
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 402406, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 7726, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 137558, size: 32768
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 406962, size: 8192
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 101467, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 22644, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 22659, size: 4096
swap_pager: indefinite wait buffer: bufobj: 0, blkno: 22681, size: 4096

Now influxd is down; however, the situation has not improved...
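
To see how deep into swap the machine currently is and which processes are the biggest memory consumers, something along these lines helps (a minimal sketch; -b runs top non-interactively and the trailing number limits it to ten processes):

Code:
swapinfo              # swap device, total/used/capacity
top -b -o res 10      # ten largest processes by resident memory
top -b -o size 10     # ten largest processes by total (virtual) size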
 
This is the output of vmstat 1

Code:
procs      memory      page                    disks     faults         cpu
 r b w     avm    fre   flt  re  pi  po    fr  sr ad0 cd0   in   sy   cs us sy id
 0 1 9   4835M  3145M   124   0   0   0   393  54  16   0   58  467  349  0  0 100
 0 1 9   4835M  3147M  5123   0   1   0  7680  45  34   0   73 6799  459  5  4 91
 0 1 9   4835M  3145M     0   0   0   0     0  45  21   0   32  346  292  0  1 99
 0 1 9   4835M  3145M     0   0   0   0     0  45  14   0   20  359  221  0  0 100
 0 1 9   4823M  3145M    67   0   0   0   391  45  21   0   29  423  265  0  0 100
 0 1 9   4823M  3145M     0   0   0   0     1  45  17   0   42  360  297  0  1 99
 0 1 9   4596M  3165M    39   0   1   0  5025  36   8   0   56  627  305  0  1 99
 0 1 9   4864M  3146M  5276   0   2   0  2675  45  20   0  104 6910  515  6  3 91
 0 1 9   4864M  3144M     0   0   0   0     0  45  20   0  115  366  440  0  1 99
 0 1 9   4864M  3144M   120   0   0   0   390  45  30   0  128  438  507  0  1 99
 0 1 9   4864M  3147M   258   0   1   0  2592  45  25   0  119  395  469  0  1 99
 0 1 9   4864M  3144M     0   0   0   0     0  44  38   0  129  376  534  0  3 97
 0 1 9   4864M  3143M     1   0   0   0     0  44  19   0   87  366  381  0  2 98
 0 1 9   4864M  3143M     0   0   0   0     8  45  35   0   89  339  435  0  2 98
 0 1 9   4864M  3146M  5251   0   3   0  8121  44  74   0  133 6865  679  5  6 89
 0 1 9   4864M  3144M     0   0   0   0     0  44  20   0   49  347  310  0  1 99
 0 1 9   4864M  3144M   119   0   0   0   391  44  20   0   38  476  305  0  0 100
 0 2 9   4864M  3144M    38   0   1   0     0  44  21   0   43  548  372  1  1 98
 0 1 9   4864M  3146M   288   0   1   0  2600  45  35   0   46  375  340  0  1 99
 0 1 9   4636M  3163M   133   0   0   0  5381  36  38   0   55  709  391  1  1 98
 0 1 9   4636M  3163M     0   0   0   0     0  36  27   0   83 3052  424  2  2 96
 0 1 9   4864M  3144M  5108   0   1   0  2689  45  40   0  112 6532  541  6  4 90
 0 1 9   4864M  3144M   124   0   0   0   393  45  39   0  114  470  498  0  0 100
 0 1 9   4864M  3146M   258   0   1   0  2592  45  21   0  122 2855  498  2  1 96
 0 1 9   4864M  3144M     0   0   0   0     0  44  25   0  118  363  458  0  2 98
 0 1 9   4864M  3144M   119   0   0   0   391  44  21   0  103  460  446  0  0 100
 
So, ada0p2 is my / and ada0p3 is my swap.
Ah, right. So it's not the swap (although the swap_pager messages do indicate memory/swap issues). The disk I/O graphs show there was a lot of writing to disk happening from Thursday until Monday. Did you start something around that Thursday, just before the write peak?
 
Ah, right. So it's not the swap (although the swap_pager messages do indicate memory/swap issues). The disk I/O graphs show there was a lot of writing to disk happening from Thursday until Monday. Did you start something around that Thursday, just before the write peak?

No, and this is the problem. We didn't do anything on Thursday, yet from that day until now the VPS has been incredibly slow and it is almost impossible to do anything from the command line. My question is: how can I be sure the problem is on the Proxmox side and not on the VM side?
 
My question is: how can I be sure the problem is on the Proxmox side and not on the VM side?
I'm more leaning towards an issue on the VPS itself. At some point around Thursday it started a massive amount of writing, basically choking the I/O. The swap_pager messages show it's been swapping a lot too and eventually it killed the influxd process because it ran out of memory. There would probably still be a lot of data locked up in buffers it's trying to write to disk. So it's going to take a while for the system to settle down again.

At this point I'm guessing things have gotten pretty messed up internally, so I would probably just reboot the machine. Just to make sure it's all "fresh" and clean again. Then I'd watch the system like a hawk. You might want to install some monitoring to keep a constant eye on things. With some good monitoring you might be able to catch it going off the rails before it burns down.
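
Even a crude cron-driven snapshot can serve as a stopgap until proper monitoring is in place (a rough sketch; the script path, log file and five-minute interval are arbitrary choices):

Code:
# /etc/crontab entry (example):
# */5  *  *  *  *  root  /root/io-snapshot.sh >> /var/log/io-snapshot.log 2>&1

#!/bin/sh
# io-snapshot.sh - append one timestamped I/O snapshot to the log
date
iostat -x -w 1 -c 2 ada0        # the second sample covers the last second
top -b -S -m io -o total 10     # current top I/O consumers
echo "----"

Comparing the timestamps of busy samples with the application logs should show what kicks off the writing.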
 
I'm more leaning towards an issue on the VPS itself. At some point around Thursday it started a massive amount of writing, basically choking the I/O. The swap_pager messages show it's been swapping a lot too and eventually it killed the influxd process because it ran out of memory. There would probably still be a lot of data locked up in buffers it's trying to write to disk. So it's going to take a while for the system to settle down again.

At this point I'm guessing things have gotten pretty messed up internally, so I would probably just reboot the machine. Just to make sure it's all "fresh" and clean again. Then I'd watch the system like a hawk. You might want to install some monitoring to keep a constant eye on things. With some good monitoring you might be able to catch it going off the rails before it burns down.

I rebooted the VM, but without good results. The VM is still incredibly slow, and I've stopped influx from starting at boot. So... what can I check now?
 
You can try running something like iostat -x 1 10. That will give you 10 samples, one second apart. Take note of the %b value; that's the busy indication of the I/O. Anything above 80% indicates there's a lot of I/O happening. Small peaks are fine (one measurement above and the next measurement a low value), but consistently and continuously above 80% means there's something really pushing the I/O. See if you can correlate that with the output of top -Sm io.
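
To catch the culprit in the act, the two commands can be combined into a small watch loop that only logs when the disk is busy (a rough sketch; ada0 and the 80% threshold come from the advice above):

Code:
#!/bin/sh
# Whenever ada0 is more than 80% busy, record who is doing the I/O.
while true; do
    busy=$(iostat -x -w 1 -c 2 ada0 | awk '$1 == "ada0" { b = $NF } END { print int(b) }')
    if [ "${busy:-0}" -gt 80 ]; then
        date
        echo "ada0 %b = ${busy}"
        top -b -S -m io -o total 10
    fi
done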
 
You can try running something like iostat -x 1 10. That will give you 10 samples, one second apart. Take note of the %b value; that's the busy indication of the I/O. Anything above 80% indicates there's a lot of I/O happening. Small peaks are fine (one measurement above and the next measurement a low value), but consistently and continuously above 80% means there's something really pushing the I/O. See if you can correlate that with the output of top -Sm io.

After the reboot the situation seems a little better, but sometimes the system becomes very slow like it did before the reboot. I've tried running iostat -x 1 10:
Code:
                        extended device statistics
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
ada0       1.3   3.0    24.5   137.8    0 2657.2  41
cd0        0.0   0.0     0.0     0.0    0   0.1   0
pass0      0.0   0.0     0.0     0.0    0   0.0   0
pass1      0.0   0.0     0.0     0.0    0   0.0   0
                        extended device statistics
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
ada0       0.0   0.0     0.0     0.0    0   0.0   0
cd0        0.0   0.0     0.0     0.0    0   0.0   0
pass0      0.0   0.0     0.0     0.0    0   0.0   0
pass1      0.0   0.0     0.0     0.0    0   0.0   0
                        extended device statistics
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
ada0       0.0   0.0     0.0     0.0    0   0.0   0
cd0        0.0   0.0     0.0     0.0    0   0.0   0
pass0      0.0   0.0     0.0     0.0    0   0.0   0
pass1      0.0   0.0     0.0     0.0    0   0.0   0
                        extended device statistics
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
ada0       0.0   0.0     0.0     0.0    0   0.0   0
cd0        0.0   0.0     0.0     0.0    0   0.0   0
pass0      0.0   0.0     0.0     0.0    0   0.0   0
pass1      0.0   0.0     0.0     0.0    0   0.0   0
                        extended device statistics
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
ada0       0.0   0.0     0.0     0.0    0   0.0   0
cd0        0.0   0.0     0.0     0.0    0   0.0   0
pass0      0.0   0.0     0.0     0.0    0   0.0   0
pass1      0.0   0.0     0.0     0.0    0   0.0   0
                        extended device statistics
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
ada0       0.0   1.0     0.0     0.5    0  45.2   5
cd0        0.0   0.0     0.0     0.0    0   0.0   0
pass0      0.0   0.0     0.0     0.0    0   0.0   0
pass1      0.0   0.0     0.0     0.0    0   0.0   0
                        extended device statistics
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
ada0       0.0   0.0     0.0     0.0    0   0.0   0
cd0        0.0   0.0     0.0     0.0    0   0.0   0
pass0      0.0   0.0     0.0     0.0    0   0.0   0
pass1      0.0   0.0     0.0     0.0    0   0.0   0
                        extended device statistics
device     r/s   w/s    kr/s    kw/s qlen svc_t  %b
ada0       0.0   0.0     0.0     0.0    0   0.0   0
cd0        0.0   0.0     0.0     0.0    0   0.0   0
pass0      0.0   0.0     0.0     0.0    0   0.0   0
pass1      0.0   0.0     0.0     0.0    0   0.0   0

and then top -Sm io:
Code:
last pid: 11117;  load averages:  1.70,  1.97,  1.87                                                                                                                  up 0+04:23:37  16:30:35
57 processes:  2 running, 54 sleeping, 1 waiting
CPU:  0.0% user,  0.0% nice,  0.4% system,  0.2% interrupt, 99.4% idle
Mem: 69M Active, 956M Inact, 295M Wired, 507M Buf, 2621M Free
Swap: 1638M Total, 1638M Free

  PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
   11 root           16    260      0      0      0      0   0.00% idle
    9 root            2      0      0      1      0      1 100.00% syncer
    0 root          178      0      0      0      0      0   0.00% kernel
   12 root           53      0      0      0      0      0   0.00% intr
11116 root            0      0      0      0      0      0   0.00% php
    7 root            6      0      0      0      0      0   0.00% bufdaemon
    2 root            1      0      0      0      0      0   0.00% cam
  591 redis          19      1      0      0      0      0   0.00% redis-server
 3813 mysql          11      0      0      0      0      0   0.00% mysqld
 2130 www             0      0      0      0      0      0   0.00% php-fpm
 4302 www             0      0      0      0      0      0   0.00% php-fpm
 1992 root            0      0      0      0      0      0   0.00% cron
 4301 www             0      0      0      0      0      0   0.00% php-fpm
  595 root            2      0      0      0      0      0   0.00% php-fpm
  577 root            2      0      0      0      0      0   0.00% svscan
  408 root            1      0      0      0      0      0   0.00% syslog-ng
  329 root           19      0      0      0      0      0   0.00% pf purge
  602 www             0      0      0      0      0      0   0.00% nginx
  604 www             4      0      0      0      0      0   0.00% nginx
   14 root           19      0      0      0      0      0   0.00% rand_harvestq
  583 root            0      0      0      0      0      0   0.00% supervise
    4 root            2      0      0      0      0      0   0.00% pagedaemon

The only strange thing I can find is that sometimes syncer and bufdaemon show 100% (in the PERCENT column), but I can't tell whether that could be the problem.
 
The syncer and bufdaemon will peak every now and then; that's normal behavior (see the note on their flush intervals after the output below). This, for example, is the output from one of our web servers (real hardware, dual Xeon, fast SAS/SATA controller):
Code:
last pid: 19055;  load averages:  1.30,  1.54,  1.64                                                       up 56+03:48:12  17:04:23
110 processes: 2 running, 107 sleeping, 1 waiting
CPU:  5.8% user,  0.0% nice,  0.2% system,  0.1% interrupt, 93.9% idle
Mem: 9547M Active, 50G Inact, 2725M Wired, 282M Cache, 1660M Buf, 172M Free
Swap: 4096M Total, 15M Used, 4080M Free

  PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
   11 root          184   8154      0      0      0      0   0.00% idle
12897 www           623     11      0      2      0      2   4.88% ruby23
12902 www           362     40      0      1      0      1   2.44% ruby23
12907 www           370     34      0      1      0      1   2.44% ruby23
12914 www           125     36      0      0      0      0   0.00% ruby23
12998 www            84     14      0      0      0      0   0.00% ruby23
12921 www            59     10      0      0      0      0   0.00% ruby23
12927 www            57      0      0      1      0      1   2.44% ruby23
13003 www            28      0      0      0      0      0   0.00% ruby23
    7 root            2      3      0      0      0      0   0.00% pagedaemon
   12 root         5430      2      0      0      0      0   0.00% intr
   17 root            2      0      0     32      0     32  78.05% syncer
82027 root          556     47      0      0      0      0   0.00% syslog-ng
   14 root           16      0      0      0      0      0   0.00% rand_harvestq
   13 root           52      0      0      0      0      0   0.00% geom
    2 root           82      0      0      0      0      0   0.00% cam
    6 root           96      0      0      0      0      0   0.00% g_raid Intel-e4205f
   15 root           44      0      0      0      0      0   0.00% usb
   16 root            7      0      0      4      0      4   9.76% bufdaemon
  808 root            6      0      0      0      0      0   0.00% zfskern
85275 zabbix          2      0      0      0      0      0   0.00% zabbix_agentd
12932 www             0      0      0      0      0      0   0.00% ruby23
12937 www             0      0      0      0      0      0   0.00% ruby23
12748 www           306      1      0      0      0      0   0.00% PassengerAgent
85281 zabbix          2      0      0      0      0      0   0.00% zabbix_agentd
12942 www             0      0      0      0      0      0   0.00% ruby23
    0 root            1      0      0      0      0      0   0.00% kernel
12947 www             0      0      0      0      0      0   0.00% ruby23
12952 www             0      0      0      0      0      0   0.00% ruby23
12964 www             0      0      0      0      0      0   0.00% ruby23
12959 www             0      0      0      0      0      0   0.00% ruby23
12974 www             0      0      0      0      0      0   0.00% ruby23
  752 root            0      0      0      0      0      0   0.00% sshd
   18 root            2      0      0      0      0      0   0.00% vnlru
85276 zabbix          0      0      0      0      0      0   0.00% zabbix_agentd
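
Those periodic peaks are simply the syncer flushing dirty buffers on its fixed schedule; the intervals are visible via sysctl (a minimal sketch; the values shown are the usual stock defaults):

Code:
sysctl kern.filedelay kern.dirdelay kern.metadelay
# kern.filedelay: 30    <- dirty file data is flushed roughly every 30 seconds
# kern.dirdelay: 29     <- directory updates
# kern.metadelay: 28    <- other metadata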
 
Perfect! Now everything works quite well (after the reboot and without influx), but sometimes the system is still very slow and gstat "tells me" this:

Code:
dT: 1.002s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      0      0      0    0.0      0      0    0.0    0.0| cd0
   76      1      0      0    0.0      1     32  11988  432.5| ada0
    0      0      0      0    0.0      0      0    0.0    0.0| ada0p1
   76      1      0      0    0.0      1     32  11988  432.5| ada0p2
    0      0      0      0    0.0      0      0    0.0    0.0| ada0p3
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/490ad9ec-fab0-11e6-a987-ed324707a782

The peak on ada0p2 and ada0 remains, and their values go up and down every 1/2 second, even though iostat -x 1 10 and top -Sm io don't show any strange values.


Could it be that they changed the storage of the VPS on Thursday? Because before that day everything worked fine for months without problems, even with influx, grafana and telegraf running.
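
One way to get a concrete number to hand to the Proxmox sysadmin is a quick sequential write test from inside the VM (a rough sketch; the file path and 256 MB size are arbitrary, and it does write real data, so make sure there is space):

Code:
# Write 256 MB of zeroes and time it, including the final sync to disk.
/usr/bin/time -h sh -c 'dd if=/dev/zero of=/tmp/ddtest bs=1m count=256 && sync'
rm /tmp/ddtest

On healthy storage this finishes in a few seconds; with write service times in the 10+ second range, as in the gstat output above, it will likely crawl, which is something the host side can verify against their own disk graphs.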
 