ZM on CentOS 7 ESXi VM getting disk full
Posted: Thu Mar 02, 2017 5:06 pm
I've been running into this issue from time to time: the VM runs out of disk space and stops, and I have to restore a snapshot to get it going again.
My ZM VM has a 1 TiB virtual disk, and the PurgeWhenFull filter is enabled and set to 89% of the disk space. I even set up an hourly cron job to alert me if disk utilization goes above 90%, but I never got an alert. The fill-up may have happened within the hour between cron runs; I tested the script for a full day with a low threshold and the alerts worked.
Today, after a couple of hours running, my partitions show only 11% of the root partition in use (I customized the partitioning to allocate the space to root so I wouldn't need to relocate ZoneMinder's storage):
Code:
[root@zoneminder ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/cl-root 1015574016 101567680 914006336 11% /
devtmpfs 1930500 0 1930500 0% /dev
tmpfs 1941156 587656 1353500 31% /dev/shm
tmpfs 1941156 16916 1924240 1% /run
tmpfs 1941156 0 1941156 0% /sys/fs/cgroup
/dev/sda1 1038336 233756 804580 23% /boot
/dev/mapper/cl-home 52403200 32964 52370236 1% /home
tmpfs 388232 0 388232 0% /run/user/0
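The hourly check described above can be sketched roughly as below. This is an assumption about what such a cron script might look like, not the poster's actual script; the threshold, recipient, and use of `mail` are all hypothetical.

```shell
#!/bin/sh
# Hypothetical sketch of an hourly disk-space alert (run from cron).
# Threshold and mail recipient are assumptions, not from the post.
THRESHOLD=90

# Extract the root filesystem's Use% as a bare number, e.g. "11".
USAGE=$(df --output=pcent / | tail -1 | tr -dc '0-9')

if [ "$USAGE" -gt "$THRESHOLD" ]; then
    echo "Root filesystem at ${USAGE}% on $(hostname)" \
        | mail -s "Disk space alert" root
fi
```

Note that a check like this only samples once per run, so a burst that fills the disk between two hourly runs would never trigger an alert, which matches the behavior described above.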
Is there anything in the logs that would help me track this down? I'd assume that if the PurgeWhenFull filter were running correctly, I'd never end up in this situation.
The problem is that when the disk fills up, the VM won't even boot, so I can't get in to read the logs or free up space; all I can do is restore a snapshot.
Any thoughts appreciated.