
High disk utilization

Posted: Wed Jul 11, 2018 9:39 pm
by Waylorn
I am recording 20 cameras 24x7: 15 at 1280x720 and 5 at 1080p. I have an mdadm RAID 5 with 9 TB that I want to put on rotation, continuously recording all the cameras. The system seems to be running OK, but the disk is at 74%. Is there anything I can do to lower this besides dropping the resolution? I am at 5 fps on all the cameras using h264/ffmpeg.
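For context, a back-of-envelope estimate of how fast a setup like this fills 9 TB. The per-camera bitrates below are illustrative assumptions (not measured values); substitute the actual average bitrates your cameras produce at 5 fps:

```python
# Rough storage estimate for continuous H.264 recording.
# Bitrates are assumptions for illustration, not measurements.

def daily_gb(bitrate_mbps: float) -> float:
    """GB recorded per day at a given average bitrate (Mbit/s)."""
    return bitrate_mbps / 8 * 86400 / 1000  # Mbit/s -> MB/s -> MB/day -> GB/day

cams_720p, rate_720p = 15, 0.5    # assumed avg Mbit/s per 1280x720 camera at 5 fps
cams_1080p, rate_1080p = 5, 1.0   # assumed avg Mbit/s per 1080p camera at 5 fps

total_per_day = cams_720p * daily_gb(rate_720p) + cams_1080p * daily_gb(rate_1080p)
array_gb = 9000  # 9 TB array

print(f"~{total_per_day:.0f} GB/day, ~{array_gb / total_per_day:.0f} days of retention")
```

Under these assumed bitrates that works out to roughly 135 GB/day, or about two months of rotation on 9 TB; real numbers depend heavily on scene complexity and encoder settings.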

Re: High disk utilization

Posted: Thu Jul 12, 2018 12:48 am
by bbunge
Dropping down from 32-bit color may help a bit.
Yes, lower resolution will help.
You should be OK with the 74%, give or take drive space. Just check the database resources (mysqltuner) to see if you need to allocate more innodb_buffer_pool_size; it might also be good to put the database on another drive. Set up a purge filter based on days and adjust the days until your RAID gets up to about 90% (a purge based on days will run at midnight). Do not depend on the purge-when-full filter at 95% to keep the system running, because with 20 cameras running you could add events faster than a percentage-based purge can remove them.
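For the database side, a sketch of where that setting lives. The path and the value here are illustrative, not recommendations; run mysqltuner and let it suggest a figure for your RAM:

```ini
# /etc/mysql/mysql.conf.d/mysqld.cnf (path varies by distro) -- illustrative values
[mysqld]
# Size to suit available RAM; mysqltuner will suggest a figure.
innodb_buffer_pool_size = 1G

# To move the database to another drive, stop MySQL, copy the data
# directory over, then point datadir at the new location (hypothetical path):
# datadir = /var/lib/mysql-ssd
```

Restart MySQL after changing this; the buffer pool is allocated at startup on older versions.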

If you really do not need the cameras recording all day you can set up a cron job to switch between modect and record or mocord.
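One way to do that switch from cron is with saved run states: define two states in the ZoneMinder console (say, one with monitors in Record and one in Modect) and flip between them with zmpkg.pl. The state names and times below are hypothetical:

```crontab
# Example crontab (crontab -e) switching ZoneMinder run states on a schedule.
# "daytime" and "nighttime" are hypothetical run states you would first save
# from the ZoneMinder console (e.g. monitors in Modect vs. Record/Mocord).
0 8  * * * /usr/bin/zmpkg.pl daytime
0 20 * * * /usr/bin/zmpkg.pl nighttime
```

Check the actual path of zmpkg.pl on your install (`which zmpkg.pl`) before relying on this.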

Re: High disk utilization

Posted: Fri Jul 13, 2018 8:44 pm
by Waylorn
I have that filter configured and am going to let it run over the weekend. One thing that seems to be going on is that ZoneMinder is seeing the disk size of the / disk, not the RAID array mounted as /videos. How can I get the web interface to see the full size of my array?

Re: High disk utilization

Posted: Sat Jul 14, 2018 2:08 am
by bbunge
Are you using the systemd mount? If not, I recommend you set it up again to use systemd. Instructions are in the ZoneMinder Ubuntu wiki.
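For reference, a minimal sketch of what such a systemd mount unit looks like. The device, filesystem, and mount point here are assumptions; the unit file name must match the mount path (see `systemd-escape -p --suffix=mount /videos`), and the ZoneMinder wiki's instructions should take precedence:

```ini
# /etc/systemd/system/videos.mount -- hedged sketch, adjust to your setup
[Unit]
Description=ZoneMinder event storage

[Mount]
What=/dev/md0          # your mdadm array device (assumption)
Where=/videos
Type=ext4              # use your actual filesystem
Options=defaults

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now videos.mount`, then confirm with `df -h /videos` that ZoneMinder's storage path reports the array's size.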

Re: High disk utilization

Posted: Sat Jul 28, 2018 5:15 am
by doubleosix
Are you sure that the disk is actually full? I was having an issue where ZoneMinder was showing 95% disk usage after only a few hours. I found out that my ZoneMinder was recording to the "root" directory/partition, and that "root" was only 3.3 GB instead of the 4.4 TB "user" directory/partition.

I ended up having to log in as root, back up the "user" directory, delete the "user" directory/partition, recreate the user partition as 400 GB, grow the "root" partition to 4 TB, and then create a new mount point for the "user" directory. After that I had to restore the backup of the "user" directory onto the new 400 GB partition. The whole process took about 20 minutes, including finding instructions on how to back up, repartition, and restore everything.

Here is the link to the instructions I used. I use CentOS, so they may not work if you use another Linux distribution. It doesn't say so in the instructions, but you must be logged in as root in order to unmount the /user partition.
https://serverfault.com/questions/77192 ... on-centos7

Re: High disk utilization

Posted: Sat Jul 28, 2018 7:15 pm
by bbunge
Using multiple partitions is old school. Modern distros use one partition by default. Well, maybe two, one swap and one ext4. Bionic uses one partition with a swap file.

Re: High disk utilization

Posted: Sun Aug 05, 2018 1:58 am
by Lee Sharp
bbunge wrote:
Sat Jul 28, 2018 7:15 pm
Using multiple partitions is old school. Modern distros use one partition by default. Well, maybe two, one swap and one ext4. Bionic uses one partition with a swap file.
This only works if you have one disk. If you want videos on an NFS mount on the storage array and you want MySQL to run on the NVMe drive, you will have more than one "partition."
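A multi-mount layout like that is just extra entries in /etc/fstab (or equivalent systemd mount units). The hostname, export path, and device names below are placeholders for illustration:

```ini
# /etc/fstab -- illustrative entries; hostname, export, and devices are assumptions
# NFS export on the storage array for ZoneMinder events:
storage01:/export/zm   /videos          nfs    defaults,_netdev   0 0
# Local NVMe partition holding the MySQL data directory:
/dev/nvme0n1p1         /var/lib/mysql   ext4   defaults,noatime   0 2
```

The `_netdev` option delays the NFS mount until the network is up; without it, boot can hang waiting on the storage array.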