Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Discussions related to the 1.36.x series of ZoneMinder
ergamus
Posts: 27
Joined: Sat May 29, 2021 10:09 am

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by ergamus »

One of my HikVision cameras runs completely fine with the auto buffer size, capping out at around 1GiB of memory usage. My other unknown, hacked Goke Chinese CloudCam AliExpress piece of garbage throws errors even with a max buffer size of 200. Using the auto buffer size keeps increasing this to about 4-5GB before zmwatch decides no new images have arrived and resets all my cameras again, which is odd, considering I'm looking at the network traffic and there is no drop in activity.
fran.roo
Posts: 5
Joined: Mon May 17, 2021 1:30 am

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by fran.roo »

Running
ZM 1.36.3 under Ubuntu Server 20.04.2
2 cameras 1920 x 1072

I was getting the following error in the logs for a third Trendnet camera running at 1920 x 1080:
You have set the max video packets in the queue to 2. The queue is full. Either Analysis is not keeping up or your camera's keyframe interval is larger than this setting. We are dropping packets.

While I could view what I thought was a live view, the Date/Time display wasn't advancing; it was just a frozen image.
I slowly increased the Maximum Image Buffer Size (frames) until I chased the errors out of the log.
Maximum Image Buffer Size (frames) is now set to 35.
ergamus
Posts: 27
Joined: Sat May 29, 2021 10:09 am

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by ergamus »

Also getting memory leaks, again I believe from one specific camera:
"Killed /system.slice/zoneminder.service due to swap used (7346442240) / total (8162111488) being more than 90.00%"

The swap device is zram with zstd, so whatever memory is leaking, it's highly compressible. If it were old frames being held in memory, it would most likely have been reaped by systemd when it hit around 4GB.
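
For reference, that kind of cap can be set with a systemd drop-in along these lines; the 4G figure is purely illustrative, adjust for your own box:

Code: Select all

# /etc/systemd/system/zoneminder.service.d/memory.conf
# Illustrative sketch only: cap the whole service so a leaking zmc gets
# killed well before it drags the box into swap.
[Service]
MemoryMax=4G
# Older cgroup v1 systems use MemoryLimit= instead.
Run systemctl daemon-reload and restart zoneminder for it to take effect.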

Another thing that helped me with the packet drops from a full buffer was setting the analysis FPS higher than the actual capture FPS. The camera giving me the most issues sends at 12.5 FPS, but I've got analysis set at 20.
Goaliegeek
Posts: 2
Joined: Sat Jun 05, 2021 9:10 pm

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by Goaliegeek »

I am also having this issue since going from 1.35 to 1.36. Since 1.36 (currently on 1.36.3), without changing any settings on my cameras, I've noticed that live feeds and recorded videos skip, pause, and show artifacts on motion. A person walking will freeze while trees in another part of the video keep moving fine, then it catches up. It seems like a buffering issue. RAM usage is also much, much higher than before. The two most common, recurring errors in my log are:

2021-06-05 15:14:17 zmc_m1 1568 WAR You have set the max video packets in the queue to 122. The queue is full. Either Analysis is not keeping up or your camera's keyframe interval is larger than this setting. We are dropping packets. zm_packetqueue.cpp 92

2021-06-05 15:14:17 zmc_m1 1568 ERR Unable to free up older packets. Not queueing this video packet. zm_packetqueue.cpp 135

The only way to fix the video and keep everything smooth was to set Maximum Image Buffer Size (frames) to 0, but then my system starts to crash and the log goes crazy with every error in the book.
manjotsc
Posts: 9
Joined: Sat Aug 15, 2020 11:16 am

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by manjotsc »

Any fix?
iconnor
Posts: 2862
Joined: Fri Oct 29, 2010 1:43 am
Location: Toronto

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by iconnor »

Anyone up to trying current master?

The only thing we are doing differently is waiting instead of dropping packets. Hopefully it is a better solution than dropping the packet entirely.

Have also been looking into why conversion from yuv420p to rgba is so slow. Haven't figured it out though.

What we NEED is a yuv motion detector. Then we wouldn't need to do that conversion and could use 1/4 of the RAM.
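
Rough per-frame numbers for a single 1920x1080 image, just as a back-of-the-envelope illustration:

Code: Select all

1920 x 1080 x 4   bytes (rgba)         ~ 8.3 MB per frame
1920 x 1080 x 1.5 bytes (yuv420p)      ~ 3.1 MB per frame
1920 x 1080 x 1   byte  (Y plane only) ~ 2.1 MB per frame, about 1/4 of rgba
Motion detection that only looked at the luma (Y) plane would never need the rgba copy at all.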
theogre
Posts: 12
Joined: Tue May 25, 2021 4:51 am

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by theogre »

iconnor wrote: Sat Jun 05, 2021 10:14 pm Anyone up to trying current master?

The only thing we are doing differently is waiting instead of dropping packets. Hopefully it is a better solution than dropping the packet entirely.

Have also been looking into why conversion from yuv420p to rgba is so slow. Haven't figured it out though.

What we NEED is a yuv motion detector. Then we wouldn't need to do that conversion and could use 1/4 of the RAM.
I would happily try whatever version/update is necessary to get back on track, because at this point I'm having to manually restart zoneminder daily to keep things operational.
dougmccrary
Posts: 1150
Joined: Sat Aug 31, 2019 7:35 am
Location: San Diego

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by dougmccrary »

I'm using 1.37 with 14 assorted cameras, running at 5 to 12 fps and 640 x 362 to 1280 x 720.

I seem to have more swap usage on 1.37 than on 1.36.
The 1.36 machine is a Xeon @ 2.8GHz, 6 cores, 6GB RAM, 3GB swap on Ubuntu 18.04, while the 1.37 machine is a Phenom @ 3.3GHz, 2 cores, 8GB RAM, 4GB swap on Ubuntu 20.04. Swappiness is at 10 on both.

The camera settings are mostly the same, except for a couple on the 1.36 machine where I'm trying out the setting suggestions people have mentioned.
theogre
Posts: 12
Joined: Tue May 25, 2021 4:51 am

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by theogre »

Just got around to updating zoneminder to 1.37 from the master branch. Will keep the thread updated as things progress.

Would like to add that over the past few days, things have been getting worse with 1.36.3. RAM was maxing out after about 30 hours; now it's taking less than 12, and I'm getting smearing/glitchiness in monitors that I was not having just a few days ago.

However, with the 1.37 update, hopefully some of this will be solved. It's 2:39pm local time; I'll update here if there's any change.

UPDATE: 30 minutes later we are up 500MB and getting errors about freeing up packets.

Code: Select all

2021-06-07 15:10:14	zmc_m9	ogreraid	1301	ERR	Unable to free up older packets. Waiting.	zm_packetqueue.cpp	136
2021-06-07 15:13:28	zmc_m9	ogreraid	1301	WAR	You have set the max video packets in the queue to 51. The queue is full. Either Analysis is not keeping up or your camera's keyframe interval is larger than this setting. We are dropping packets.
Watching htop, I can see the zmc processes for monitors with motion detection rising in resident (RES) memory, starting around 409M and steadily increasing; monitor 9 in question started at 409M and has risen to 588M in 30 minutes.

I increased the buffer from 51, which is already much higher than it was set in 1.34, to 150 for testing; however, I don't see a reason 51 wouldn't be enough, as all other buffers besides "Image Buffer Size = 5" are set to 0. Again, after restarting the monitor, RAM steadies at 409M, then after a short period jumps to 487M, and continues this pattern, slowly increasing until the server is maxed out.
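
For anyone wanting to watch the same numbers outside of htop, something along these lines should work (assuming the usual procps ps):

Code: Select all

# Print the resident memory (RSS, in KB) of every zmc process, refreshed once a minute
watch -n 60 'ps -C zmc -o pid,rss,etime,cmd --sort=-rss'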
theogre
Posts: 12
Joined: Tue May 25, 2021 4:51 am

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by theogre »

3.5 hour update.
With the upgrade to 1.37, things are now even worse. The monitors that were rising in RAM usage are still rising, but now even my 1080p 6fps streams are at almost 3x their normal usage, and they are now causing the server to run out of RAM, consuming 495-671M each.

Restarting zoneminder brings the RAM back down, and after about 5 minutes things are stable, then the issues start all over again.

Secondary question: is there any way for me to roll back to 1.34, since any version higher than that is almost unusable due to the memory leaks?
Bearded_Beef
Posts: 18
Joined: Thu Mar 11, 2021 5:51 pm

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by Bearded_Beef »

I ended up completely nuking my setup and went back to 1.34 (stupid me for auto-deleting every backup from then). Back to being rock solid. Bonus: the event server installed easily and flawlessly (after banging my head for weeks trying to update it). I ended up spending a few hours replacing what I had been trying to fix for weeks. I still have the old image in case any info is needed; I can boot back up into 1.36.3.
yannlieb
Posts: 14
Joined: Fri Mar 19, 2021 9:40 am

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by yannlieb »

I tried to play with various setups.
One camera's zmc process jumps to over 2.4GB, another reaches 1GB, and the other two sit around 700MB. This generates repeated crashes as soon as a detection process starts, since the system only has 6GB. I tried limiting the FPS, removing the limits, setting an Image Buffer Size (and then hitting the non-emptying queue), and removing it again. I can't really find a pattern. The cameras are cheap 720p ones.
theogre
Posts: 12
Joined: Tue May 25, 2021 4:51 am

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by theogre »

Speaking with iconnor about the issue, it seems to stem from MySQL. From what I can tell, it only affects certain systems, so if you, like me, are suffering from it, unfortunately the only mitigation seems to be remaining on 1.34, or using a script to detect when RAM is full and restart the zoneminder service.
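
As a rough sketch only (not an official ZoneMinder tool, and the 10% threshold is just an example), such a watchdog could look like this, run every few minutes from cron:

Code: Select all

#!/bin/bash
# restart-zm-if-low-mem.sh -- rough sketch, adjust threshold/service name to taste.
# Restarts ZoneMinder when available memory drops below a percentage of total.
THRESHOLD_PCT=10

total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
avail_pct=$(( avail_kb * 100 / total_kb ))

if [ "$avail_pct" -lt "$THRESHOLD_PCT" ]; then
    logger "ZoneMinder watchdog: only ${avail_pct}% RAM available, restarting zoneminder"
    systemctl restart zoneminder
fi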
Magic919
Posts: 1381
Joined: Wed Sep 18, 2013 6:56 am

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by Magic919 »

Worth mentioning that the discussion is on the Slack channel for any interested parties to take a look.
ergamus
Posts: 27
Joined: Sat May 29, 2021 10:09 am

Re: Memory Leak/Increase and Choppy events due to Maximum Image Buffer Size

Post by ergamus »

theogre wrote: Wed Jun 09, 2021 1:33 am Speaking with iconnor about the issue, it seems to stem from MySQL. From what I can tell, it only affects certain systems, so if you, like me, are suffering from it, unfortunately the only mitigation seems to be remaining on 1.34, or using a script to detect when RAM is full and restart the zoneminder service.
So what is it? A race condition? The analysis part of ZM getting jammed up because it's waiting for INSERTs to hit the database? Can this be solved by optimizing for I/O? F*** it, I'll put the database on a separate SSD to fix this if necessary. The way you've written your post, it seems like it's unfixable.

edit: I've made some tweaks to my config: increased the buffer pool size, allowed up to 50% of the buffer pool for write buffering (ALL: inserts/deletes/updates...), increased the I/O capacity values to better reflect the storage, disabled the doublewrite buffer, and doubled the number of read/write/purge I/O threads. Let's see if it improves the situation.
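
For reference, the rough shape of those changes as a my.cnf / conf.d snippet; the exact values are guesses tuned to my own hardware, not recommendations:

Code: Select all

# zoneminder-tuning.cnf -- illustrative values only
[mysqld]
innodb_buffer_pool_size       = 1G    # bigger buffer pool
innodb_change_buffering       = all   # buffer inserts/deletes/updates/purges
innodb_change_buffer_max_size = 50    # allow up to 50% of the pool for change buffering
innodb_io_capacity            = 1000  # reflect what the storage can actually do
innodb_io_capacity_max       = 2000
innodb_doublewrite            = 0     # doublewrite buffer disabled
innodb_read_io_threads        = 8     # doubled from the default of 4
innodb_write_io_threads       = 8
innodb_purge_threads          = 8     # doubled (defaults vary by version)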

edit2: Right off the bat, ZoneMinder is more responsive; it's a bit too early to tell about the memory leaks. It almost seems like it's reduced CPU usage. Worth optimizing your MySQL/MariaDB settings!

edit3: Memory is still exceeding any limits set by the user. Back to the */6 hour crontab restart entry.
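
i.e. something like this in /etc/crontab (the service name may differ on other distros):

Code: Select all

# Stopgap: restart ZoneMinder every 6 hours until the leak is sorted
0 */6 * * * root systemctl restart zoneminder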