"Shared data size conflict in shared_data for monitor"

Discussions related to the 1.36.x series of ZoneMinder
keithp
Posts: 16
Joined: Sat Aug 06, 2022 12:44 am

Re: "Shared data size conflict in shared_data for monitor"

Post by keithp »

Thanks for your efforts, iconnor! I installed .24 last night around 19:00 GMT-4, and it did seem better initially. However, I think there is still something weird going on.

I have a high load now, high enough to be noticeable; logging into the system this morning took many minutes. The console is much slower to respond (when it responds at all). When I try to pull up a montage (I have 3 cams), I now consistently get "This page isn't working" in Chrome. That is also the message I get when the console does not come up.

What prompted me to log in was that I was seeing a lot of OOM messages (I wasn't sure how much to include, so forgive the data blast)...
Aug 7 08:10:49 zm-nvr kernel: [47648.219770] zmc invoked oom-killer: gfp_mask=0x100dca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), order=0, oom_score_adj=0
Aug 7 08:10:49 zm-nvr kernel: [47648.219791] oom_kill_process.cold+0xb/0x10
Aug 7 08:10:49 zm-nvr kernel: [47648.219919] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Aug 7 08:10:49 zm-nvr kernel: [47648.220061] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=12151,uid=33
Aug 7 08:10:49 zm-nvr kernel: [47648.220108] Out of memory: Killed process 12151 (zmc) total-vm:7445376kB, anon-rss:6992632kB, file-rss:0kB, shmem-rss:10808kB, UID:33 pgtables:14132kB oom_score_adj:0
Aug 7 08:23:43 zm-nvr kernel: [48422.979391] connection invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Aug 7 08:23:43 zm-nvr kernel: [48422.979407] oom_kill_process.cold+0xb/0x10
Aug 7 08:23:43 zm-nvr kernel: [48422.979529] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Aug 7 08:23:43 zm-nvr kernel: [48422.979603] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=12297,uid=33
Aug 7 08:23:43 zm-nvr kernel: [48422.979638] Out of memory: Killed process 12297 (zmc) total-vm:7247536kB, anon-rss:6748884kB, file-rss:48kB, shmem-rss:10808kB, UID:33 pgtables:13656kB oom_score_adj:0
Aug 7 08:51:10 zm-nvr kernel: [50068.876139] systemd-journal invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=-250
Aug 7 08:51:10 zm-nvr kernel: [50068.876156] oom_kill_process.cold+0xb/0x10
Aug 7 08:51:10 zm-nvr kernel: [50068.876302] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Aug 7 08:51:10 zm-nvr kernel: [50068.876392] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=12476,uid=33
Aug 7 08:51:10 zm-nvr kernel: [50068.876427] Out of memory: Killed process 12476 (zmc) total-vm:6071536kB, anon-rss:5615604kB, file-rss:1968kB, shmem-rss:10808kB, UID:33 pgtables:11408kB oom_score_adj:0
Aug 7 09:04:52 zm-nvr kernel: [50891.002854] ib_pg_flush_co invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Aug 7 09:04:52 zm-nvr kernel: [50891.002871] oom_kill_process.cold+0xb/0x10
Aug 7 09:04:52 zm-nvr kernel: [50891.003028] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Aug 7 09:04:52 zm-nvr kernel: [50891.003141] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=12628,uid=33
Aug 7 09:04:52 zm-nvr kernel: [50891.003182] Out of memory: Killed process 12628 (zmc) total-vm:7574176kB, anon-rss:7132952kB, file-rss:2068kB, shmem-rss:10808kB, UID:33 pgtables:14412kB oom_score_adj:0
Aug 7 09:17:31 zm-nvr kernel: [51650.298751] zmc invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Aug 7 09:17:31 zm-nvr kernel: [51650.298765] oom_kill_process.cold+0xb/0x10
Aug 7 09:17:31 zm-nvr kernel: [51650.298910] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Aug 7 09:17:31 zm-nvr kernel: [51650.299062] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=12684,uid=33
Aug 7 09:17:31 zm-nvr kernel: [51650.299109] Out of memory: Killed process 12684 (zmc) total-vm:7182236kB, anon-rss:6642200kB, file-rss:1540kB, shmem-rss:10808kB, UID:33 pgtables:13440kB oom_score_adj:0
Aug 7 09:28:31 zm-nvr kernel: [52310.179085] ib_pg_flush_co invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Aug 7 09:28:31 zm-nvr kernel: [52310.179101] oom_kill_process.cold+0xb/0x10
Aug 7 09:28:31 zm-nvr kernel: [52310.179229] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Aug 7 09:28:31 zm-nvr kernel: [52310.179303] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=12801,uid=33
Aug 7 09:28:31 zm-nvr kernel: [52310.179329] Out of memory: Killed process 12801 (zmc) total-vm:7769972kB, anon-rss:7235252kB, file-rss:1820kB, shmem-rss:10808kB, UID:33 pgtables:14628kB oom_score_adj:0
Aug 7 09:46:21 zm-nvr kernel: [53380.631407] zmwatch.pl invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Aug 7 09:46:21 zm-nvr kernel: [53380.631425] oom_kill_process.cold+0xb/0x10
Aug 7 09:46:21 zm-nvr kernel: [53380.631588] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Aug 7 09:46:21 zm-nvr kernel: [53380.631735] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=12863,uid=33
Aug 7 09:46:21 zm-nvr kernel: [53380.631769] Out of memory: Killed process 12863 (zmc) total-vm:6993912kB, anon-rss:6538912kB, file-rss:1164kB, shmem-rss:10808kB, UID:33 pgtables:13236kB oom_score_adj:0
Aug 7 10:02:53 zm-nvr kernel: [54371.796737] rm invoked oom-killer: gfp_mask=0x100dca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), order=0, oom_score_adj=0
Aug 7 10:02:53 zm-nvr kernel: [54371.796757] oom_kill_process.cold+0xb/0x10
Aug 7 10:02:53 zm-nvr kernel: [54371.796856] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Aug 7 10:02:53 zm-nvr kernel: [54371.796968] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=13027,uid=33
Aug 7 10:02:53 zm-nvr kernel: [54371.797007] Out of memory: Killed process 13027 (zmc) total-vm:8040120kB, anon-rss:7476596kB, file-rss:2688kB, shmem-rss:10808kB, UID:33 pgtables:15084kB oom_score_adj:0
Aug 7 10:11:02 zm-nvr kernel: [54861.869364] ib_io_wr-1 invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Aug 7 10:11:03 zm-nvr kernel: [54861.869380] oom_kill_process.cold+0xb/0x10
Aug 7 10:11:03 zm-nvr kernel: [54861.869504] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Aug 7 10:11:03 zm-nvr kernel: [54861.869579] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=13145,uid=33
Aug 7 10:11:03 zm-nvr kernel: [54861.869635] Out of memory: Killed process 13145 (zmc) total-vm:9423028kB, anon-rss:8943084kB, file-rss:0kB, shmem-rss:10808kB, UID:33 pgtables:17984kB oom_score_adj:0
Aug 7 12:23:35 zm-nvr kernel: [62814.554880] zmdc.pl invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Aug 7 12:23:35 zm-nvr kernel: [62814.554900] oom_kill_process.cold+0xb/0x10
Aug 7 12:23:35 zm-nvr kernel: [62814.555093] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Aug 7 12:23:35 zm-nvr kernel: [62814.555244] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/zoneminder.service,task=zmc,pid=13921,uid=33
Aug 7 12:23:35 zm-nvr kernel: [62814.555293] Out of memory: Killed process 13921 (zmc) total-vm:8895608kB, anon-rss:8374020kB, file-rss:344kB, shmem-rss:10808kB, UID:33 pgtables:16888kB oom_score_adj:0
I don't see why I would have memory issues now. The configuration has been 16 GB of RAM with 75% (12 GB) assigned to /dev/shm at boot. According to "free -ht" I have 11Gi under "available". When I can pull up the console, it continues to say /dev/shm is at 0%, but the 3 zm.mmap files are there; each is 11 MB.
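
For anyone checking the same things, something like this shows the tmpfs size, overall memory, and the per-monitor mapped files (paths assume a stock Ubuntu ZoneMinder install):
df -h /dev/shm                      # tmpfs size vs. what is actually in use
free -ht                            # totals including "available" memory
ls -lh /dev/shm/zm.mmap.*           # one mapped file per monitor
journalctl -k | grep -i "out of memory"   # kernel OOM kills, as in the log above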

I looked at the "Upgrade ZM 1.34 to 1.36 Ubuntu 20.04" thread again, and other than the items mentioned there, is there something about 1.36 that is majorly different from 1.34? It feels like I have some sort of tuning or configuration problem again.
iconnor
Posts: 2862
Joined: Fri Oct 29, 2010 1:43 am
Location: Toronto

Re: "Shared data size conflict in shared_data for monitor"

Post by iconnor »

Did you reduce your ImageBuffers to something like 3? You also don't need to allocate a larger shm; we don't use shm very much anymore. If you haven't set ImageBuffers then you will be using double the RAM.

You may also want to set MaxImageBuffers to roughly whatever your old ImageBuffers setting was, or higher. It will only be used if needed.
Magic919
Posts: 1381
Joined: Wed Sep 18, 2013 6:56 am

Re: "Shared data size conflict in shared_data for monitor"

Post by Magic919 »

It might be a problem in 1.36.24. One of my cameras grew to using about 6 GB of RAM, whereas 1-2 GB was the old max.

I've rolled back again.
keithp
Posts: 16
Joined: Sat Aug 06, 2022 12:44 am

Re: "Shared data size conflict in shared_data for monitor"

Post by keithp »

iconnor wrote: Sun Aug 07, 2022 5:50 pm Did you reduce your ImageBuffers to something like 3? You also don't need to allocate a larger shm; we don't use shm very much anymore. If you haven't set ImageBuffers then you will be using double the RAM.

You may also want to set MaxImageBuffers to roughly whatever your old ImageBuffers setting was, or higher. It will only be used if needed.
It was set to 3 (which I think was the default as a result of the update), so I bumped that to 6 for all the cams. MaxImageBuffers I had increased to 1024 from my previous ImageBuffers values because I saw in the logs that it was too low when I was having problems with the .23 release. I also dropped /dev/shm to 25% of memory (in my case 4 GB), but as you said, that is not used as much now, and as expected, doubling ImageBuffers doubled /dev/shm usage to 64 MB.
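
That 64 MB is consistent with the buffer math, assuming 4 bytes per pixel for a 720p (1280x720) stream; a rough sketch:
# one 720p frame at 4 bytes/pixel
echo $((1280 * 720 * 4))        # 3686400 bytes, ~3.5 MB per frame
# per-monitor mapped file: ImageBuffers x frame size (plus a small shared-data header)
echo $((6 * 1280 * 720 * 4))    # ~21 MB per cam; 3 cams = ~64 MB, as observed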

So far, the load is much lower at 2 to 6 and the system is much more responsive. The montage and log links now respond faster than they did in version 1.34.x. In fact, the montage for all 3 cams doesn't appear to freeze every couple of seconds like it used to. The montage view does seem to get "locked", though, and I cannot go back to the console link until I close that tab or window, which generates this:
8/7/22, 11:39:45 PM EDT web_php 20624 ERR socket_sendto( /run/zm/zms-003668s.sock ) failed: Connection refused /usr/share/zoneminder/www/includes/functions.php 1880
I typically open montage in another window and close it when I'm done. This does seem like an appropriate message after a window is closed though. I'm just not sure why I could not open the console link after the montage link in the same window. Still, this is looking much better now.

The real test will be during the day when I have more alerts going off. Typically, all that analysis (mocord on 2 cams, record on 1) is what stresses the system and keeps me from having the montage up for very long. Increasing ImageBuffers again seems like it might help with that.

I'll report how things go either way. Thanks again, iconnor 👍
keithp
Posts: 16
Joined: Sat Aug 06, 2022 12:44 am

Re: "Shared data size conflict in shared_data for monitor"

Post by keithp »

Ok, something is indeed still off...
[Mon Aug 8 06:16:47 2022] Out of memory: Killed process 23151 (zmc) total-vm:6609636kB, anon-rss:6163948kB, file-rss:1036kB, shmem-rss:21608kB, UID:33 pgtables:12516kB oom_score_adj:0
[Mon Aug 8 07:28:42 2022] Out of memory: Killed process 23753 (zmc) total-vm:7389316kB, anon-rss:6948588kB, file-rss:1480kB, shmem-rss:21608kB, UID:33 pgtables:14064kB oom_score_adj:0
[Mon Aug 8 08:23:02 2022] Out of memory: Killed process 24247 (zmc) total-vm:7258292kB, anon-rss:6715832kB, file-rss:2688kB, shmem-rss:21608kB, UID:33 pgtables:13612kB oom_score_adj:0
[Mon Aug 8 10:51:02 2022] Out of memory: Killed process 25591 (zmc) total-vm:8566068kB, anon-rss:8066496kB, file-rss:1568kB, shmem-rss:21608kB, UID:33 pgtables:16260kB oom_score_adj:0
[Mon Aug 8 11:04:24 2022] Out of memory: Killed process 25616 (zmc) total-vm:7857656kB, anon-rss:7348536kB, file-rss:1492kB, shmem-rss:21608kB, UID:33 pgtables:14852kB oom_score_adj:0
[Mon Aug 8 11:19:14 2022] Out of memory: Killed process 25676 (zmc) total-vm:6741300kB, anon-rss:6250016kB, file-rss:1644kB, shmem-rss:21608kB, UID:33 pgtables:12692kB oom_score_adj:0
[Mon Aug 8 11:49:55 2022] Out of memory: Killed process 25962 (zmc) total-vm:6602844kB, anon-rss:6079932kB, file-rss:660kB, shmem-rss:21608kB, UID:33 pgtables:12360kB oom_score_adj:0
[Mon Aug 8 12:00:21 2022] Out of memory: Killed process 25953 (zmc) total-vm:7922920kB, anon-rss:7461116kB, file-rss:1104kB, shmem-rss:21608kB, UID:33 pgtables:15200kB oom_score_adj:0
During this entire time, I was not interacting with the system. I was also getting alerts as activity increased (I have no sense yet of how fast the analysis is proceeding or whether there is a memory/CPU resource issue).

As soon as I started to interact with the system, there was noticeable slowness, though not as bad as before the adjustments I made 2 days ago. The console timed out twice, but I was still able to pull up the montage. I did see the system load go to about 9 around then, but the montage still works as well as it did after I bumped ImageBuffers up. However, there are pauses, which I expected since there is more analysis going on during the day. The worst pause I observed was 13 seconds, resulting in no alert being asserted, which was the correct decision.

I bumped ImageBuffers up to 12 on the 2 cams that are doing analysis to see if that stops the out-of-memory issue, but if there is another suggestion I'll try that too.
iconnor
Posts: 2862
Joined: Fri Oct 29, 2010 1:43 am
Location: Toronto

Re: "Shared data size conflict in shared_data for monitor"

Post by iconnor »

I'd really like to see screen caps of your monitor settings.

ImageBuffers are not used in analysis, only in viewing. So the default is 3, but I've seen some choppiness at that, so sometimes I've gone up to 5.
You should probably set a MaxImageBuffer count to prevent running out of memory.

I have to assume that these cams have very large keyframe intervals.
keithp
Posts: 16
Joined: Sat Aug 06, 2022 12:44 am

Re: "Shared data size conflict in shared_data for monitor"

Post by keithp »

iconnor wrote: Wed Aug 10, 2022 4:41 pm I'd really like to see screen caps of your monitor settings.

ImageBuffers are not used in analysis, only in viewing. So the default is 3, but I've seen some choppiness at that, so sometimes I've gone up to 5.
You should probably set a MaxImageBuffer count to prevent running out of memory.

I have to assume that these cams have very large keyframe intervals.
I attached my 720p cam's Buffers settings; I wasn't sure which other screens you needed. Today is the first day I actually saw the system run out of memory, though. Load was over 12 and I had to restart ZoneMinder to clear things up, but it pretty quickly went from 12 GB free to 1.8 GB.
[Attachment: front door buffers.png - screenshot of the monitor's Buffers settings]
I haven't adjusted settings in a long time, so there could be things that need to be played with again, but I did just notice that the estimated RAM use stated on that screen is 3.75 GB with my max buffers set to 1024. I have 16 GB of RAM, so all 3 cams would max out at 11.25 GB, leaving a bit over 4 GB for everything else. That does seem a bit tight, so I can see why I'm running out, if I'm understanding how the math works.
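
The math does appear to work that way, assuming 4 bytes per pixel; a rough check:
# worst-case per-cam buffer RAM: max buffers x frame size for 720p
echo $((1024 * 1280 * 720 * 4))   # 3774873600 bytes, ~3.75 GB per cam
# 3 cams x ~3.75 GB = ~11.25 GB, out of 16 GB total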

Also, in terms of keyframes: is that something I need to check on the camera, or is it something ZoneMinder detects? On this particular cam, what is coming over is a 720p (1280x720) RTSP video stream with audio.
Magic919
Posts: 1381
Joined: Wed Sep 18, 2013 6:56 am

Re: "Shared data size conflict in shared_data for monitor"

Post by Magic919 »

It’s something to check on the camera.
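
If the camera's UI doesn't expose it, one way to estimate the keyframe interval from the stream side is to count I-frames with ffprobe (a sketch; the RTSP URL is a placeholder):
# sample ~300 frames and tally picture types; interval ~= 300 / (count of I)
ffprobe -rtsp_transport tcp -select_streams v:0 -show_frames \
  -show_entries frame=pict_type -of csv=p=0 'rtsp://user:pass@camera/stream' \
  2>/dev/null | head -300 | sort | uniq -c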
iconnor
Posts: 2862
Joined: Fri Oct 29, 2010 1:43 am
Location: Toronto

Re: "Shared data size conflict in shared_data for monitor"

Post by iconnor »

Yeah... so 1024 seems high for maxImageBufferCount. It shouldn't be used unless needed, but maybe there is a bug in there that I haven't thought of. I have only 1 camera with anywhere near that huge a keyframe interval; I find 300 does it for my h265+ streams.
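
For scale, the packet queue has to hold everything since the last keyframe, so the required count tracks the keyframe interval; a rough sketch, assuming a 30 fps 720p stream:
# a keyframe interval of 300 at 30 fps is one keyframe every 10 seconds
echo $((300 / 30))                 # 10 s between keyframes
echo $((300 * 1280 * 720 * 4))    # ~1.1 GB to buffer one full 300-frame GOP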

As I mentioned ImageBuffer can likely go back to 3 or 5.

The warmup frames setting is interesting. What it does is make ZM wait until it has captured that many frames before it even starts doing analysis. You can safely set that to 0.

StreamImageBuffer is used by zms... it writes the JPEGs to disk so that you can pause and rewind. If you don't need to rewind live view, then you can set that to 0 for a big performance boost. If you are going to leave this non-zero, then you should make sure that ZM_PATH_SWAP points to a RAM disk for speed.
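
A minimal sketch of pointing ZM_PATH_SWAP at a RAM disk (the mount point and size here are placeholders; match them to your actual ZM_PATH_SWAP setting):
# mount a small tmpfs where ZM_PATH_SWAP points
sudo mkdir -p /var/cache/zoneminder/swap
sudo mount -t tmpfs -o size=256m,mode=1777 tmpfs /var/cache/zoneminder/swap
# to persist it across boots, add the equivalent /etc/fstab line:
# tmpfs /var/cache/zoneminder/swap tmpfs size=256m,mode=1777 0 0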
iconnor
Posts: 2862
Joined: Fri Oct 29, 2010 1:43 am
Location: Toronto

Re: "Shared data size conflict in shared_data for monitor"

Post by iconnor »

Hey, good news: I am able to reproduce the problem! I set my settings to yours and, yeah, it eats RAM like crazy! So... maybe a fix tonight.
keithp
Posts: 16
Joined: Sat Aug 06, 2022 12:44 am

Re: "Shared data size conflict in shared_data for monitor"

Post by keithp »

iconnor wrote: Wed Aug 10, 2022 7:43 pm Hey, good news: I am able to reproduce the problem! I set my settings to yours and, yeah, it eats RAM like crazy! So... maybe a fix tonight.
Thanks, iconnor. I appreciate your efforts! It's definitely a RAM monster now :(

After I last posted, I ended up adding another 8 GB to the system and setting the cams to just record. At least (I hope) I'll have enough video data if needed until this is solved.

I agree, I thought 1024 was too high too, but the reason I had it up that high was because of messages like these:
Aug 10 00:07:47 zm-nvr zmc_m5[47926]: WAR [zmc_m5] [You have set the max video packets in the queue to 1024. The queue is full. Either Analysis is not keeping up or your camera's keyframe interval is larger than this setting.]
Aug 10 00:07:47 zm-nvr zmc_m5[47926]: WAR [zmc_m5] [Found locked packet when trying to free up video packets. This basically means that decoding is not keeping up.]
This particular camera is only a 720p unit (it's an Akuvox R20A). After I added memory, I bumped it to 1200 and still got messages like that. Unfortunately, there is nowhere I can see to set the keyframe interval, but I am pushing a high bit rate. I've attached that settings screen in case something jumps out at you. I'm guessing I could drop the bitrate, but I'll wait for you to respond.

Thanks for the other recommendations as well. I definitely don't rewind the "live view" - I forgot you could do that! Typically, I'm just investigating alerts as needed or scrubbing through video segments to find and identify people doing bad things.
[Attachment: akuvoxR20a-RTSP.png - screenshot of the camera's RTSP/source settings]