ZM 1.36.32 in a container inside a VM generating tons of disk writes

Discussions related to the 1.36.x series of ZoneMinder
maddios
Posts: 30
Joined: Wed Oct 27, 2021 7:18 am

ZM 1.36.32 in a container inside a VM generating tons of disk writes

Post by maddios »

Hi guys, I ran into an interesting issue.

I'm running ZM inside a Docker container inside a KVM VM (on Proxmox). It mostly behaves normally, but I noticed a strange issue: it generates TONS of disk writes while streaming to clients via /zoneminder/cgi-bin/nph-zms.

The container was built from here: https://github.com/zoneminder-container ... inder-base using docker-compose

Each of the zms instances is generating around 2 MiB/s of disk writes. I'm pretty sure this is supposed to go to a memory buffer and be streamed out, but for some reason it's actually being written to a real file. Disk usage isn't actually growing, so I'm guessing Linux is handling the buffer correctly. However, the VM's underlying storage is ZFS, and every hour it registers a few GB of "changes", so my daily snapshot is around 570 GB even though no real data is changing.

PS: the zms clients are streaming to 3 tablets, each showing 5 cameras that I have around the house to keep an eye on my property, so I can't really get rid of them.

Here's the output from iotop -P, showing about 1-2.5 M/s of disk writes per process.

Code:

 797268 be/4 docker-w    0.00 B/s    2.58 M/s  ?unavailable?  nph-zms
 798772 be/4 docker-w    0.00 B/s    2.15 M/s  ?unavailable?  nph-zms
 800219 be/4 docker-w    0.00 B/s    2.15 M/s  ?unavailable?  nph-zms
 797269 be/4 docker-w    0.00 B/s    2.02 M/s  ?unavailable?  nph-zms
 800218 be/4 docker-w    0.00 B/s    2.02 M/s  ?unavailable?  nph-zms
 798771 be/4 docker-w    0.00 B/s    2.02 M/s  ?unavailable?  nph-zms
 797267 be/4 docker-w    0.00 B/s 1271.80 K/s  ?unavailable?  nph-zms
 798774 be/4 docker-w    0.00 B/s 1271.80 K/s  ?unavailable?  nph-zms
 800217 be/4 docker-w    0.00 B/s 1271.80 K/s  ?unavailable?  nph-zms
 797266 be/4 docker-w    0.00 B/s  972.35 K/s  ?unavailable?  nph-zms
 798775 be/4 docker-w    0.00 B/s  972.35 K/s  ?unavailable?  nph-zms
 800220 be/4 docker-w    0.00 B/s  972.35 K/s  ?unavailable?  nph-zms
 797265 be/4 docker-w    0.00 B/s  232.52 K/s  ?unavailable?  nph-zms
 798770 be/4 docker-w    0.00 B/s  193.77 K/s  ?unavailable?  nph-zms
 800221 be/4 docker-w    0.00 B/s  193.77 K/s  ?unavailable?  nph-zms
As far as I can tell, the only relevant lsof entries for that first zms are:

Code:

nph-zms   797268                           docker-www  txt       REG               0,56    19432536   30299455 /zoneminder/cgi-bin/zms

nph-zms   797268                           docker-www    0r     FIFO               0,13         0t0    4988607 pipe
nph-zms   797268                           docker-www    1w     FIFO               0,13         0t0    4988608 pipe
nph-zms   797268                           docker-www    2w     FIFO               0,13         0t0    4988609 pipe
nph-zms   797268                           docker-www    3u     sock                0,8         0t0    4961073 protocol: UNIX-STREAM
nph-zms   797268                           docker-www    4u      REG              0,113    18667960        361 /dev/shm/zm.mmap.12
nph-zms   797268                           docker-www    5u     sock                0,8         0t0    4996695 protocol: TCP
nph-zms   797268                           docker-www    6wW     REG               0,56           0   30299779 /zoneminder/run/zms-1939318.lock
nph-zms   797268                           docker-www    7u     sock                0,8         0t0    4992582 protocol: UNIX
nph-zms   797268 797274 nph-zms            docker-www  cwd       DIR               0,56        4096   30299453 /zoneminder/cgi-bin
nph-zms   797268 797274 nph-zms            docker-www  rtd       DIR               0,56        4096   30299514 /
nph-zms   797268 797274 nph-zms            docker-www  txt       REG               0,56    19432536   30299455 /zoneminder/cgi-bin/zms
nph-zms   797268 797274 nph-zms            docker-www  mem       REG              0,113                    361 /dev/shm/zm.mmap.12 (stat: No such file or directory)

nph-zms   797268 797274 nph-zms            docker-www    0r     FIFO               0,13         0t0    4988607 pipe
nph-zms   797268 797274 nph-zms            docker-www    1w     FIFO               0,13         0t0    4988608 pipe
nph-zms   797268 797274 nph-zms            docker-www    2w     FIFO               0,13         0t0    4988609 pipe
nph-zms   797268 797274 nph-zms            docker-www    3u     sock                0,8         0t0    4961073 protocol: UNIX-STREAM
nph-zms   797268 797274 nph-zms            docker-www    4u      REG              0,113    18667960        361 /dev/shm/zm.mmap.12
nph-zms   797268 797274 nph-zms            docker-www    5u     sock                0,8         0t0    4996695 protocol: TCP
nph-zms   797268 797274 nph-zms            docker-www    6wW     REG               0,56           0   30299779 /zoneminder/run/zms-1939318.lock
nph-zms   797268 797274 nph-zms            docker-www    7u     sock                0,8         0t0    4992582 protocol: UNIX
iconnor
Posts: 2882
Joined: Fri Oct 29, 2010 1:43 am
Location: Toronto

Re: ZM 1.36.32 in a container inside a VM generating tons of disk writes

Post by iconnor »

They are generating buffered jpegs.

In the monitor settings, make sure StreamReplayBuffer=0. However, zmninja will override this and tell the monitor to buffer them.
You also need to make sure that ZM_SWAP_PATH points at a ram disk to mitigate this load.
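In a docker-compose setup, one way to get a RAM-backed path for this (a sketch; the service name, target path, and size here are illustrative, not taken from the stock compose file) is a tmpfs mount:

```yaml
services:
  zoneminder:
    volumes:
      # RAM-backed mount to use as ZM_SWAP_PATH; buffered JPEGs land
      # here instead of on real disk. Size is in bytes (4 GiB here);
      # size it for the worst-case replay buffers you expect.
      - type: tmpfs
        target: /dev/shm2
        tmpfs:
          size: 4294967296
```

Anything written under a tmpfs mount lives in page cache only, so it never shows up as writes on the VM's virtual disk or in ZFS snapshots.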

I think I need to remove this feature entirely.
maddios
Posts: 30
Joined: Wed Oct 27, 2021 7:18 am

Re: ZM 1.36.32 in a container inside a VM generating tons of disk writes

Post by maddios »

Where do I define ZM_SWAP_PATH? I tried adding an ENV variable and setting it to something like /dev/shm2, which points at /mnt/ramdrive on my host, but it's still writing to /dev/shm.
maddios
Posts: 30
Joined: Wed Oct 27, 2021 7:18 am

Re: ZM 1.36.32 in a container inside a VM generating tons of disk writes

Post by maddios »

I tried adding it to zm.conf, but same thing.
maddios
Posts: 30
Joined: Wed Oct 27, 2021 7:18 am

Re: ZM 1.36.32 in a container inside a VM generating tons of disk writes

Post by maddios »

Oh, I found it, and it seems to be working: I changed it in conf.d/01-system-paths.conf to my /dev/shm2 path and I'm no longer getting crazy drive activity.

Now I just need to make this stick the next time I do a docker-compose rebuild and not forget about it.
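For anyone hitting this later, the change that worked above boils down to a one-line override (the /dev/shm2 path is specific to this setup; point it at your own ram-backed mount):

```
# conf.d/01-system-paths.conf
# Swap path for buffered stream images; use a ram-backed mount so
# nph-zms replay buffers never touch real disk.
ZM_SWAP_PATH=/dev/shm2
```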
maddios
Posts: 30
Joined: Wed Oct 27, 2021 7:18 am

Re: ZM 1.36.32 in a container inside a VM generating tons of disk writes

Post by maddios »

Oh, looks like that's fine; the container stores the config on the host anyhow.

One more question regarding this, what's a "good" size for this ramdisk?
maddios
Posts: 30
Joined: Wed Oct 27, 2021 7:18 am

Re: ZM 1.36.32 in a container inside a VM generating tons of disk writes

Post by maddios »

Looks like StreamReplayBuffer is being ignored when nph-zms is streaming: it always buffers 1000 images. In my case that's about 1 GB per client (5 streams each), so I have to allocate quite a bit.
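The worst-case sizing above can be sketched as simple arithmetic (the per-frame JPEG size below is an illustrative assumption, not a measured value; it's chosen so the numbers line up with the ~1 GB per client observed here):

```python
# Rough sizing sketch for the ZM_SWAP_PATH ramdisk.
def swap_size_bytes(clients, streams_per_client, buffer_frames, avg_jpeg_bytes):
    """Worst case: every stream keeps a full replay buffer of JPEGs."""
    return clients * streams_per_client * buffer_frames * avg_jpeg_bytes

# 3 tablets x 5 streams, 1000-frame buffer, ~200 KiB per JPEG frame
total = swap_size_bytes(3, 5, 1000, 200 * 1024)
print(f"{total / 2**30:.2f} GiB")  # ~2.86 GiB
```

So with a 1000-frame buffer the ramdisk needs a few GB of headroom; with buffer disabled it can be far smaller.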

What happens if I point ZM_SWAP_PATH at a bad folder? I noticed it throws an error but then proceeds to work anyway. Would this break some functionality, or just be annoying with the error messages?
maddios
Posts: 30
Joined: Wed Oct 27, 2021 7:18 am

Re: ZM 1.36.32 in a container inside a VM generating tons of disk writes

Post by maddios »

OOPS, so it seems the buffer issue was my own doing. I don't use zmninja; I have a webpage on each of my tablets that loads the 5 streams I need, so I just have 5 nph-zms imgs on the page streaming, and those had buffer=1000 hardcoded into the URLs.
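For anyone else doing the same, the page-side fix is just dropping (or zeroing) the buffer parameter in the stream URLs. A sketch (monitor IDs and the other query parameters are illustrative; buffer is the one this thread is about):

```html
<!-- One stream per camera; buffer=0 (or omitting buffer entirely)
     keeps nph-zms from building a 1000-frame replay buffer on disk. -->
<img src="/zoneminder/cgi-bin/nph-zms?mode=jpeg&monitor=1&scale=50&buffer=0">
<img src="/zoneminder/cgi-bin/nph-zms?mode=jpeg&monitor=2&scale=50&buffer=0">
```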