How best to split modect processes across multiple servers?

Forum for questions and support relating to the 1.24.x releases only.
bthom73
Posts: 1
Joined: Mon Jul 26, 2010 11:45 pm

How best to split modect processes across multiple servers?

Post by bthom73 »

Hello all, I've searched through the forums and found several somewhat related threads, but no clear answers on whether the following is doable...

Basically, what I'd like to do is install a number of ZoneMinder back-end capture machines that run "modect" motion detection and capture video from a large number of IP cameras. Those capture machines would store the captured video on a central fileserver (most likely a shared NFS filesystem). That central fileserver would also be accessible by a single "front-end" ZoneMinder server, so that several users could log in, see all of the cameras listed in one table, and click on any specific camera to view its recorded event data. To clarify: the users logging into the front-end machine don't need access to any live views, only access to previous motion events associated with each camera.

The question is - how best to split up the processes so that the above can be accomplished? Should there be only one MySQL database shared by the entire cluster, or does each capture server need its own database, sharing only the underlying video data at the filesystem level?

I don't mind having to logically add each camera twice (once to the appropriate back-end capture server doing the capturing, and again to the front-end server to make the same camera user-accessible), but obviously it would be nicer if that could be avoided.

Again, my question is purely software-related. I'll deal with the hardware/performance/scaling issues separately; my focus at the moment is how best to split the modect processing across multiple servers while still making all of the recorded motion event data accessible in one place afterwards.

If anyone has a clear understanding of how all of the daemons fit together and at what point in the event recording chain might be the best spot to split up the processing, your input would be greatly appreciated.

Thanks,
Brian
John Bowling
Posts: 15
Joined: Mon May 21, 2007 3:08 pm

Just some thoughts about it

Post by John Bowling »

I would try multiple primary-input ZoneMinder computers and symlink .../zm/events/ on each to the NFS system, backed by a RAID drive setup.
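A minimal sketch of that symlink idea, demonstrated in a temp directory (the real paths vary by distribution; `/var/lib/zm/events` for the local directory and a mount like `/mnt/nfs/zm-events` for the NFS export are assumptions):

```python
import os
import tempfile

# Demonstration in a temp directory; on a real capture box these would be
# something like /var/lib/zm/events (local) and an NFS mount point.
root = tempfile.mkdtemp()
nfs_events = os.path.join(root, "nfs-zm-events")   # stands in for the NFS export
local_events = os.path.join(root, "zm", "events")  # stands in for .../zm/events

os.makedirs(nfs_events)
os.makedirs(local_events)

# Replace the local events directory with a symlink to the shared storage,
# keeping a backup of anything already recorded locally.
if os.path.isdir(local_events) and not os.path.islink(local_events):
    os.rename(local_events, local_events + ".bak")
if not os.path.exists(local_events):
    os.symlink(nfs_events, local_events)

print(os.path.islink(local_events))  # True
```

Each capture server would point at the same export, which is exactly why the monitor-numbering collision below matters.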

Two problems I can see. You have to make sure the systems use unique names and numbers for each camera, so you don't duplicate 1, 2, 3, etc. on the NFS drive. I don't know if there is a method of specifying a camera's number, but a number apparently isn't reused if you delete the camera and add it back in - that slow and painful method could ensure computer 1's cameras have different numbers than computer 2's. The problem may resolve itself if you get the shared database set up prior to any monitor setup.

Then you would need another ZoneMinder computer that has no camera inputs but can access all of the events, probably browsing via localhost. It should be the one that maintains the database locally. I suspect the database holds some information that each camera-input server requires, including the passwords. Perhaps the database needs to be split into computer-specific local tables and common event tables.

Then you would also have the problem of viewing them all at once. Four independent monitors? Perhaps Xfce could do that; I don't think KDE and Gnome have that feature yet.
mitch
Posts: 169
Joined: Thu Apr 30, 2009 4:18 am

Post by mitch »

I think it may be a better idea to try one shared database among several servers; all you are really looking to do is avoid starting the zmc daemon (and zma, technically, to save the log files) for cameras that belong to other servers. I would add a field in the database for which server each camera is supposed to run on. Then, in the daemon that starts them, check whether the current server matches the server in the database - if so, start as normal; if not, skip that camera.
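A rough sketch of that check. The `ServerName` column and the `monitors_for()` helper are hypothetical - stock 1.24.x has no such field - and SQLite stands in for MySQL purely to show the filtering logic:

```python
import sqlite3

# Hypothetical schema: stock ZoneMinder 1.24.x has no ServerName column;
# this sketch adds one so each host can decide which monitors it owns.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Monitors (Id INTEGER, Name TEXT, ServerName TEXT)")
db.executemany(
    "INSERT INTO Monitors VALUES (?, ?, ?)",
    [(1, "FrontDoor", "capture1"),
     (2, "Lobby", "capture1"),
     (3, "ParkingLot", "capture2")],
)

def monitors_for(hostname):
    """Return the monitor Ids this host should start zmc/zma for."""
    rows = db.execute(
        "SELECT Id FROM Monitors WHERE ServerName = ?", (hostname,)
    )
    return [row[0] for row in rows]

# On each capture box, the startup daemon would do something like:
#   for mid in monitors_for(socket.gethostname()):
#       start zmc/zma for monitor mid
print(monitors_for("capture1"))  # [1, 2]
```

The front-end server's hostname would match no rows, so it would start no capture daemons at all while still reading the shared Events tables.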

That would probably be the easiest way, as it ensures minimal work to make everything go. Live stream viewing wouldn't work (that would have to occur on the server the camera is captured on), but I assume you are aware of that, and everything else should work just fine.
gessel
Posts: 26
Joined: Sat Jan 24, 2004 12:24 am

Re: How best to split modect processes across multiple servers?

Post by gessel »

It has been a long time since this was posted, but clustering ZoneMinder is of ongoing interest to me. One of the downsides of using MySQL rather than PostgreSQL is table-level locking, but I assume that DB clustering issues can be solved using standard MySQL scaling techniques and a remote database.

That leaves setting up a cluster of capture/analysis servers, each capable of handling X cameras (where X seems to be about 24, maybe 48 these days). It would be very interesting if ZM could be clustered to support, say, 200 capture servers with 48 cameras each, at least as a theoretical scaling effort - limited in the end by peak write throughput to a DB write cluster on 10G Ethernet. That should be a large number of cameras, assuming they never all trigger at once with so many connected.
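As a back-of-the-envelope check on those numbers (the per-camera bitrate is an assumption, not a measurement):

```python
# Rough aggregate-throughput estimate for the cluster size mentioned above.
servers = 200
cameras_per_server = 48
total_cameras = servers * cameras_per_server  # 9600

# Assume each camera writes ~2 Mbit/s while recording - a guess for a
# modest-resolution IP camera; real figures vary widely with codec and fps.
mbit_per_camera = 2
peak_write_mbit = total_cameras * mbit_per_camera  # worst case: all recording

print(total_cameras)    # 9600
print(peak_write_mbit)  # 19200, i.e. ~19.2 Gbit/s
```

At roughly 19 Gbit/s worst case, a single 10G link would saturate if every camera recorded simultaneously, which is exactly why the "never all trigger at once" assumption matters.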

If anyone has set it up with two capture servers and a DB server, it would go a long way toward a proof of concept for very large scaling.