Detect velocity of anomalies

Anything you want added or changed in future versions of ZoneMinder? Post here and there's a chance it will get in! Search to make sure it hasn't already been requested.
adrianmay
Posts: 4
Joined: Sun Apr 19, 2020 6:58 am

Detect velocity of anomalies

Post by adrianmay » Sun Apr 19, 2020 7:42 am

Hi All,

I was just reading through Zone::CheckAlarms, and it suddenly struck me that it would not be terribly hard to detect anomalies that move through the image in a particular rough direction over time. I think that typifies what people mean by "intruder" and would filter out lighting changes completely (with which I have some trouble, especially when there are small patches of sunlight.)

The easiest alg I can think of would be like this: after the three existing pixel stages, calculate a centre of gravity of the alarmed pixels/blobs and store it in the zone. Over multiple calls, calculate the velocity of the centre of gravity and store that too. Set an alarm if the velocity is roughly consistent over multiple calls and lies within a speed range. That's the rough idea but some extra polishing would also be worthwhile.
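Just to make the idea concrete, something like this (purely a sketch — all names are invented here and nothing below is ZoneMinder's real API):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Centre of gravity of the alarmed pixels in a width x height binary
// mask (1 = alarmed). Returns false when no pixel is alarmed, i.e.
// the CoG is invalid ("all is calm").
struct Point { double x, y; };

bool centreOfGravity(const std::vector<uint8_t>& mask,
                     int width, int height, Point& out) {
    double sx = 0, sy = 0;
    long n = 0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (mask[y * width + x]) { sx += x; sy += y; ++n; }
    if (n == 0) return false;          // all calm: CoG undefined
    out = { sx / n, sy / n };
    return true;
}

// Velocity between two CoG samples one frame apart, and a check that
// the speed lies within a plausible range (pixels per frame; the
// limits would become zone settings).
Point velocity(const Point& prev, const Point& cur) {
    return { cur.x - prev.x, cur.y - prev.y };
}

bool speedInRange(const Point& v, double minSpeed, double maxSpeed) {
    double s = std::hypot(v.x, v.y);
    return s >= minSpeed && s <= maxSpeed;
}
```

The zone would keep the last few CoG and velocity samples between calls, and mark them invalid whenever the mask goes empty.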

This is just a quick brainstorm to see if anybody thinks this would be an improvement and how much extra work would be required. Maybe other people will make suggestions and morph it into something different from this initial idea. Or maybe you already explored this line of thought and, for some reason I'm unaware of, it doesn't actually work. I can see that the extra parameters and score need to be added to the zone settings UI, presets and storage. The CofG and velocity need to be set to invalid when all is calm. But it doesn't seem like a huge effort and the current architecture would naturally support it.

I'm afraid I can't really offer up much time in the long term cos my job takes up 50 hours a week and I have kids. But I can code C++, I can build it in the Arch AUR package (don't have any other distros but I run it in docker) and I'd be game to punch in the core alg. OTOH the UI stuff would probably be more efficiently done by somebody who already knows their way around.

Any interest at all?
Adrian.

PS: I just noticed my box alarming constantly because of clouds blowing across the sky and reflecting in my car windows. The above wouldn't filter that out, but the fix would be to require some variation in the velocity. It was also responding to moving shadows of trees on the garden path which is the main zone of interest - don't see how that can be filtered out without something like the above.

mikb
Posts: 457
Joined: Mon Mar 25, 2013 12:34 pm

Re: Detect velocity of anomalies

Post by mikb » Sun Apr 19, 2020 4:02 pm

adrianmay wrote:
Sun Apr 19, 2020 7:42 am
calculate a centre of gravity of the alarmed pixels/blobs
...
That's the rough idea but some extra polishing would also be worthwhile.
Be aware this algorithm will fail if a burglar walks directly towards the camera (sure, they will get bigger and Bigger and BIGGER, but their centre-of-blob will remain quite still). You need to also include the "loom factor" here!

This is a basic way to work out whether an object coming towards you will hit you: track the velocity of the centre of apparent movement, and measure the rate of change of the object's apparent width.

And hope that the centre is moving > 50% faster than the object is getting wider. Otherwise, duck!
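In rough C++, the check might look like this (illustrative only — names and the 1.5x threshold are made up):

```cpp
#include <cassert>
#include <cmath>

// Toy version of the "loom factor" check. Per frame we track the blob
// centre and its apparent width; an object heading straight at the
// lens looms (width grows) while its centre barely moves.
struct BlobSample { double cx, cy, width; };

// True when the centre's motion comfortably outpaces the widening,
// i.e. the object is crossing the view rather than approaching it.
bool crossingNotLooming(const BlobSample& prev, const BlobSample& cur,
                        double factor = 1.5) {
    double centreSpeed = std::hypot(cur.cx - prev.cx, cur.cy - prev.cy);
    double widen = std::fabs(cur.width - prev.width);
    return centreSpeed > factor * widen;
}
```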

adrianmay

Re: Detect velocity of anomalies

Post by adrianmay » Sun Apr 19, 2020 5:34 pm

I guess most people, like me, have their cams mounted out of reach and pointing downwards. They tend to have wide angle lenses too, so by the time the burglar gets to the door you're pretty much looking down on his head. Also, he had to come walking down the street first.

All the same, I could track the number of anomalous pixels, or some enclosing shape, and respond to a consistent trend. Thanks for the nudge.

When I say anomalous, I mean compared with a long term set of typical values. I'm worried about the case that you're looking at green leaves waving against a blue sky. Perhaps I make a palette and allow each pixel to say that it often sees two or three entries. Haven't figured that bit out yet.

What I also thought of in the meantime is that since moving shadows create so much aggro, why not convert to hue, saturation and luminance, then just discard the luminance. It looks like the current alg does exactly the opposite cos delta_image is set up monochrome in Monitor::Monitor, but I'm not sure if that gets overwritten later. There are cases it won't catch, like if all the world is grey, but the more junk we filter out, the more sensitive we can make every other part of the alg. The threshold for colour changes can be made more sensitive at low saturation. My guess is that the pros will outweigh the cons.
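To sketch what I mean (this is my own toy maths, not the existing pipeline): normalise each RGB sample by its brightness, so a shadow, which scales all three channels together, produces almost no difference.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Chromaticity: RGB with the luminance divided out. A shadow passing
// over a surface multiplies R, G and B by the same factor, so the
// chromaticity (and hence the delta below) barely changes.
struct Chroma { double r, g; };

Chroma chromaticity(uint8_t R, uint8_t G, uint8_t B) {
    double sum = R + G + B;
    if (sum == 0) return {1.0 / 3, 1.0 / 3};  // black: neutral chroma
    return {R / sum, G / sum};
}

double chromaDelta(uint8_t R1, uint8_t G1, uint8_t B1,
                   uint8_t R2, uint8_t G2, uint8_t B2) {
    Chroma a = chromaticity(R1, G1, B1), b = chromaticity(R2, G2, B2);
    return std::hypot(a.r - b.r, a.g - b.g);
}
```

e.g. (200,100,50) darkening to (100,50,25) gives a delta of zero, while red vs green is large. One caveat: at low brightness the chromaticity gets noisy, which is where the saturation-dependent threshold would come in.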

And I whacked the resolution down by a factor of 8 each way cos I see no reason why motion detection needs super high res. Ooh, that simplifies the palette cos you'll get all the values between the green and the blue anyway, so it's just an enclosing shape in colour space, which is 2D if I dump the luminance.

Still brainstorming really.

mikb

Re: Detect velocity of anomalies

Post by mikb » Mon Apr 20, 2020 4:58 pm

adrianmay wrote:
Sun Apr 19, 2020 5:34 pm
I guess most people, like me, have their cams mounted out of reach and pointing downwards.
True, but there are plenty of e.g. Amazon style doorbell cameras that point forward at around face height. That's the sort of thing I was thinking of.
adrianmay wrote:
Sun Apr 19, 2020 5:34 pm
There are cases it won't catch, like if all the world is grey,
Also true. If the world is all grey already, then it's foggy out :) But consider if the world is all black. Suddenly. This means someone has come in out of view of the camera and stuck chewing gum on the lens. Not a joke: this happened to a BBC local radio producer; a local miscreant took exception to his camera and nobbled it.
adrianmay wrote:
Sun Apr 19, 2020 5:34 pm
And I whacked the resolution down by a factor of 8 each way cos I see no reason why motion detection needs super high res
A fair point, and it depends on the original resolution. These HD-1080p/2.7k/4K cameras provide a LOT more detail than needed to motion detect (which is why some people use ZM to monitor a low-res stream for motion detection, and use it to trigger the full-res stream for recording). Maybe instead of a permanent divide by 8, work out what a reasonable percentage size is, and always maintain a grid of e.g. 10% squares.

For example, a PIR sensor is remarkably low-resolution (equivalent of low tens of pixels across, 3-5 pixels up), yet it can see things "moving" across its reduced number of zones (unless they are coming straight at it again!)

adrianmay

Re: Detect velocity of anomalies

Post by adrianmay » Tue Apr 21, 2020 8:09 am

Indeed, the fixed factor of 8 is just a temporary hack. I should aim for a certain resolution (~100 each way), but I'd like to stick to powers of 2 for performance reasons and keep the aspect ratio.
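Something like this, say (just a sketch with a made-up target of 100 px):

```cpp
#include <cassert>

// Pick a power-of-2 downscale factor so the smaller image dimension
// lands at or above a target resolution (~100 px here). Powers of two
// keep the scaling a cheap shift and preserve the aspect ratio exactly.
int downscaleShift(int width, int height, int target = 100) {
    int shift = 0;
    int shorter = width < height ? width : height;
    while ((shorter >> (shift + 1)) >= target)
        ++shift;
    return shift;  // divide both dimensions by 1 << shift
}
```

So 1920x1080 gives a shift of 3 (divide by 8, i.e. 240x135), while a small camera stays near its native size.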

I do intend to do something with the loom factor and it should go nuts if a lump of chewing gum approaches. I was planning to set the alarm if the velocity vector has a roughly constant direction (I mean, within about 90°) for 2 or 3 consecutive frames. (The guy with the gum must have walked up to the doorbell first.) Loom would just be a third dimension for that velocity.
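The nice thing is that "within 90° of each other" is just a positive dot product, so the test is cheap (sketch only, names invented):

```cpp
#include <cassert>

// Two successive velocity vectors are within 90 degrees of each other
// exactly when their dot product is positive. Loom (growth rate) rides
// along as a third component, so someone approaching the lens still
// registers as consistent motion "towards" the camera.
struct Vel3 { double x, y, loom; };

bool sameRoughDirection(const Vel3& a, const Vel3& b) {
    return a.x * b.x + a.y * b.y + a.loom * b.loom > 0;
}

// Alarm when n consecutive frame-to-frame velocities agree.
bool consistentOver(const Vel3* v, int n) {
    for (int i = 1; i < n; ++i)
        if (!sameRoughDirection(v[i - 1], v[i])) return false;
    return n >= 2;
}
```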

In the meantime I gave up on the hue stuff and had another idea. I should diff the image by time and then by space. That means I'll be picking out sharp edges of the changing bits. That'll take care of clouds racing across the sky cos none of the edges are sharp.
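Roughly this, on greyscale values (a toy version, nothing optimised):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// "Diff by time, then by space": first the absolute change between two
// frames, then a crude gradient of that change image. Slow, soft-edged
// changes (clouds) give a large temporal diff but almost no spatial
// gradient; a hard-edged object gives both.
std::vector<int> temporalThenSpatial(const std::vector<int>& prev,
                                     const std::vector<int>& cur,
                                     int w, int h) {
    std::vector<int> d(w * h), out(w * h, 0);
    for (int i = 0; i < w * h; ++i) d[i] = std::abs(cur[i] - prev[i]);
    for (int y = 1; y < h; ++y)
        for (int x = 1; x < w; ++x) {
            int gx = std::abs(d[y * w + x] - d[y * w + x - 1]);
            int gy = std::abs(d[y * w + x] - d[(y - 1) * w + x]);
            out[y * w + x] = gx + gy;
        }
    return out;
}
```

A uniform brightening of the whole frame comes out as all zeros; a sharp-edged blob lights up only around its outline.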

That leaves the problem of detecting a red ball in front of green leaves waving against a blue sky, or intruders walking along a path where the trees cast moving shadows. My current best idea goes like this. For each pixel, I remember two points in colour space that are supposed to be close to the extremes of colour we've seen at that pixel. To maintain them, when a new sample comes in, I pick the closer of the two points and move it towards the new sample, but less so if that would mean moving it towards the other point. They slowly gravitate together over a few minutes. To test a pixel for alarmedness, I figure out the distance of the sample colour from the line between the two points, i.e., not including the extensions of the line beyond the points. In a global alarm state the points don't get updated as fast.
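The core of it would be something like this (a sketch with invented constants; the "resist moving towards the other point" refinement and the slower update during alarms are left out for brevity):

```cpp
#include <cassert>
#include <cmath>

// Two-point per-pixel colour model. Each pixel keeps two reference
// colours meant to straddle the colours it normally sees; a sample is
// anomalous when it is far from the segment joining them.
struct Col { double x, y, z; };

static double dist(const Col& a, const Col& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Move the nearer reference point a small step towards the sample.
void updateModel(Col& p0, Col& p1, const Col& s, double rate = 0.05) {
    Col& nearer = dist(s, p0) <= dist(s, p1) ? p0 : p1;
    nearer.x += rate * (s.x - nearer.x);
    nearer.y += rate * (s.y - nearer.y);
    nearer.z += rate * (s.z - nearer.z);
}

// Distance from sample s to the segment p0-p1 (not its extensions).
double segmentDistance(const Col& p0, const Col& p1, const Col& s) {
    Col d{p1.x - p0.x, p1.y - p0.y, p1.z - p0.z};
    double len2 = d.x * d.x + d.y * d.y + d.z * d.z;
    double t = len2 == 0 ? 0
             : ((s.x - p0.x) * d.x + (s.y - p0.y) * d.y +
                (s.z - p0.z) * d.z) / len2;
    if (t < 0) t = 0;
    if (t > 1) t = 1;                    // clamp to the segment
    Col c{p0.x + t * d.x, p0.y + t * d.y, p0.z + t * d.z};
    return dist(s, c);
}
```

So a pixel flickering between the two reference colours (leaves and sky) scores near zero, while a red ball sits far off the segment.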

I say two points cos most real world examples are alternating between two colours and the alg gets exponentially more expensive (and hard to figure out) in the number of points. I might consider three though - it's not so hard - I rotate everything til the three ref points are flat in the XY plane, check the magnitude of the Z of the sample, then translate til the middle of the ref triangle is at the origin, find the normals from the origin to the three sides, get the projections of the sample onto the normals, and show that they don't reach far beyond the sides.

Now it just occurred to me that with this long term characterisation of pixels, I can distinguish changes away from typical colours from changes back to normality. That will find the front side and back side of a moving object. Oh what fun!
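The front/back distinction could be as simple as comparing a pixel's distance from its typical colour now and one frame ago (sketch, with a made-up margin):

```cpp
#include <cassert>

// Leading edge of a moving object: the pixel's distance from its
// long-term typical colour jumps up. Trailing edge: it falls back
// towards normal. The margin suppresses jitter.
enum class Edge { None, Front, Back };

Edge classify(double prevDist, double curDist, double margin = 5.0) {
    if (curDist > prevDist + margin) return Edge::Front;
    if (curDist < prevDist - margin) return Edge::Back;
    return Edge::None;
}
```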

I'm doing the code in a hacky way with a big #define to switch to the new alg. Obviously it couldn't be merged in that form. The old alg is not playing a role anymore cos even stage 1 is different. It doesn't have settings corresponding to current ones (although the zone polygon remains unchanged.) The settings are all related to velocity and direction. (My hack might just abuse current settings for these cos I don't want to wade into the db.) It uses memory differently: I don't need to keep the full-res image between cycles (but maybe something else in the system does), instead I have this low-res pixel history and a bunch of recent CofG and velocity values in the zone.

So all that raises a fairly deep question of how it should be brought into the current SW if folks are interested. All that stuff would be over my head, cos I know neither the UI nor the architecture particularly well, and like I said, I kinda need to prioritise on the day job and the kids. I just wanted to highlight that it's more than a tweak and involves big decisions, so I'll just offer this little proof of concept (if and when it works) and get back to work. I do feel a bit guilty about shirking the boring bits, but I guess I'm contributing more than most users who I imagine don't even allow the usage/bug reports, and I really don't have a choice here, especially when covid means the kids need more attention cos they can't see their friends. I only bought the cam in the first place to find out who's letting their dog poo in my garden, but in the meantime I suspect it was a cat.

mikb

Re: Detect velocity of anomalies

Post by mikb » Tue Apr 21, 2020 3:12 pm

adrianmay wrote:
Tue Apr 21, 2020 8:09 am
(The guy with the gum must have walked up to the doorbell first.) Loom would just be a third dimension for that velocity.
That would be the obvious first response, unfortunately he/she had some crab DNA, and came in from the side so as to evade capture. I'm not making this up ;)
adrianmay wrote:
Tue Apr 21, 2020 8:09 am
I'm doing the code in a hacky way with a big #define to switch to the new alg. Obviously it couldn't be merged in that form.
I see nothing wrong with doing that, as proof of concept to see what does and doesn't work.
