Raspberry pi3 and h264_mmal

Forum for questions and support relating to the 1.29.x releases only.
cmisip
Posts: 38
Joined: Sun Apr 30, 2017 10:09 pm

Re: Raspberry pi3 and h264_mmal

Post by cmisip » Thu Aug 03, 2017 2:26 am

I think the alignment trap errors are gone. I've made progress with the hardware swscale replacement. The mmal splitter component can do format conversions, so I am using it to convert to RGBA, RGB24, and YUV420 (grayscale is obtained by taking only the luma plane). The mmal resizer component only supports RGBA, so the two will have to be chained to create the full swscale replacement. As it stands right now, the RPI3 is able to decode multiple streams in hardware without using ffmpeg's swscale (but with no resizing yet).

It also turns out that the RPI GPU only supports specific resolutions in multiples of 16: 640x360 is rounded up to 640x368. When the mmal components round up to the next multiple of 16 like this, directbuffer or mv_buffer ends up undersized if I allocate them from the user-entered width and height. I noticed the problem as a green band across the lower part of the image in zms at that resolution; it went away when I switched to 704x480. I will have to account for that by using the encoder's values instead of the user-entered width and height.

It also turns out that there is no version-8 libjpeg-turbo available in Raspbian, so I needed to repackage the libjpeg62-turbo source with the jpeg8 ABI into files that satisfy zoneminder's dependency requirements.

Chris

iconnor
Posts: 487
Joined: Fri Oct 29, 2010 1:43 am
Location: Toronto
Contact:

Re: Raspberry pi3 and h264_mmal

Post by iconnor » Thu Aug 03, 2017 1:27 pm

Sounds like great work. I spent an annoying amount of time yesterday trying to get h264_qsv to work. In the end I don't think it ever will, as it seems to require an active X display. Played a bit more with h264_vaapi.

My concern at this point is code structure. It seems like each hwaccel codec is going to require a lot of extra code to support, so we need to structure the code well. I will be taking a look at your code in the very near future.

cmisip
Posts: 38
Joined: Sun Apr 30, 2017 10:09 pm

Re: Raspberry pi3 and h264_mmal

Post by cmisip » Fri Aug 04, 2017 1:01 pm

This is my first time working on an open source project and my first time using github, and truthfully, I did not check the docs for code structure requirements. And I did make significant changes in a lot of the source code. Hopefully it is not too big a mess to sort out. At least the mechanics should be evident. I did disable some of the neon code in zm_image.cpp. I didn't want to do that, but I could not build a working package otherwise: the neon code gets compiled in even if ZM_STRIP_NEON is defined, resulting in a panic. Those changes will have to be reversed once neon is up and running on raspbian.

I am also using this thread to post ideas and solutions to obstacles that I've encountered, so that if the code ever gets merged in some form, others will have some notes on the implementation. MMAL development is kinda cryptic; there is not much formal documentation, but there is a good team of engineers in the RPI forums trying to answer sensible questions.

I am currently trying to implement a pipeline:

Code: Select all

resizer -> splitter output 0 -> directbuffer
           splitter output 1 -> h264 encoder -> mvect_buffer
I am finding out that the resizer does not work with the MMAL_ENCODING_OPAQUE format, which means full data buffer copies need to happen between connected components in the pipeline. Likewise, the h264 encoder did not work with MMAL_ENCODING_OPAQUE; whether that is a consequence of being in a pipeline or it is just not designed to be used that way, I am not sure. Since a full copy of the buffers is required, which entails a trip to ARM space instead of staying in the GPU plus the cost of the copy itself, resizing comes with a performance hit. And since the resizer is in series with the ffmpeg decoder that feeds it buffers (another copy operation), the pipeline speed depends on the speed of the ffmpeg decoder as well.

Long story short, it's best to avoid resizing and have the camera feed an appropriately sized video frame to zoneminder. A check needs to be made that the user-supplied width and height match the decode context's width and height, and that the frame's x and y resolutions are multiples of 16. As I mentioned earlier, the resizer only outputs RGBA or I420, which is why the splitter connection was necessary: the splitter can convert I420 to RGBA and RGB24 (but can't resize). I am doing all this in a small test program, so it's not in the branch yet. I hope to get it done soon once (if) I get it working.

There are optimizations that could be done down the road such as :
1. Reducing the size of the video frame sent to the encoder in order to reduce the number of motion_vectors that need to be processed. This should greatly reduce the cpu utilization. Then have a full unresized frame sent to directbuffer for streaming. But that entails two resizers in the pipeline.
2. Use the hardware jpeg encoder. I don't know where to put it yet. I probably don't want to put it in zmc, as that means the jpeg buffers would need to be mmapped as well, increasing memory requirements. Plus, we don't really need to jpeg-encode all frames. Perhaps it could go in zms.
3. A native mmal h264 decoder can be implemented which would remove reliance on specially built ffmpeg.
4. The h264 encoder is already being used to output motion vectors, why not process the video frame data into an h264 stream.
5. Software and hardware motion vector processing require different buffer sizes. The hardware side requires a smaller mvect_buffer especially if resolution is downscaled prior to being sent to the encoder.
6. Only engage the zma mvect_buffer motion analysis if we think we have enough vectors to satisfy a minimum user specified score.
7. Second tier motion_vector validation by checking for minimum sized clusters. (performance hit).
8. Third tier motion_vector validation by checking if vectors in a cluster are oriented in the same direction (performance and memory hit as the magnitude of vectors need to be added to the motion_vector struct).

Bugs:
1. I notice that the frames in zms are sometimes out of order. Don't know how to fix that yet. Perhaps having a single pipeline with one entry point would fix the issue. Right now buffers are supplied to two entry points: encoder input and splitter input.
2. I had to make some changes to zmpkg.pl (not in the actual code branch) to get zoneminder to start

Code: Select all

#Fix zmpkg.pl error with zmsystemctl.pl
cd scripts
sed -i -- '/systemdRunning.*calledBysystem/,+3d'  zmpkg.pl.in
cd ..
3. The web UI stops working after a while (at least 6 hours of running). I don't know if this is related to Bug 2 above. However, systemctl reports that zoneminder is still running, and the processes are still alive and can be accessed with an app. I fudged this by always having daemoncheck return 1 so the web UI thinks that zoneminder is running. The camera IP addresses show red, but the Monitors are clickable and work fine.

Seems like the RPI is a very viable stand alone system for zoneminder. Kinda like zoneminder in a box once further optimizations are implemented. When RPI4 rolls out hopefully, all the issues have been ironed out.

I have just another week to work on this project because I am losing the cameras; they have to be shipped out. I'll do what I can. I can supply a package for people to test if there is a place to host it and it doesn't violate any rules. However, I won't be able to do any development for a while. I will also try to put up a howto on getting this working. No guarantees.

Chris

iconnor
Posts: 487
Joined: Fri Oct 29, 2010 1:43 am
Location: Toronto
Contact:

Re: Raspberry pi3 and h264_mmal

Post by iconnor » Fri Aug 04, 2017 2:22 pm

Can you continue using a remote camera? Lots of online streams. Alternatively, we could probably ship you something or fund you. Another alternative would be to use a pre-recorded file as input.

I will try to work with mastertheknife to get the neon stuff fixed up.

In terms of a jpg encoder, it would be nice to have that in zms, although zma does a fair amount of jpg writing as well. (We do zone dump images, and writing of analysis images, etc).

The frames out of order in zms is interesting. I have seen that when running two copies of zmc simultaneously.

cmisip
Posts: 38
Joined: Sun Apr 30, 2017 10:09 pm

Re: Raspberry pi3 and h264_mmal

Post by cmisip » Sun Aug 06, 2017 1:43 am

I'll probably take a break for a few weeks. A lot of the work is already in the motion_vectors branch. It's not clean yet, but it's got basic functionality. I suggest you guys look at the code first and see if there is anything worth merging. I could always be "persuaded" to continue, but I can't promise anything.

I really learn as I go. It turns out that there is a super secret efficient resizer in mmal that is undocumented in the headers and present in the latest firmware: vc.ril.isp can resize and convert to RGBA, RGB24, and GRAY8 (via I420), so there is no need for a splitter component.

I also found out that mmal can handle any resolution; it does not need to be a multiple of 16. This is because VCOS_ALIGN_UP(width,32) and VCOS_ALIGN_UP(height,16) round the buffer resolution up to those multiples. The resulting converted buffer is bigger, and the actual frame data at its original resolution is embedded in it with some padding. So the best way to handle hardware resizing is to apply vcos_align_up in zm_monitor.cpp just before the creation of the ffmpeg camera object. That way, the picture buffer allocated in zm_image is appropriately sized for the inflated buffer; otherwise, if the adjustment is done inside the ffmpeg camera object, zoneminder complains about the held buffer being too small and fails. This also ensures that the directbuffer is large enough to be memcpied into from the mmal buffer; otherwise the software jpeg encoder gets confused and outputs garbage.

Changing the resolution does mean that the zone dimensions need to be adjusted to fit the scaled-down or scaled-up frame. I haven't put that in yet. Somehow that resolution change information needs to reach zma so it can scale the zone polygons, or the zone polygons need to be adjusted automatically with each resolution change.

The alignment trap shows up every now and then, so it is not completely gone, but I don't see it frequently. Somebody who knows more about that stuff could look at the code. It happens only on some shutdowns and only with some pairs of zma's and zmc's.

I will see if I can get the parallel branch motion_vectors_pipeline up and running. It will use connections between components, so the buffer is fed in at one entry point and multiple exit points can output specific buffer data types. This is what I was working on until I found out about vc.ril.isp, which makes the connections simpler than I envisioned. I think I will enable the splitter component and connect its first output to a vc.ril.isp to resize/format-convert and save into directbuffer, while splitter output 2 connects to another vc.ril.isp that downscales the video before sending it to the encoder. That way, we have higher-resolution frames in zms while zma analyzes vectors from a lower-resolution frame. As I mentioned earlier, there might be a performance penalty here because full data buffers are copied across components, which is why I created a separate branch. I have to see if the splitter can pass opaque buffers to vc.ril.isp; that would remove at least one full data buffer copy.

Regarding the out-of-sequence frames in zms: a pts value can be returned by mmal (if a buffer->pts value is supplied on the mmal input side, perhaps using the frame number as the pts value), and it could be saved along with directbuffer, or a separate class member could save the value for each image. That might help with sorting frames when building the zms event stream.

Chris

iconnor
Posts: 487
Joined: Fri Oct 29, 2010 1:43 am
Location: Toronto
Contact:

Re: Raspberry pi3 and h264_mmal

Post by iconnor » Tue Aug 08, 2017 2:04 am

Ok, the thing is, if you are just going to drop this, then we will never merge. We have too many cases of merging code that none of us understand, and then the author is gone.

Our job here is to encourage you to continue, and stick around. How can we do that? What do you need? By all means, take a break, but PLEASE stick around. This is exciting stuff.

cmisip
Posts: 38
Joined: Sun Apr 30, 2017 10:09 pm

Re: Raspberry pi3 and h264_mmal

Post by cmisip » Tue Aug 08, 2017 3:18 am

I deleted motion_vectors_pipeline but created motion_vectors_connect. That motion_vectors_connect branch uses mmal connections between components instead of two separate pipelines like in motion_vectors. The last update on motion_vectors was heavily commented for reference. motion_vectors_connect is a different design decision and I still am not sure performance wise which is better.

Motion_vectors implemented:

Code: Select all

avcodec --> h264 encoder --> mvect_buffer

avcodec --> resizer --> directbuffer
With motion_vectors_connect, I implemented the following:

Code: Select all

avcodec --> splitter --> resizere --> h264 encoder --> mvect_buffer
               |
               +------> resizerd --> directbuffer


The resizere component downscales the video prior to motion vector extraction. The downscale target is 320x240, 640x480, or 960x720 and is user-configurable: I used the Options field in the Source tab of the Monitor to specify low, medium, or high. I also figured out how to rescale the zone polygons when the Monitor is loaded, so the frame size information no longer needs to be passed to zma via mmap. This also means the zones do not have to be rescaled during the polygon inclusion test in zma, which would have been a performance hit.

Things are working, but I am still bugged by the out-of-sequence frames. I don't know what is causing that. There is only one buffer feeding the pipeline, and buffers received from resizerd are checked to make sure their pts value is not older than the avcodec frameCount.

I found a clue, though. When watching a live stream from the camera using zms, I noticed the time stamps in the upper right move back in time and then jump forward again. Would you have an idea what could be causing that? It does not happen all the time. Event playback also seems to randomly jump back and forth in the same way.

I will be sticking around after taking a break. I am learning a lot doing this so I do have some benefit from it. I am just not sure of the value of this work eventually to your project as I don't know if it ever will be practical.

Chris

cmisip
Posts: 38
Joined: Sun Apr 30, 2017 10:09 pm

Re: Raspberry pi3 and h264_mmal

Post by cmisip » Tue Aug 08, 2017 11:15 am

Fingers crossed, but I think I may have found the reason for the frame sequence being messed up. I looked at an event that triggered this morning, and it had frames from an event last night. I think maybe it's due to the directbuffer being held. I set that to false, and in the events I have checked so far the problem seems to have gone away. I will keep the system running until the end of the day to see if this truly fixes the problem. I wonder what is causing it, though; I would have expected the image_buffer objects to always have their directbuffers overwritten with new buffer data from mmal. I guess some image_buffer objects are not being overwritten. Maybe I need to zero out the held buffers, at least to keep from having to reallocate, but I don't know how the event streamer will handle that. Will it simply skip the image_buffer objects with zeroed-out directbuffers?

Chris

cmisip
Posts: 38
Joined: Sun Apr 30, 2017 10:09 pm

Re: Raspberry pi3 and h264_mmal

Post by cmisip » Tue Aug 08, 2017 4:42 pm

I think that's it. I'll see if I can get it to work while holding directbuffer but zeroing it out with memset. Finally, it is becoming more usable and practical.

Chris

cmisip
Posts: 38
Joined: Sun Apr 30, 2017 10:09 pm

Re: Raspberry pi3 and h264_mmal

Post by cmisip » Tue Aug 08, 2017 11:07 pm

Drats. That's not it.

So I disabled the use of vc.ril.isp for now and restored the use of swscale on the hardware side. Motion detection works for both software and hardware decode. The downscale step before the hardware encoder works, as does the use of the Options field in the Source tab for the downscale mode (even though it complains about invalid options when they are checked against the valid ffmpeg options). The only part that does not work is copying the buffer data from vc.ril.isp into directbuffer; somehow it messes up the event streamer, with the timestamp going backwards and jumping forwards. This doesn't happen with swscale, so the problem is probably on the mmal side. This is probably my last attempt for a little while. I hope I didn't break anything else in the last update to motion_vectors_connect.

Thanks,
Chris

iconnor
Posts: 487
Joined: Fri Oct 29, 2010 1:43 am
Location: Toronto
Contact:

Re: Raspberry pi3 and h264_mmal

Post by iconnor » Fri Oct 13, 2017 10:14 am

I'm going to be looking at merging your work. Mostly for use with an nvidia tegra board.

iconnor
Posts: 487
Joined: Fri Oct 29, 2010 1:43 am
Location: Toronto
Contact:

Re: Raspberry pi3 and h264_mmal

Post by iconnor » Thu Feb 22, 2018 5:28 pm

I have finally added the few lines required to use h264_mmal for h264 decoding. It seems to work quite well.

snake
Posts: 78
Joined: Sat May 21, 2016 2:20 am

Re: Raspberry pi3 and h264_mmal

Post by snake » Fri Feb 23, 2018 1:03 am

I'm going to be testing this out. I'll write up some how to on it, if I can get it working.

cmisip
Posts: 38
Joined: Sun Apr 30, 2017 10:09 pm

Re: Raspberry pi3 and h264_mmal

Post by cmisip » Wed Jul 11, 2018 10:51 pm

Here is a howto for building this. It is based on an earlier version of the fork which does not use mmal component connections; I thought I'd work on a simpler version first. Before you invest time, effort, or hardware in this, understand that it may not work. Do so at your own risk. :)

1. Install Raspbian Lite Stretch. Update the system

Code: Select all

sudo apt-get update
sudo apt-get dist-upgrade
2. Install Zoneminder and its dependencies including ffmpeg

Code: Select all

sudo apt-get install zoneminder
*ffmpeg is installed as a dependency and it is already compiled with mmal support.

3. Allow user pi and www-data to access the video encoder

Code: Select all

sudo usermod -a -G video pi
sudo usermod -a -G video www-data
4. Install other dependencies of Zoneminder

Code: Select all

sudo apt-get install libmp4v2-2 
sudo apt-get install libapache2-mod-fcgid
sudo apt-get install libexpect-perl
sudo apt-get install netpbm 
sudo apt-get install php-gd
sudo apt-get install libssl-dev
*You probably ended up with libjpeg62-turbo installed and libjpeg8 pulled in as a dependency. Uninstall zoneminder and libjpeg8:

Code: Select all

sudo apt-get remove zoneminder
sudo apt-get remove libjpeg8 

*Build jpegturbo with jpeg8 support

Code: Select all

sudo apt-get build-dep libjpeg62-turbo
apt-get source libjpeg62-turbo
*edit debian/rules and add --with-jpeg8 to the following line

Code: Select all

   dh_auto_configure -- --with-build-date=$(DEB_VERSION) $(DISABLE_SIMD) --with-jpeg8
*edit debian/libjpeg62-turbo.install and change the entry to:

Code: Select all

   usr/lib/arm-linux-gnueabihf/libjpeg.so.8*
*Build libjpeg-turbo

Code: Select all

debuild -us -uc -b

*Install the packages.

Code: Select all

sudo dpkg -i ./libjpeg-dev_1.5.1-2_all.deb
sudo dpkg -i ./libjpeg62-turbo-dev_1.5.1-2_armhf.deb
sudo dpkg -i ./libturbojpeg0-dev_1.5.1-2_armhf.deb
sudo dpkg -i ./libjpeg62-turbo_1.5.1-2_armhf.deb
sudo dpkg -i ./libturbojpeg0_1.5.1-2_armhf.deb
sudo dpkg -i ./libjpeg-turbo-progs_1.5.1-2_armhf.deb
*Just remember not to let a system update overwrite these packages with the official ones with jpeg62 support.


5.Build and install the libraries for RPI userland

Code: Select all

sudo apt-get install git
sudo apt-get install cmake

mkdir raspberrypi
cd raspberrypi
git clone https://github.com/raspberrypi/userland.git
cd userland
./buildme
6. Zoneminder looks for a systemd service when started, so create it here

Code: Select all

sudo  nano /lib/systemd/system/zoneminder.service
*Paste the following

Code: Select all


# ZoneMinder systemd unit file
# This file is intended to work with all Linux distributions
[Unit]
Description=ZoneMinder CCTV recording and security system
After=network.target mysql.service apache2.service
Requires=mysql.service apache2.service
[Service]
User=www-data
Type=forking
ExecStart=/usr/bin/zmpkg.pl start
ExecReload=/usr/bin/zmpkg.pl restart
ExecStop=/usr/bin/zmpkg.pl stop
PIDFile=/var/run/zm/zm.pid
[Install]
WantedBy=multi-user.target
7. Create the tmpfiles directory with systemd

*Create a file called zoneminder.conf

Code: Select all

sudo nano /etc/tmpfiles.d/zoneminder.conf
*Paste or enter the following to above file

Code: Select all

d /var/run/zm 0755 www-data www-data
d /var/tmp/zm 0755 www-data www-data
*Then save the file.

*Change permissions on the file

Code: Select all

sudo chmod 755 /etc/tmpfiles.d/zoneminder.conf
* Let systemd process the file

Code: Select all

sudo /bin/systemd-tmpfiles --create /etc/tmpfiles.d/zoneminder.conf
* Enable the zoneminder systemd service

Code: Select all

sudo systemctl enable zoneminder.service
8. Make the changes to apache2

* Enable some modules

Code: Select all

sudo a2enmod cgi
sudo a2enmod rewrite
sudo systemctl reload apache2
*If zoneminder.conf is not enabled by now in apache2, enable it and reload apache.
*Verify that zoneminder.conf is in /etc/apache2/conf-available, then

Code: Select all

sudo a2enconf zoneminder
sudo systemctl reload apache2
9. Adjust the time/date in php.ini

Code: Select all

sudo nano /etc/php/7.0/apache2/php.ini
sudo chown -R www-data:www-data /usr/share/zoneminder/
10. The database zm should have been created by now. If not create with:

Code: Select all

mysql < /usr/share/zoneminder/db/zm_create.sql 
mysql -e "grant select,insert,update,delete,create on zm.* to 'zmuser'@localhost identified by 'zmpass';"
*Modify the tables

Code: Select all

sudo mysql -uroot
use zm;

ALTER TABLE Monitors modify column Type enum('Local','Remote','File','Ffmpeg','Ffmpeghw','Libvlc','cURL') NOT NULL default 'Local';
ALTER TABLE MonitorPresets modify column Type enum('Local','Remote','File','Ffmpeg','Ffmpeghw','Libvlc','cURL') NOT NULL default 'Local';
ALTER TABLE Controls modify column Type enum('Local','Remote','Ffmpeg','Ffmpeghw','Libvlc','cURL') NOT NULL default 'Local';
alter table Monitors modify column Function enum('None','Monitor','Modect','Mvdect','Record','Mocord','Nodect') NOT NULL default 'Monitor';


11. Install the modified zoneminder.deb. Hopefully postinst completes with no issue.

Code: Select all

sudo dpkg -i ./zoneminder.deb
*If the postinst script gets stuck and asks to do a "sudo dpkg --configure -a" which just loops the postinst script, then

Code: Select all

sudo dpkg --remove --force-remove-reinstreq zoneminder
*If ZM complains of an unreadable zm.conf:

Code: Select all

sudo chmod 740 /etc/zm/zm.conf
sudo chown root:www-data /etc/zm/zm.conf

*If you can't access the webserver and UFW is in use:

Code: Select all

sudo ufw allow http
*If ZM complains of not having write access to the events and images directories:

Code: Select all

sudo chown -R www-data:www-data /usr/share/zoneminder/
*The events, images and temp directories are symlinks to /var/cache/zoneminder

Code: Select all

sudo chown -R www-data:www-data /var/cache/zoneminder/
*If you see "Cannot validate swap, disabling buffered playback", edit /etc/zm/zm.conf and add:

Code: Select all

ZM_PATH_SWAP=/tmp
* After all of the above, it is probably best to:

Code: Select all

sudo systemctl reload apache2
sudo service zoneminder restart
12. Access the Web UI and

* Add New Monitor

Code: Select all

Select Ffmpeg and Mvdect for software h264 decoding via ffmpeg with motion vector extraction.
Select Ffmpeghw and Mvdect for hardware h264 decoding via ffmpeg using mmal and motion vector extraction via hardware mmal h264 encoder.
Hardware Scaling is used when Source=Ffmpeghw Function=Mvdect
Software Scaling is used when source=Ffmpeg Function=Modect|Mvdect
Software Scaling is used when Source=Ffmpeghw Function=Modect

*For an sv3c camera, the settings are:

Code: Select all

Source path = rtsp://mysv3c.chris.net/12
Remote Method = TCP
Capture width = 640
Capture height = 360


Source path = rtsp://mysv3c.chris.net/11
Remote Method = TCP
Capture width = 1280
Capture height = 720
***24 bit color 1920x1080 possible with Modect and GPU MEM set at 256. This is the highest resolution that I could configure and with just one camera. Lower resolutions could probably get by with GPU MEM 128.

***I could only test with one camera at 704x480. In theory, it should be able to run multiple cameras. Just keep an eye on memory consumption. The kernel uses 25% of the 1GB RAM already, and with 256 MB set aside for GPU MEM, that leaves ~500 MB for all the cameras, mysql and any other running processes.

My current system is a PI3B with 16GB SDCARD. I don't have a hard disk attached so the jpegwriter and mysql are writing to the SD card.


* Define the Zone settings for this camera.

Code: Select all

Click the Zones associated with the Monitor and set the Alarm Check Method to "AlarmPixels".
Put a reasonable value in Min Alarmed Area. This value is the percentage of the polygon area that needs
to be covered by macroblocks to label the frame as motion detected. It's an all-or-none score.


***The logic of the motion vector score calculation:
The Min Alarmed Area is taken as a percentage of the total polygon area. Each vector is weighted as covering 16x16 pixels and is given 4x weight, meaning 25% pixel coverage is considered 100% coverage. The frame is marked as motion detected as soon as the Min Alarmed Area value is reached, and zma then quits analysis. The score is therefore all or none: 0 or 100%.

My Settings:

Code: Select all

Camera is set at 10 fps on its web UI
Source Type = Ffmpeghw
Function =  Mvdect
Enabled = checked
Reference Image Blend %ge = No Blending
Alarm Reference Image Blend %ge = No blending (Alarm lasts forever)


Source path = rtsp://mysv3c.chris.net/12
Remote Method = TCP
Target Colorspace = 24 bit color
Capture width = 640
Capture height = 360

Image Buffer size = 150
Warm Up frames = 0
Pre Event Image Count = 10
Post Event Image Count =10

Zoneminder Config
WATCH_MAX_DELAY = 10
WATCH_CHECK_INTERVAL = 20
OPT_ADAPTIVE_SKIP = checked
CREATE_ANALYSIS_FRAMES = unchecked

13. Optimizations

* No need to blend the image because we don't use pixel analysis. This greatly reduces zma's cpu utilization:

Code: Select all

Set Reference Image Blend %ge to No Blending.
Set Alarm Reference Image Blend %ge to No Blending.
*Reduce the framerate of your rtsp camera. This will be done on the camera side web UI.

*Edit Monitor Options and under Buffers (Image Buffer Size) increase to a higher number

*Uncheck Create_Analysis_Images

*Adjust WATCH_MAX_DELAY if it keeps restarting zma.

*Set Warm-Up Frames to 0 since we don't need to generate a reference image.



*Use a fast hard disk to ensure jpeg writing and mysql writes don't delay zma.

*Use grayscale image color depth.


Other optimizations (don't know if relevant):

?In Options config OPT_ADAPTIVE_SKIP needs to be checked

?Set Analysis Update Delay

<From Google> Currently, video analysis is performed at full framerate, which can lead to high cpu load.
In order to reduce the cpu load, this PR adds an analysis interval parameter to the monitor settings.
The default value is 1, meaning all images are processed.
If a greater value is set, for example 5, only images whose count is a multiple of 5 will be processed.

The reason for this is to make it possible to reduce CPU-load if split-second motion detection is not required.

?Set Motion Frame Skip

<From Google> There is a new setting "Motion Frame Skip" under the "Misc" tab in the Monitor settings where you configure the value.
The default value is 0, so the feature is not active by default.
The feature works like Frame Skip. That is, if the value is "5" every 6th image will be analyzed for motion. The reference image is updated every frame to not change the effect of current blend settings.





14. Building the zoneminder deb

*Install the necessary packages for building

Code: Select all

 sudo apt-get install ccache
 sudo apt-get install build-essential fakeroot devscripts
*Install the dependencies for building

Code: Select all

sudo apt-get install libnetpbm10-dev 
sudo apt-get install libvlccore-dev
sudo apt-get install libgcrypt11-dev
sudo apt-get install libssl-dev

sudo apt-get build-dep zoneminder
*Clone the repo

Code: Select all

git clone https://github.com/cmisip/ZoneMinder.git
cd ZoneMinder
There are two working branches right now that I am experimenting with. sws-reboot-4b has both hardware and software motion vectors working. It is an attempt to optimize data transfer using 4-byte writes when each vector only has 2 bytes of data. The polygon test is done in real time with each frame in zma, and this is causing some buffer overruns. Because of this, the scoring system is switched to all-or-none, where zma quits with the status set to motion detected as soon as it has satisfied the minimum score.

Code: Select all

git checkout sws-reboot-4b
The other branch is sws-reboot-4b-vmask. This is an attempt to simplify the work of zma by moving the polygon test into zma's initialization. It should run faster than sws-reboot-4b. However, software motion vectors are turned off because I haven't figured out how to implement a common routine for extracting hardware and software vectors using bit operations. Zma now processes all the vectors in a frame and is able to provide a more meaningful score.

Code: Select all

git checkout sws-reboot-4b-vmask

Code: Select all

cp -rf distros/debian ./
* Download the web api

Code: Select all

git submodule update --init --recursive
* Edit /etc/apt/sources.list and uncomment the deb-src line

Code: Select all

sudo apt-get update
* Make changes to the source.
* Build the package.

Code: Select all

cd cmake
cmake ../
cd ..
debuild --prepend-path=/usr/lib/ccache -us -uc -b
Issues:

This is about 4000 commits behind zoneminder master, so it will look and work differently from the most recent builds.

1. [Unable to send packet at frame 0: Resource temporarily unavailable, continuing] in the zm logs. I think this is normal, due to the mmal system taking a little while to actually start to accept and receive frames.

2. The jpeg encoding shows horizontal green lines on the right side at 704x480. This is present in the official zoneminder version also; I think it is a libjpeg issue.

3. Frames jumping back in time or skipping. This might be due to an overburdened system where the capture and analysis components are being restarted by zoneminder. If you can avoid buffer overruns, this should be minimized.

4. Alignment trap might show up here and there.

5. I have done all this work natively on ARM, so it probably won't compile on x86.

6. Don't do any resizing of video frames in zoneminder config. That option is not available here as I am using an earlier version of the fork that did not deal with component connections. The swscale replacement only does format conversion, not scaling.

7. I suggest low resolutions right now, preferably with the x dimension a multiple of 32 and the y dimension a multiple of 16 as the native dimensions of the ffmpeg camera (not resized dimensions). At 704x480, I needed to reduce the work of zma quite a bit to avoid buffer overruns. Higher resolutions mean more macroblock info for zma to process.


Please try at your own risk. I am not responsible for wasted time, effort, or hardware. I have only done very limited testing (hand waving in front of the camera). It might crash in the morning, on rainy days, or every day.

If there is an expert here with regards to figuring out memory issues, I would appreciate some assistance with the code. Have Fun.

Thanks,
Chris
