
Re: Raspberry pi3 and h264_mmal

Posted: Thu Aug 03, 2017 2:26 am
by cmisip
I think the alignment trap errors are gone. I've made progress on the hardware swscale replacement. The mmal splitter component can do format conversions, so I am using it to convert to RGBA, RGB24, and YUV420 (grayscale obtained by taking only the luma). The mmal resizer component only supports RGBA, so the two will have to be chained to create the full swscale replacement. As it stands right now, the RPI3 is able to decode multiple streams in hardware without using ffmpeg's swscale (but with no resizing yet).

It also turns out that the RPI GPU only supports resolutions in multiples of 16: 640x360 is rounded up to 640x368. When the mmal components round up to the next multiple of 16, the directbuffer or mv_buffer is not adequately sized to receive the data if I size them from the user's width and height. I noticed the problem as a green band across the lower part of the image in zms at that resolution; it went away when I switched to 704x480. I will have to account for that by using the encoder's values instead of the user-entered width and height.

It also turns out that no libjpeg-turbo package in Raspbian provides the jpeg 8 ABI. I needed to repackage the libjpeg62-turbo source with the jpeg 8 ABI into files that satisfy zoneminder's dependency requirements.
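
To illustrate the sizing mismatch, here is a minimal sketch (align_up is a hypothetical helper; the GPU does the equivalent internally):

Code: Select all

#include <cstdio>

// Round a dimension up to the next multiple of a power-of-two alignment.
static unsigned align_up(unsigned x, unsigned a) {
    return (x + a - 1) & ~(a - 1);
}

int main() {
    unsigned w = 640, h = 360, bpp = 4;  // RGBA
    unsigned gh = align_up(h, 16);       // 360 -> 368
    printf("buffer sized from user dims:   %u bytes\n", w * h * bpp);  // 921600
    printf("buffer the GPU actually fills: %u bytes\n", w * gh * bpp); // 942080
    // The 8 extra padded rows are where the green band came from when the
    // destination buffer was allocated from the user's width and height.
    return 0;
}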

Chris

Re: Raspberry pi3 and h264_mmal

Posted: Thu Aug 03, 2017 1:27 pm
by iconnor
Sounds like great work. I spent an annoying amount of time yesterday trying to get h264_qsv to work. In the end I don't think it ever will, as it seems to require an active X display. I played a bit more with h264_vaapi.

My concern at this point is code structure. It seems like each hwaccel codec is going to require a lot of extra code to support, so we need to structure the code well. I will be taking a look at your code in the very near future.

Re: Raspberry pi3 and h264_mmal

Posted: Fri Aug 04, 2017 1:01 pm
by cmisip
This is my first time working on an open source project, and my first time using github, and truthfully, I did not check the docs for code structure requirements. I did make significant changes across a lot of the source code; hopefully it is not too big of a mess to sort out. At least the mechanics should be evident. I did disable some of the neon code in zm_image.cpp. I didn't want to do that, but I would not have been able to build a working package otherwise: the neon code gets compiled in even if ZM_STRIP_NEON is defined, resulting in a panic. Those changes will have to be reversed once neon is up and running on raspbian.

I am also using this thread to post ideas and solutions to obstacles I've encountered, so that if the code ever gets merged in some form, others will have some notes on the implementation. MMAL development is kinda cryptic. There is not much formal documentation, but there is a good team of engineers in the RPI forums trying to answer sensible questions.

I am currently trying to implement a pipeline:

Code: Select all

resizer -> splitter output 0 -> directbuffer
           splitter output 1 -> h264 encoder -> mvect_buffer
I am finding out that the resizer does not work with the MMAL_ENCODING_OPAQUE format, which means full data buffer copies need to happen between connected components in the pipeline. Likewise, the h264 encoder did not work with MMAL_ENCODING_OPAQUE; whether that is a consequence of being in a pipeline or it is just not designed to be used that way, I am not sure. Since a full copy of the buffers is required, which entails a trip to ARM space instead of staying in the GPU plus the cost of the copy itself, resizing carries a performance hit. And since it is in series with the ffmpeg decoder that feeds it buffers (another copy operation), the pipeline speed depends on the speed of the ffmpeg decoder as well.

Long story short, it's best to avoid resizing and have the camera feed an appropriately sized video frame to zoneminder. A check needs to be made that the user-supplied width and height match the decode context width and height, and that the frame's x and y resolution is a multiple of 16 (see the sketch below). I think I already mentioned earlier that the resizer only outputs RGBA or I420, which is why the splitter connection was necessary: the splitter can do format conversion from I420 to RGBA and RGB24, but can't resize. I am doing all this in small test code, so it's not in the branch yet. Hope to get it done soon once (if) I get it working.
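
A sketch of that check (hypothetical helper, illustrating the idea rather than actual branch code):

Code: Select all

// Reject configurations the hardware path can't serve exactly: the
// user-supplied size must match what the decode context reports, and
// both dimensions must already be multiples of 16 so the mmal
// components don't round them up behind our back.
static bool hw_path_usable(int user_w, int user_h,
                           int codec_w, int codec_h) {
    if (user_w != codec_w || user_h != codec_h)
        return false;                 // have the camera send this size
    return (user_w % 16 == 0) && (user_h % 16 == 0);
}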

There are optimizations that could be done down the road such as :
1. Reducing the size of the video frame sent to the encoder in order to reduce the number of motion vectors that need to be processed. This should greatly reduce cpu utilization. Then have a full, unresized frame sent to directbuffer for streaming. But that entails two resizers in the pipeline.
2. Use the hardware jpeg encoder. I don't know where to put it yet. I probably don't want it in zmc, as that means jpeg buffers would need to be mmapped as well, increasing memory requirements. Plus, we don't really need to jpeg-encode all frames. Perhaps it could be in zms.
3. A native mmal h264 decoder could be implemented, which would remove the reliance on a specially built ffmpeg.
4. The h264 encoder is already being used to output motion vectors; why not also process the video frame data into an h264 stream.
5. Software and hardware motion vector processing require different buffer sizes. The hardware side requires a smaller mvect_buffer, especially if the resolution is downscaled before being sent to the encoder.
6. Only engage zma's mvect_buffer motion analysis if we think we have enough vectors to satisfy a minimum user-specified score.
7. Second-tier motion vector validation by checking for minimum-sized clusters (performance hit).
8. Third-tier motion vector validation by checking whether the vectors in a cluster are oriented in the same direction (performance and memory hit, as the vector components need to be added to the motion_vector struct; see the sketch below).
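
As an illustration of items 7 and 8, a hypothetical extension of the vector struct (names assumed; not the branch's actual definition):

Code: Select all

#include <cmath>

// One macroblock's motion vector, extended with its components so a
// cluster can be checked for a consistent direction (idea 8). The extra
// two ints per vector are the memory hit mentioned above.
struct motion_vector {
    int x, y;      // macroblock position
    int dx, dy;    // vector components
    int cluster;   // cluster id assigned during validation (idea 7)
};

// Two vectors "agree" when the angle between them is small; the dot
// product avoids trig: cos(theta) > 0.9 is roughly under 26 degrees.
static bool same_direction(const motion_vector &a, const motion_vector &b) {
    double dot = double(a.dx) * b.dx + double(a.dy) * b.dy;
    double na  = std::sqrt(double(a.dx) * a.dx + double(a.dy) * a.dy);
    double nb  = std::sqrt(double(b.dx) * b.dx + double(b.dy) * b.dy);
    return na > 0 && nb > 0 && dot / (na * nb) > 0.9;
}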

Bugs:
1. I notice that the frames in zms are sometimes out of order. Don't know how to fix that yet. Perhaps having a single pipeline with one entry point would fix the issue. Right now buffers are supplied to two entry points: encoder input and splitter input.
2. I had to make some changes to zmpkg.pl (not in the actual code branch) to get zoneminder to start

Code: Select all

#Fix zmpkg.pl error with zmsystemctl.pl
cd scripts
sed -i -- '/systemdRunning.*calledBysystem/,+3d'  zmpkg.pl.in
cd ..
3. The web UI stops working after a while (at least 6 hours of running). I don't know if this is related to bug 2 above. However, systemctl reports that it is still running, and the processes are still alive and can be accessed with an app. I fudged this by always having daemoncheck return 1 so the web UI thinks that zoneminder is running. The camera IP addresses are red, but the Monitors are clickable and work fine.

Seems like the RPI is a very viable standalone system for zoneminder, kinda like zoneminder in a box, once further optimizations are implemented. When the RPI4 rolls out, hopefully all these issues will have been ironed out.

I have just another week to work on this project because I am losing the cameras; they have to be shipped out. I'll do what I can. I can supply a package for people to test if there is a place to host it and it doesn't violate any rules. However, I won't be able to do any development for a while. I will also try to put up a howto on how to get it working. No guarantees.

Chris

Re: Raspberry pi3 and h264_mmal

Posted: Fri Aug 04, 2017 2:22 pm
by iconnor
Can you continue using a remote camera? Lots of online streams. Alternatively, we could probably ship you something or fund you. Another alternative would be to use a pre-recorded file as input.

I will try to work with mastertheknife to get the neon stuff fixed up.

In terms of a jpg encoder, it would be nice to have that in zms, although zma does a fair amount of jpg writing as well. (We do zone dump images, and writing of analysis images, etc).

The frames out of order in zms is interesting. I have seen that when running two copies of zmc simultaneously.

Re: Raspberry pi3 and h264_mmal

Posted: Sun Aug 06, 2017 1:43 am
by cmisip
I'll probably take a break for a few weeks. A lot of the work is already in the motion_vectors branch. It's not clean yet, but it's got basic functionality. I suggest you guys look at the code first and see if there is anything worth merging. I could always be "persuaded" to continue, but I can't promise anything.

I really learn as I go. It turns out that there is a super secret efficient resizer in mmal that is undocumented in the headers and is present in the latest firmware: vc.ril.isp can resize and convert to RGBA, RGB24 and GRAY8 (via I420), so there is no need for a splitter component.

I also found out that mmal can handle any resolution; it does not need to be a multiple of 16. This is because VCOS_ALIGN_UP(width,32) and VCOS_ALIGN_UP(height,16) increase the buffer resolution to these multiples. The resulting converted buffer is bigger, and the actual frame data in its original resolution is embedded in it with some padding. So the best way to handle hardware resizing is to apply vcos_align_up in zm_monitor.cpp just before the creation of the ffmpeg camera object. That way, the picture buffer allocated in zm_image will be appropriately sized for the inflated buffer. Otherwise, if the adjustment is done inside the ffmpeg_camera object, zoneminder complains about the held buffer being too small and fails. This also ensures that the directbuffer is large enough to be memcpied into from the mmal buffer; otherwise, the software jpeg encoder gets confused and outputs garbage.

Changing the resolution does mean that the zone dimensions need to be adjusted to fit the scaled-down or scaled-up frame. I haven't put that in yet. Somehow that resolution change information needs to get to zma so it can scale the zone polygons, or the zone polygons need to be adjusted automatically with each resolution change.
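
To make the vc.ril.isp discussion concrete, here is a skeletal setup of that component as I understand it from the userland headers; a sketch under the assumption that the decoder outputs I420, not code from the branch:

Code: Select all

#include "interface/mmal/mmal.h"
#include "interface/vcos/vcos.h"

// Create a vc.ril.isp component that resizes and converts I420 -> RGB24.
// Buffer pools, callbacks and error text are omitted; returns NULL on failure.
static MMAL_COMPONENT_T *make_isp(unsigned in_w, unsigned in_h,
                                  unsigned out_w, unsigned out_h) {
    MMAL_COMPONENT_T *isp = NULL;
    if (mmal_component_create("vc.ril.isp", &isp) != MMAL_SUCCESS)
        return NULL;

    // Input: the decoder's I420 frames, padded the way the GPU pads them,
    // with the crop rectangle carrying the real resolution.
    MMAL_ES_FORMAT_T *in = isp->input[0]->format;
    in->encoding = MMAL_ENCODING_I420;
    in->es->video.width  = VCOS_ALIGN_UP(in_w, 32);
    in->es->video.height = VCOS_ALIGN_UP(in_h, 16);
    in->es->video.crop.x = 0;  in->es->video.crop.y = 0;
    in->es->video.crop.width  = (int)in_w;
    in->es->video.crop.height = (int)in_h;

    // Output: resized RGB24 (RGBA, or I420 for the GRAY8 path, work the same way).
    MMAL_ES_FORMAT_T *out = isp->output[0]->format;
    out->encoding = MMAL_ENCODING_RGB24;
    out->es->video.width  = VCOS_ALIGN_UP(out_w, 32);
    out->es->video.height = VCOS_ALIGN_UP(out_h, 16);
    out->es->video.crop.x = 0;  out->es->video.crop.y = 0;
    out->es->video.crop.width  = (int)out_w;
    out->es->video.crop.height = (int)out_h;

    if (mmal_port_format_commit(isp->input[0])  == MMAL_SUCCESS &&
        mmal_port_format_commit(isp->output[0]) == MMAL_SUCCESS &&
        mmal_component_enable(isp)              == MMAL_SUCCESS)
        return isp;

    mmal_component_destroy(isp);
    return NULL;
}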

The alignment_trap shows up every now and then, so it is not completely gone, but I don't see it frequently. Somebody who knows more about that stuff could look at the code. It happens only on some shutdowns, and only on some pairs of zma's and zmc's.

I will see if I can get the parallel branch motion_vectors_pipeline up and running. It will use connections between components, so the buffer is fed in at only one entry point and multiple exit points can output specific buffer data types. This is what I was working on until I found out about vc.ril.isp, and the connections will be simpler than I envisioned because of it. I think I will enable the splitter component and connect its first output to a vc.ril.isp to resize/format-convert and save into directbuffer, while the second splitter output will connect to another vc.ril.isp component to downscale before sending the video to the encoder. That way, we have higher resolution frames in zms while zma is analyzing vectors from a lower resolution frame. As I mentioned earlier, there might be a performance penalty here because full data buffers are copied across components, which is why I created a separate branch. I have to see if the splitter component can pass opaque buffers to vc.ril.isp; that would at least remove one full data buffer copy.

Regarding the out-of-sequence frames in zms: mmal can return a pts value (if a buffer->pts value is supplied on the mmal input side, perhaps using the frame number as the pts), and it could be saved along with directbuffer, or a separate class member could save the value for each image. That might help with sorting frames when building the zms event stream.
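
A sketch of that idea, assuming the capture loop keeps its own frame counter (illustrative, not code from the branch):

Code: Select all

#include "interface/mmal/mmal.h"
#include <stdint.h>

static int64_t frame_count = 0;

// Input side: stamp each buffer handed to mmal with the capture frame
// number; mmal carries pts through the pipeline.
static void stamp_input(MMAL_BUFFER_HEADER_T *buf) {
    buf->pts = frame_count++;
}

// Output side: the stamp comes back with the converted frame and can be
// stored next to the image so zms can sort by it when building the
// event stream.
static void output_cb(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buf) {
    int64_t seq = buf->pts;   // original capture order
    // ... copy buf->data into the image ring, remembering seq ...
    (void)port; (void)seq;
    mmal_buffer_header_release(buf);
}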

Chris

Re: Raspberry pi3 and h264_mmal

Posted: Tue Aug 08, 2017 2:04 am
by iconnor
Ok, the thing is, if you are just going to drop this, then we will never merge. We have too many cases of merging code that none of us understand, and then the author is gone.

Our job here is to encourage you to continue, and stick around. How can we do that? What do you need? By all means, take a break, but PLEASE stick around. This is exciting stuff.

Re: Raspberry pi3 and h264_mmal

Posted: Tue Aug 08, 2017 3:18 am
by cmisip
I deleted motion_vectors_pipeline but created motion_vectors_connect. The motion_vectors_connect branch uses mmal connections between components instead of the two separate pipelines in motion_vectors. The last update on motion_vectors was heavily commented for reference. motion_vectors_connect is a different design decision, and I am still not sure which is better performance-wise.

The motion_vectors branch implemented:

Code: Select all

avcodec --> h264 encoder --> mvect_buffer

avcodec --> resizer --> directbuffer
With motion_vectors_connect, I implemented the following:

Code: Select all

avcodec --> splitter --> resizere --> h264 encoder --> mvect_buffer
               |
               +-------> resizerd --> directbuffer


The resizere component downscales the video prior to motion vector extraction. The downscale target is 320x240, 640x480 or 960x720, which is user-configurable; I used the Options field in the Source tab of the Monitor to specify low, medium or high. I figured out how to rescale the zone polygons when the monitor is loaded, so the size of the frame no longer needs to be passed to zma via mmap. This also means the rescaling of the zones does not need to happen during the polygon inclusion test in zma, which would have been a performance hit.
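
A sketch of that rescaling, with a simplified point list standing in for the real Zone polygon class (hypothetical types):

Code: Select all

#include <vector>

struct Point { int x, y; };

// Scale the zone polygon once, when the monitor is loaded, from the
// configured capture size to the downscaled analysis size. Doing it here
// keeps the per-frame polygon inclusion test in zma unchanged.
static void rescale_zone(std::vector<Point> &poly,
                         int cap_w, int cap_h, int ana_w, int ana_h) {
    for (Point &p : poly) {
        p.x = p.x * ana_w / cap_w;
        p.y = p.y * ana_h / cap_h;
    }
}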

Things are working, but I am still bugged by the out-of-sequence frames. I don't know what is causing that. There is only one buffer feeding the pipeline, and buffers received from resizerd are checked to make sure the pts value is not older than the avcodec frameCount.

I found a clue though. When watching a live stream from the camera using zms, I noticed the timestamp on the upper right move back in time and then jump forward again. Would you have an idea what could be causing that? It does not happen all the time. Event playback also does the same thing, seemingly jumping back and forth at random.

I will be sticking around after taking a break. I am learning a lot doing this so I do have some benefit from it. I am just not sure of the value of this work eventually to your project as I don't know if it ever will be practical.

Chris

Re: Raspberry pi3 and h264_mmal

Posted: Tue Aug 08, 2017 11:15 am
by cmisip
Fingers crossed, but I think I may have found the reason for the frame sequence being messed up. I looked at an event that triggered this morning, and it had frames from an event last night. I think maybe it's due to the directbuffer being held. I set that to false, and in the events I have checked so far, the problem seems to have gone away. I will keep the system running until the end of the day to see if this truly fixes the problem. I wonder what is causing it though. I would have expected the sequence of image_buffer objects to always have their directbuffers overwritten with new buffer data from mmal; I guess some image_buffer objects are not being overwritten. Maybe I need to zero out the held buffers, at least to keep from having to reallocate, but I don't know how the event streamer will handle that. Will it simply skip the image_buffer objects with zeroed-out directbuffers?

Chris

Re: Raspberry pi3 and h264_mmal

Posted: Tue Aug 08, 2017 4:42 pm
by cmisip
I think that is it. I'll see if I can get it to work with holding directbuffer but zeroing it out with memset. Finally, it is becoming more useable and practical.

Chris

Re: Raspberry pi3 and h264_mmal

Posted: Tue Aug 08, 2017 11:07 pm
by cmisip
Drats. That's not it.

So I disabled the use of vc.ril.isp for now and restored the use of swscale on the hardware side. Motion detection works for both software and hardware decode. The downscale step before the hardware encoder works, as does the use of the Options field in the Source tab for the downscale mode (even though it complains about invalid options when they are checked against valid ffmpeg options). The only part that does not work is the copying of buffer data to directbuffer from vc.ril.isp; somehow it messes up the event streamer, with the timestamp going backwards and jumping forwards. This doesn't happen with swscale, so the problem is probably on the mmal side. This is probably my last attempt for a little while. I hope I didn't break anything else in the last update to motion_vectors_connect.

Thanks,
Chris

Re: Raspberry pi3 and h264_mmal

Posted: Fri Oct 13, 2017 10:14 am
by iconnor
I'm going to be looking at merging your work. Mostly for use with an nvidia tegra board.

Re: Raspberry pi3 and h264_mmal

Posted: Thu Feb 22, 2018 5:28 pm
by iconnor
I have finally added the few lines required to use h264_mmal for h264 decoding. It seems to work quite well.

Re: Raspberry pi3 and h264_mmal

Posted: Fri Feb 23, 2018 1:03 am
by snake
I'm going to be testing this out. I'll write up a howto on it, if I can get it working.

Re: Raspberry pi3 and h264_mmal

Posted: Wed Jul 11, 2018 10:51 pm
by cmisip
Here is a howto for building this. It's an earlier version of the fork which does not use mmal component connections. I thought I'd work on a simpler version. Before you invest time or effort or hardware in this, understand that it may not work. Do so at your own risk. :)

1. Install Raspbian Lite Stretch. Update the system

Code: Select all

sudo apt-get update
sudo apt-get dist-upgrade
2. Install Zoneminder and its dependencies including ffmpeg

Code: Select all

sudo apt-get install zoneminder
*ffmpeg is installed as a dependency and it is already compiled with mmal support.

3. Allow user pi and www-data to access the video encoder

Code: Select all

sudo usermod -a -G video pi
sudo usermod -a -G video www-data
4. Install other dependencies of Zoneminder

Code: Select all

sudo apt-get install libmp4v2-2 
sudo apt-get install libapache2-mod-fcgid
sudo apt-get install libexpect-perl
sudo apt-get install netpbm 
sudo apt-get install php-gd
sudo apt-get install libssl-dev
*You probably ended up with libjpeg62-turbo and jpeg8 as a dependency. Uninstall zoneminder and uninstall libjpeg8.

Code: Select all

sudo apt-get remove zoneminder
sudo apt-get remove libjpeg8 

*Build jpegturbo with jpeg8 support

Code: Select all

sudo apt-get build-dep libjpeg62-turbo
apt-get source libjpeg62-turbo
cd libjpeg-turbo-1.5.1    # directory name depends on the version fetched
*edit debian/rules and add --with-jpeg8 to the following line

Code: Select all

   dh_auto_configure -- --with-build-date=$(DEB_VERSION) $(DISABLE_SIMD) --with-jpeg8
*edit debian/libjpeg62-turbo.install and change the entry to:

Code: Select all

   usr/lib/arm-linux-gnueabihf/libjpeg.so.8*
*Build libjpeg-turbo

Code: Select all

debuild -us -uc -b

*Install the packages.

Code: Select all

sudo dpkg -i ./libjpeg-dev_1.5.1-2_all.deb
sudo dpkg -i ./libjpeg62-turbo-dev_1.5.1-2_armhf.deb
sudo dpkg -i ./libturbojpeg0-dev_1.5.1-2_armhf.deb
sudo dpkg -i ./libjpeg62-turbo_1.5.1-2_armhf.deb
sudo dpkg -i ./libturbojpeg0_1.5.1-2_armhf.deb
sudo dpkg -i ./libjpeg-turbo-progs_1.5.1-2_armhf.deb
*Just remember not to let a system update overwrite these packages with the official ones with jpeg62 support.


5.Build and install the libraries for RPI userland

Code: Select all

sudo apt-get install git
sudo apt-get install cmake

mkdir raspberrypi
cd raspberrypi
git clone https://github.com/raspberrypi/userland.git
cd userland
./buildme
6. Zoneminder looks for a systemd service when started, so create it here

Code: Select all

sudo  nano /lib/systemd/system/zoneminder.service
*Paste the following

Code: Select all


# ZoneMinder systemd unit file
# This file is intended to work with all Linux distributions
[Unit]
Description=ZoneMinder CCTV recording and security system
After=network.target mysql.service apache2.service
Requires=mysql.service apache2.service
[Service]
User=www-data
Type=forking
ExecStart=/usr/bin/zmpkg.pl start
ExecReload=/usr/bin/zmpkg.pl restart
ExecStop=/usr/bin/zmpkg.pl stop
PIDFile=/var/run/zm/zm.pid
[Install]
WantedBy=multi-user.target
7. Create the tmpfiles directory with systemd

*Create a file called zoneminder.conf

Code: Select all

sudo nano /etc/tmpfiles.d/zoneminder.conf
*Paste or enter the following to above file

Code: Select all

d /var/run/zm 0755 www-data www-data
d /var/tmp/zm 0755 www-data www-data
*Then save the file.

*Change permissions on the file

Code: Select all

sudo chmod 755 /etc/tmpfiles.d/zoneminder.conf
* Let systemd process the file

Code: Select all

sudo /bin/systemd-tmpfiles --create /etc/tmpfiles.d/zoneminder.conf
* Enable the zoneminder systemd service

Code: Select all

sudo systemctl enable zoneminder.service
8. Make the changes to apache2

* Enable some modules

Code: Select all

sudo a2enmod cgi
sudo a2enmod rewrite
sudo systemctl reload apache2
*If zoneminder.conf is not enabled by now in apache2, enable it and reload apache.
*Verify that zoneminder.conf is in /etc/apache2/conf-available, then

Code: Select all

sudo a2enconf zoneminder
sudo systemctl reload apache2
9. Adjust time/date in php.ini

Code: Select all

sudo nano /etc/php/7.0/apache2/php.ini
sudo chown -R www-data:www-data /usr/share/zoneminder/
10. The database zm should have been created by now. If not, create it with:

Code: Select all

mysql < /usr/share/zoneminder/db/zm_create.sql 
mysql -e "grant select,insert,update,delete,create on zm.* to 'zmuser'@localhost identified by 'zmpass';"
*Modify the tables

Code: Select all

sudo mysql -uroot
use zm;

ALTER TABLE Monitors modify column Type enum('Local','Remote','File','Ffmpeg','Ffmpeghw','Libvlc','cURL') NOT NULL default 'Local';
ALTER TABLE MonitorPresets modify column Type enum('Local','Remote','File','Ffmpeg','Ffmpeghw','Libvlc','cURL') NOT NULL default 'Local';
ALTER TABLE Controls modify column Type enum('Local','Remote','Ffmpeg','Ffmpeghw','Libvlc','cURL') NOT NULL default 'Local';
ALTER TABLE Monitors modify column Function enum('None','Monitor','Modect','Mvdect','Record','Mocord','Nodect') NOT NULL default 'Monitor';


11. Install the modified zoneminder.deb. Hopefully postinst completes with no issue.

Code: Select all

sudo dpkg -i ./zoneminder.deb
*If the postinst script gets stuck and asks you to run "sudo dpkg --configure -a", which just loops the postinst script again, then

Code: Select all

sudo dpkg --remove --force-remove-reinstreq zoneminder
*ZM complains of unreadable zm.conf

Code: Select all

sudo chmod 740 /etc/zm/zm.conf
sudo chown root:www-data /etc/zm/zm.conf

*Can't access the webserver if UFW is in use

Code: Select all

sudo ufw allow http
*ZM complains of not having write access to events and images directories

Code: Select all

sudo chown -R www-data:www-data /usr/share/zoneminder/
*The events, images and temp directories are symlinks to /var/cache/zoneminder

Code: Select all

sudo chown -R www-data:www-data /var/cache/zoneminder/
*If ZM complains "Cannot validate swap, disabling buffered playback", edit /etc/zm/zm.conf and add

Code: Select all

ZM_PATH_SWAP=/tmp
*After all the above, it is probably best to

Code: Select all

sudo systemctl reload apache2
sudo service zoneminder restart
12. Access the Web UI and

* Add New Monitor

Code: Select all

Select Ffmpeg and Mvdect for software h264 decoding via ffmpeg with motion vector extraction.
Select Ffmpeghw and Mvdect for hardware h264 decoding via ffmpeg using mmal and motion vector extraction via the hardware mmal h264 encoder.
Hardware scaling is used when Source=Ffmpeghw and Function=Mvdect.
Software scaling is used when Source=Ffmpeg and Function=Modect|Mvdect.
Software scaling is used when Source=Ffmpeghw and Function=Modect.

*For an sv3c camera, the settings are:

Code: Select all

Source path = rtsp://mysv3c.chris.net/12
Remote Method = TCP
Capture width = 640
Capture height = 360


Source path = rtsp://mysv3c.chris.net/11
Remote Method = TCP
Capture width = 1280
Capture height = 720
1920x1080 is possible with MVDECT.

***I could only test with one camera but in theory, it should be able to run multiple cameras. Just keep an eye on memory consumption. The kernel uses 25% of the 1GB RAM already, and with 256 MB set aside for GPU MEM, that leaves ~500 MB for all the cameras, mysql and any other running processes.

My current system is a PI3B with 16GB SDCARD. I don't have a hard disk attached so the jpegwriter, mp4writer and mysql are writing to the SD card.


* Define the Zone settings for this camera.

Code: Select all

Click the Zones associated with the Monitor and select Alarm Check Method "AlarmPixels".
Put a reasonable value in Min Alarmed Area. This value is the percentage of the polygon area that needs to be covered
by macroblocks to label the frame as motion detected. It's an all-or-none score.


***The logic of the motion vector score calculation:
The Min Alarmed Area is taken as a percentage of the polygon's total area. Each vector is weighted as covering 16x16 pixels and given 4x weight, meaning 25% pixel coverage is counted as 100% coverage. The full frame is tested, so this should give meaningful scores. The alarm centre is not being calculated at this time.
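
A worked sketch of that rule as I read it (hypothetical function; the branch may compute it differently):

Code: Select all

// Score = vectors * (16x16 px each) * 4x weight, as a percentage of the
// zone polygon's area, capped at 100. The frame is alarmed when the
// score reaches the user's Min Alarmed Area percentage.
static int vector_score(long num_vectors, long polygon_area) {
    long covered = num_vectors * 16 * 16 * 4;
    long pct = covered * 100 / polygon_area;
    return pct > 100 ? 100 : (int)pct;
}

// Example: a full-frame 640x360 zone is 230400 px, so 100% is reached at
// 225 vectors (225 * 256 * 4 = 230400), i.e. 25% of the frame's ~900
// macroblocks.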

My Settings:

Code: Select all

Camera is set at 15 fps on its web UI. //Lower this value to solve problems with buffer overruns. 
Source Type = Ffmpeghw
Function =  Mvdect
Enabled = checked
Reference Image Blend %ge = No Blending
Alarm Reference Image Blend %ge = No blending (Alarm lasts forever)


Source path = rtsp://mysv3c.chris.net/12
Remote Method = TCP
Target Colorspace = 24 bit color
Capture width = 640
Capture height = 360

Image Buffer size = 20 // Since I started using an external USB drive for storing events, this number has seemed adequate 
Warm Up frames = 0
Pre Event Image Count = 1 //All JPEGs are created in the capture process. 
Post Event Image Count =60  //increase to have longer events instead of many short ones
Stream Replay Image Buffer = 200
Alarm Frame Count =1  //1 alarm frame is alert only, 2 alarm frames mean record the event as an alarm. 

Storage Tab:
Save JPEGs: Frames only.
Video Writer: H264 Camera Passthrough

Zoneminder Config
WATCH_MAX_DELAY = 10
WATCH_CHECK_INTERVAL = 20
OPT_ADAPTIVE_SKIP = checked
CREATE_ANALYSIS_FRAMES = unchecked

13. Optimizations

* Select a high-bitrate rtsp stream. I think this reduces the encoding artifacts that the mmal encoder might see as motion.

* No need to blend the image because we don't use pixel analysis. This greatly reduces zma's cpu utilization.

Code: Select all

Set Reference Image Blend %ge to No Blending.
Set Alarm Reference Image Blend %ge to No blending. 
*Reduce the framerate of your rtsp camera. This is done in the camera's own web UI.

*Edit Monitor Options and under Buffers (Image Buffer Size) increase to a higher number

*Uncheck Create_Analysis_Images

*Adjust WATCH_MAX_DELAY if it keeps restarting zma.

*Set Warm-Up Frames to 0 since we don't need to generate a reference image.

*Use a fast hard disk to ensure jpeg writing and mysql writes don't delay zma and video storage writes don't delay zmc.

*Disable any debugging or logging options not needed in the Logging tab of Options.


Other optimizations (don't know if relevant):

?In Options config OPT_ADAPTIVE_SKIP needs to be checked

?Set Analysis Update Delay

<From Google> Currently, video analysis is performed at full framerate, this can lead to high cpu load.
In order to reduce the cpu load, this PR add an analysis interval parameter to monitors settings.
Default value is 1, all images are processed.
If a greater value is set, for example 5, only images with count multiple of 5 will be processed.

The reason for this is to make it possible to reduce CPU-load if split-second motion detection is not required.

?Set Motion Frame Skip

<From Google> There is a new setting "Motion Frame Skip" under the "Misc" tab in the Monitor settings where you configure the value.
The default value is 0, so the feature is not active by default.
The feature works like Frame Skip. That is, if the value is "5" every 6th image will be analyzed for motion. The reference image is updated every frame to not change the effect of current blend settings.





14. Building the zoneminder deb

*Install the necessary packages for building

Code: Select all

 sudo apt-get install ccache
 sudo apt-get install build-essential fakeroot devscripts
*Install the dependencies for building

Code: Select all

sudo apt-get install libnetpbm10-dev 
sudo apt-get install libvlccore-dev
sudo apt-get install libgcrypt11-dev
sudo apt-get install libssl-dev

sudo apt-get build-dep zoneminder
*Clone the repo

Code: Select all

git clone https://github.com/cmisip/ZoneMinder.git
cd ZoneMinder
Latest work is now merged in the fork's master branch. It uses the mmal decoder natively now, so there is no need for ffmpeg's h264_mmal decoder. The motion vector code is now integrated into the video capture module. I use passthrough mode. Some of the analysis processing has been moved to the capture side to reduce zma's latency: the capture process zmc now computes alarm_frames and sends the value to zma, and zmc preemptively encodes jpegs using the mmal jpeg encoder and sends them to zma via shared memory when zmc thinks an alarm will be triggered in zma.

To make a 1920x1080 rtsp stream usable with a 150-slot ringbuffer (to prevent buffer overruns), an intermediate downscale step is introduced. The steps are:

1. Rtsp packets are decoded by MMAL_COMPONENT_DEFAULT_VIDEO_DECODER at full resolution.
2. vc.ril.isp downscales to 640x360 and creates the RGB buffer.
3. MMAL_COMPONENT_DEFAULT_VIDEO_ENCODER processes the downscaled video for motion vectors.
4. MMAL_COMPONENT_DEFAULT_IMAGE_ENCODER converts the downscaled frames into jpegs.

The video capture module saves the original decoded packets at full resolution into an mp4 file.
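
Drawn in the same style as the earlier branch diagrams, my reading of that chain (the exact fan-out of buffers between the two encoders is my assumption):

Code: Select all

rtsp --> MMAL video decoder (full res) --> vc.ril.isp (640x360, RGB)
            |                                 |--> MMAL video encoder --> motion vectors
            |                                 `--> MMAL image encoder --> jpegs
            `--> original packets --> mp4 file (passthrough)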

Due to the changes above, there are some UI issues that have not been fixed yet. The zones page on the web UI displays the downscaled frame but the zone polygon is displayed at full resolution. Setting the video resolution to 640x360 fixes the UI issue but adjusting the zone polygon may cause a segfault. I think the web interface is communicating with zoneminder via database and so does not know of the adjustments to resolution and polygon coordinates. Text annotation on jpeg does not work on the capture side jpegs. The jpeg display in the Event window now includes pixel arrows to show the direction and clustering of motion vectors which would be useful in developing a scoring system. It is turned on in the code so performance will suffer a little.

Code: Select all

git checkout master

Code: Select all

cp -rf distros/debian ./
* Download the web api

Code: Select all

git submodule update --init --recursive
* Edit /etc/apt/sources.list and uncomment the deb-src line

Code: Select all

sudo apt-get update
* Make changes to the source.
* Build the package.

Code: Select all

cd cmake
cmake ../
cd ..
debuild --prepend-path=/usr/lib/ccache -us -uc -b
Issues:

This is about 4000 commits behind zoneminder master so it will look and work differently than the most recent builds.

1. Frames jumping back in time or skipping. This might be due to an overburdened system where the capture and analysis components are being restarted by zoneminder. If you can avoid buffer overruns, this should be minimized. It doesn't matter if you use the video capture module, as the event display will play back the mp4 at full resolution.

2. Alignment trap might show up here and there.

3. I have done all this work natively on ARM, so it probably won't compile on x86.

4. Software motion vector code has been removed for now.

5. The resolution you put in the monitor setup screen may not matter, because the video resolution is autodetected by avcodec and will be downscaled to 640x360 internally anyway. The recorded jpegs will be 640x360, but the recorded mp4 video will be at full resolution.

Here is a speed test with one rtsp stream at 640x360, with buffer visualization turned on, from a 15 fps source.

Code: Select all


PIDSTAT zmc : Reporting average of 12 readings at 5 seconds intervals

ZMC Process ID : 27301 ==> ( 15.40 + 14.60 + 14.00 + 14.20 + 13.40 + 12.80 + 13.20 + 13.20 + 12.20 + 13.00 + 12.80 + 13.60 ) / 12 
AVERAGE : 13.53

PIDSTAT zma : Reporting average of 12 readings at 5 seconds intervals

ZMA Process ID : 27305 ==> ( 5.40 + 1.40 + 0.20 + 0.00 + 1.40 + 2.80 + 4.20 + 1.60 + 2.40 + 0.40 + 0.00 + 3.20 ) / 12 
AVERAGE : 1.91


And the output of top:

Code: Select all

Tasks: 128 total,   2 running,  82 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.3 us,  4.2 sy,  0.0 ni, 80.8 id, 11.6 wa,  0.0 hi,  0.1 si,  0.0 st
KiB Mem :   604356 total,    41972 free,   155468 used,   406916 buff/cache
KiB Swap:   102396 total,    52220 free,    50176 used.   345532 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                  
27301 www-data  20   0  285124  51852  43556 R  13.8  8.6   0:28.61 zmc                                                                                      
27305 www-data  20   0  237468  45552  38584 S   3.0  7.5   0:04.64 zma                                                                                      
27372 pi        20   0    8448   3084   2560 R   1.6  0.5   0:00.20 top                                                                                      
  754 mysql     20   0  639116  43456   9096 S   1.3  7.2   2:15.32 mysqld       
     
Please try at your own risk. I am not responsible for wasted time, effort or hardware. I have only done very limited testing. It might crash in the morning, or on rainy days, or every day.

If there is an expert here with regards to figuring out memory issues, I would appreciate some assistance with the code. Have Fun.

Thanks,
Chris

Re: Raspberry pi3 and h264_mmal

Posted: Thu Aug 16, 2018 3:25 am
by snake
Thanks for posting the guide. With that I was brave enough to try installing this myself. Though I used 1.31.44, I was successful. It looks like hwaccel will be in the next ZM release.

Code: Select all

https://notabug.org/monitor/ZM_rpi3_HWACCEL_Testing
My full install steps and testing notes are here, in case anyone else wants to try this on an rpi before the next release, but here's a rundown of what I found.
  • ZM 1.31.44 has HWACCEL if libavcodec-dev includes mmal or another hwaccel (it is not used via a binary ffmpeg or anything of that sort, though the ffmpeg binary can use mmal for encoding afterwards)
  • HWACCEL can be verified to be working by setting debugging on zmc_m#
  • The HWACCEL calls are in zm_ffmpeg_camera.cpp see code for the debug string you are looking for
  • Only SD resolution worked for me. I was unable to get 1280x720. 640x480 works, and works as well as MJPEG in terms of CPU.

I'm sure there is a limit to how many cameras and what FPS this can do, so I'll be doing some more testing and will update the repo. Overall, I don't think the RPI is a strong ARM board (HD is not working for me). What was posted here (viewtopic.php?f=36&t=27020) is likely better, but the RPI has a niche use for some number of SD cameras, and they are common and cheap.

I might try adding an RPI to my existing server setup as an accessory server. Overall, I've found ARM boards to be untrustworthy when it comes to real processing grunt, and the RPI follows this trend, but I can't speak for the ODroid boards linked above.