Hardware acceleration (GPU) on Jetson Nano?

Forum for questions and support relating to 1.33.x development only.
Locked
tsopic
Posts: 1
Joined: Sat Nov 16, 2019 9:24 pm

Hardware acceleration (GPU) on Jetson Nano?

Post by tsopic »

Has anyone been able to successfully configure Jetson Nano to use GPU with Zoneminder?

So far I've tried compiling FFMPEG from this Github repo https://github.com/jocover/jetson-ffmpeg.
Not even sure if I did everything correctly.

If someone has been able to successfully use GPU acceleration with NANO, would appreciate settings info and steps to get it working.
Magic919
Posts: 1381
Joined: Wed Sep 18, 2013 6:56 am

Re: Hardware acceleration (GPU) on Jetson Nano?

Post by Magic919 »

I was doing a search for Zoneminder and Jetson, which led here. Did you progress it?

I think you’d want ffmpeg to use the hardware decoder and encoder rather than the obvious answer to use the GPU. It looks like there is support for that now.
-
tomcat84
Posts: 39
Joined: Fri Jul 03, 2020 11:24 pm

Re: Hardware acceleration (GPU) on Jetson Nano?

Post by tomcat84 »

I would also be interested in how ZoneMinder with ffmpeg can use the Jetson hardware for accelerated encoding and decoding.
Does anyone have further information about it?
I'm thinking of getting a Jetson Xavier NX or Jetson AGX Xavier for object recognition and other fun stuff, but I would also use it for running ZoneMinder with some 4K and Super HD cameras. It would be a really nice power-saving option if it worked :)
[Edit]
Btw, just as an idea: what about using https://developer.nvidia.com/deepstream-sdk as a "man in the middle" to analyze the sub stream and trigger a main-stream alarm when a detection happens?
SteveGilvarry
Posts: 494
Joined: Sun Jun 29, 2014 1:12 pm
Location: Melbourne, AU

Re: Hardware acceleration (GPU) on Jetson Nano?

Post by SteveGilvarry »

This could present issues with using DeepStream.
"DeepStream is a closed source SDK. Note that source for all reference applications and several plugins are available."
Production Zoneminder 1.37.x (Living dangerously)
Random Selection of Cameras (Dahua and Hikvision)
Magic919
Posts: 1381
Joined: Wed Sep 18, 2013 6:56 am

Re: Hardware acceleration (GPU) on Jetson Nano?

Post by Magic919 »

Jetson AGX Xavier and a suitable drive would be fine for running ZM. Add on the zmeventnotification server to do some object or face recognition and you'd be sorted. Performance mode is only about 30 watts. I don't know how many high resolution cameras it could manage. You'd have to compare with a decent HP G10 Microserver for comparable cost (maybe up to 7x the power consumption).
-
tomcat84
Posts: 39
Joined: Fri Jul 03, 2020 11:24 pm

Re: Hardware acceleration (GPU) on Jetson Nano?

Post by tomcat84 »

SteveGilvarry wrote: Sun Aug 09, 2020 7:00 am This could present issues with using DeepStream.
"DeepStream is a closed source SDK. Note that source for all reference applications and several plugins are available."
But it can communicate over MQTT and work with Apache Kafka, so there should be a way to get this running :lol:
For example, using https://www.home-assistant.io/integrati ... che_kafka/ in the middle.
Magic919 wrote: Mon Aug 10, 2020 2:15 pm Jetson AGX Xavier and a suitable drive would be fine for running ZM. Add on the zmeventnotification server to do some object or face recognition and you'd be sorted. Performance mode is only about 30 watts. I don't know how many high resolution cameras it could manage. You'd have to compare with a decent HP G10 Microserver for comparable cost (maybe up to 7x the power consumption).
encode
4x 4K at 60 fps (HEVC)
16x 1080p at 60 fps (HEVC)
32x 1080p at 30 fps (HEVC)

decode
2x 8K at 30 fps (HEVC)
6x 4K at 60 fps (HEVC)
26x 1080p at 60 fps (HEVC)
72x 1080p at 30 fps (HEVC)

If hardware acceleration worked, it should be able to handle more than I need :lol:
Using the zmeventnotification server will be my first try for sure, since it's not that complicated :oops:
Magic919
Posts: 1381
Joined: Wed Sep 18, 2013 6:56 am

Re: Hardware acceleration (GPU) on Jetson Nano?

Post by Magic919 »

Not sure the relevance of the HEVC stats.

Good luck with it and let us know how it works out.
-
tomcat84
Posts: 39
Joined: Fri Jul 03, 2020 11:24 pm

Re: Hardware acceleration (GPU) on Jetson Nano?

Post by tomcat84 »

I will :)
tomcat84
Posts: 39
Joined: Fri Jul 03, 2020 11:24 pm

Re: Hardware acceleration (GPU) on Jetson Nano?

Post by tomcat84 »

Short update.
I'm running ZoneMinder with six 4K cameras and one WQHD camera, all at 10 fps with H264. On Nodect or Record (passthrough) it needs 40-50% CPU per stream. With H264+ it's ~30% less, but then you have this problem: viewtopic.php?f=42&t=29889
With H265 it's 80-90% CPU. I was not able to get hardware acceleration for ffmpeg working, so I have to deal with running at 10 fps on the CPU. Not nice, but it works...

About object detection:
I'm able to run object detection on the live feed with OpenCV and YOLOv4 in Python at 0.5 fps on a 1080p stream with ~15-25% CPU and very low GPU load (GPU hardware acceleration is on). Detection quality is very good.
From here I could trigger a ZoneMinder alarm on object detection, for example.
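The "trigger a ZoneMinder alarm" step can be sketched against ZoneMinder's zmtrigger interface, which accepts pipe-delimited messages (id|action|score|cause|text|showtext) over TCP, port 6802 by default with OPT_TRIGGERS enabled. A minimal sketch, assuming monitor ID 1; the helper names are my own:

```python
import socket

def build_trigger(monitor_id, label, score, duration=20):
    """Build a zmtrigger message: id|action|score|cause|text|showtext.
    'on+20' forces the alarm on for 20 seconds, then releases it."""
    return f"{monitor_id}|on+{duration}|{round(score * 100)}|{label}|{label} detected|{label}"

def send_trigger(msg, host="localhost", port=6802):
    """Send one trigger line to zmtrigger (needs a running ZoneMinder
    with OPT_TRIGGERS enabled -- not called in this sketch)."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall((msg + "\n").encode())

# A hypothetical "person" detection with 87% confidence on monitor 1:
print(build_trigger(1, "person", 0.87))
# 1|on+20|87|person|person detected|person
```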

Now I'm testing DeepStream. It has been mentioned that it's closed source, but there are a lot of open source code examples, also in Python.
--> https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
It's just a little bit slower than the C code, for which open source examples are also available.
I'm able to perform object detection on two 1080p streams with the default resnet10 model at ~10% CPU and very low GPU load with a Python example.
From here I could also trigger ZoneMinder very easily when a person or vehicle detection happens, based on the example code.

What I'm saying is that there is enough open source code to put DeepStream in front of ZoneMinder for real-time object detection and trigger recording based on detections. There are also examples that push detections to an AMQP broker or Kafka for statistics and, with some modification, also for triggering ZoneMinder...
--> https://github.com/NVIDIA-AI-IOT/deepst ... ream-test4
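As a rough illustration of the triggering side: a detection message consumed from Kafka/AMQP could be mapped to ZoneMinder's REST API, which has a per-monitor alarm endpoint (command:on / off / status). The host, the event field names and the monitor mapping below are placeholder assumptions, and authentication is omitted:

```python
def zm_alarm_url(base, monitor_id, command):
    """Build the ZM API URL that forces/clears an alarm on one monitor.
    Valid commands per the ZM API docs: on, off, status."""
    return f"{base}/api/monitors/alarm/id:{monitor_id}/command:{command}.json"

def handle_detection(event, base="http://zmhost/zm"):
    """Map a DeepStream-style detection message to a ZM alarm URL.
    'event' is a dict like {"camera": 2, "label": "person"} -- these
    field names are invented for illustration."""
    if event.get("label") in ("person", "vehicle"):
        return zm_alarm_url(base, event["camera"], "on")
    return None  # ignore labels we don't care about

print(handle_detection({"camera": 2, "label": "person"}))
# http://zmhost/zm/api/monitors/alarm/id:2/command:on.json
```

An HTTP GET on that URL (with a valid session or token) is what actually raises the alarm.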

And look at this crazy stuff: https://github.com/toolboc/Intelligent- ... soft-Azure

I'm very new to Python and so on, so my programming skills are very limited, but there is huge potential here, even with the lower-priced models.
Even the Nano should be able to do just object detection on multiple sub streams and trigger ZoneMinder.

Some tests with mlapi are shown over here: https://github.com/pliablepixels/mlapi/issues/18

If anyone with more programming skills is interested in getting involved, they are welcome.
saverio
Posts: 1
Joined: Wed Dec 30, 2020 2:31 pm

Re: Hardware acceleration (GPU) on Jetson Nano?

Post by saverio »

Hi,
I built jetson-ffmpeg and I'm now able to use hardware acceleration for encoding and decoding, but only at the command line:
the following works very well, and during the conversion I can see no CPU load and the NVENC and NVDEC HW engines working on the Jetson:

Code: Select all

ffmpeg -c:v h264_nvmpi -i in.mp4 -c:v hevc_nvmpi out.mp4
Then I set FFMPEG_INPUT_OPTIONS and FFMPEG_OUTPUT_OPTIONS in the OPTIONS|IMAGES panel:

Code: Select all

FFMPEG_INPUT_OPTIONS: -c:v h264_nvmpi
FFMPEG_OUTPUT_OPTIONS: -c hevc_nvmpi, -r 25
but nothing happens. It seems ZM ignores these settings.
Any idea?
tomcat84
Posts: 39
Joined: Fri Jul 03, 2020 11:24 pm

Re: Hardware acceleration (GPU) on Jetson Nano?

Post by tomcat84 »

saverio wrote: Wed Dec 30, 2020 2:39 pm Hi,
I built jetson-ffmpeg and I'm now able to use hardware acceleration for encoding and decoding, but only at the command line:
the following works very well, and during the conversion I can see no CPU load and the NVENC and NVDEC HW engines working on the Jetson:

Code: Select all

ffmpeg -c:v h264_nvmpi -i in.mp4 -c:v hevc_nvmpi out.mp4
Then I set FFMPEG_INPUT_OPTIONS and FFMPEG_OUTPUT_OPTIONS in the OPTIONS|IMAGES panel:

Code: Select all

FFMPEG_INPUT_OPTIONS: -c:v h264_nvmpi
FFMPEG_OUTPUT_OPTIONS: -c hevc_nvmpi, -r 25
but nothing happens. It seems ZM ignores these settings.
Any idea?
That's where I ended up too.

So, a short update:
For now I have six 4K cameras and one WQHD @ 10 fps. CPU usage depends on the bitrate I set. With the Hikvision 4K cameras at 6144 Kbps @ H264 I get a CPU load of around 50-60%. Not what I wanted, but it's working.
I used https://github.com/jkjung-avt/tensorrt_demos to do the object detection because it was the fastest way to get running. It uses the hardware acceleration to do object detection. I added some manual filtering options to reduce false positives.
With yolov4-608 I get ~16 fps on one camera or 2.2 fps on seven cameras. That's enough to detect everything I want to :). If someone enters the cameras' view it is detected in real time, even if only the feet are visible. That's very impressive.
The cameras in ZoneMinder are set to Nodect. If an object is detected, it triggers an event using the ZoneMinder API, setting the description to what was detected, and it can inform me about it using Home Assistant. There are many ways to do this :).
There is only a problem with the descriptions: viewtopic.php?f=42&t=30028&p=117913#p117913
It's been running for some weeks now without any problems and is 99.999% reliable.
False alarms are nearly zero. To control it with a GUI I connected it via MQTT to Home Assistant.
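The MQTT link to Home Assistant can be as simple as publishing one JSON payload per detection, which HA's MQTT integration can then consume. The topic layout and field names below are invented for illustration; paho-mqtt would do the actual publish:

```python
import json

def detection_payload(camera, label, confidence):
    """JSON payload published per detection. The field names are made up
    here -- match them to whatever your HA MQTT sensor expects."""
    return json.dumps(
        {"camera": camera, "label": label, "confidence": round(confidence, 2)},
        sort_keys=True,
    )

# With paho-mqtt the publish would look roughly like:
#   client.publish("zoneminder/detections/front", detection_payload(...))
print(detection_payload("front", "person", 0.913))
# {"camera": "front", "confidence": 0.91, "label": "person"}
```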

There are ways to optimise it and reduce the GPU load and power consumption, but for this I would need to switch from OpenCV & TensorRT to DeepStream.
I don't have the time for this, and the project fits all my needs for the moment :mrgreen: It takes me less than 5 minutes to check all daily events. Most of the time I use zmNinja. Great app :)

I know there are cheaper ways to do such a project using the zmeventnotification server and mlapi, but not with real-time alarms based on the view of 7 cameras :twisted: