ES 6.1.5 (object detection not working)

Forum for questions and support relating to the 1.34.x releases only.
repomanz
Posts: 8
Joined: Wed Jan 06, 2021 4:11 pm

ES 6.1.5 (object detection not working)

Post by repomanz »

Hello - I updated to ES 6.1.5 and am struggling with object detection. This was working fine prior to 6.1.5, and I understand the structure of objectconfig.ini has changed, but I believe I've made the correct updates leveraging ml_sequence.

For object detection I'm using the object model sequence with the yolo4 model.

I had this working before ES 6.1.5 across 3 monitors. The monitors are picking up events and recording, but object detection does not seem to work yet. I assume it's something with the new objectconfig.ini. Does anyone see something obvious I've goofed here?

Code: Select all

# Configuration file for object detection

# NOTE: ALL parameters here can be overridden
# on a per monitor basis if you want. Just
# duplicate it inside the correct [monitor-<num>] section

# You can create your own custom attributes in the [custom] section

[general]

# Please don't change this. It is used by the config upgrade script
version=1.2

# You can now limit the # of detection process
# per target processor. If not specified, default is 1
# Other detection processes will wait to acquire lock

cpu_max_processes=3
tpu_max_processes=1
gpu_max_processes=1

# Time to wait in seconds per processor to be free, before
# erroring out. Default is 120 (2 mins)
cpu_max_lock_wait=100
tpu_max_lock_wait=100
gpu_max_lock_wait=100


#pyzm_overrides={'conf_path':'/etc/zm','log_level_debug':0}
pyzm_overrides={'log_level_debug':5}

# This is an optional file
# If specified, you can specify tokens with secret values in that file
# and only refer to the tokens in your main config file
secrets = /etc/zm/secrets.ini

# portal/user/password are needed if you plan on using ZM's legacy
# auth mechanism to get images
portal=!ZM_PORTAL
user=!ZM_USER
password=!ZM_PASSWORD

# api portal is needed if you plan to use tokens to get images
# requires ZM 1.33 or above
api_portal=!ZM_API_PORTAL

allow_self_signed=yes
# if yes, last detection will be stored for monitors
# and bounding boxes that match, along with labels
# will be discarded for new detections. This may be helpful
# in getting rid of static objects that get detected
# due to some motion.
match_past_detections=no
# The max difference in area between the objects if match_past_detections is on
# can also be specified in px like 300px. Default is 5%. Basically, bounding boxes of the same
# object can differ ever so slightly between detections. Contributor u/neillbell put in this PR
# to calculate the difference in areas and based on his tests, 5% worked well. YMMV. Change it if needed.
past_det_max_diff_area=5%

max_detection_size=90%

# sequence of models to run for detection
detection_sequence=object,face,alpr
# if all, then we will loop through all models
# if first then the first success will break out
detection_mode=object

# If you need basic auth to access ZM
#basic_user=user
#basic_password=password

# base data path for various files the ES+OD needs
# we support in-config variable substitution as well
base_data_path=/var/lib/zmeventnotification

# global settings for
# bestmatch, alarm, snapshot OR a specific frame ID
frame_id=bestmatch

# this is the width (in px) to resize the image to before analysis is done
resize=800
# set to yes, if you want to remove images after analysis
# setting to yes is recommended to avoid filling up space
# keep to no while debugging/inspecting masks
# Note this does NOT delete debug images later
delete_after_analyze=yes

# If yes, will write an image called <filename>-bbox.jpg as well
# which contains the bounding boxes. This has NO relation to
# write_image_to_zm
# Typically, if you enable delete_after_analyze you may
# also want to set  write_debug_image to no.
write_debug_image=no

# if yes, will write an image with bounding boxes
# this needs to be yes to be able to write a bounding box
# image to ZoneMinder that is visible from its console
write_image_to_zm=yes

# Adds percentage to detections
# hog/face shows 100% always
show_percent=yes

# color to be used to draw the polygons you specified
poly_color=(255,255,255)
poly_thickness=2
#import_zm_zones=yes
only_triggered_zm_zones=no

# This section gives you an option to get brief animations
# of the event, delivered as part of the push notification to mobile devices
# Animations are created only if an object is detected
#
# NOTE: This will DELAY the time taken to send you push notifications
# It will first try to create the animation, which may take up to a minute
# depending on how soon it gets access to frames. See notes below

[animation]

# If yes, object detection will attempt to create
# a short GIF file around the object detection frame
# that can be sent via push notifications for instant playback
# Note this requires additional software support. Default:no
create_animation=no

# Format of animation burst
# valid options are "mp4", "gif", "mp4,gif"
# Note that gifs will be of a shorter duration
# as they take up much more disk space than mp4
animation_types='mp4,gif'

# default width of animation image. Be cautious when you increase this
# most mobile platforms give a very brief amount of time (in seconds)
# to download the image.
# Given your ZM instance will be serving the image, it will anyway be slow
# Making the total animation size bigger resulted in the notification not
# getting an image at all (timed out)
animation_width=640

# When an event is detected, ZM writes frames a little late
# On top of that, it looks like with caching enabled, the API layer doesn't
# get access to DB records for much longer (around 30 seconds), at least on my
# system. animation_retry_sleep refers to how long to wait before trying to grab
# frame information if it failed. animation_max_tries defines how many times it
# will try and retrieve frames before it gives up
animation_retry_sleep=15
animation_max_tries=4

# if animation_types includes gif, then we can generate a fast preview gif:
# every second frame is skipped and the frame rate doubled
# to give a quick preview. Default (no)
fast_gif=no

[remote]
# You can now run the machine learning code on a different server
# This frees up your ZM server for other things
# To do this, you need to set up https://github.com/pliablepixels/mlapi
# on your desired server and configure it with a user. See its instructions
# once set up, you can choose to do object/face recognition via that
# external server

# URL that will be used
#ml_gateway=http://192.168.1.183:5000/api/v1
#ml_gateway=http://10.6.1.13:5000/api/v1
#ml_gateway=http://192.168.1.21:5000/api/v1
#ml_gateway=http://10.9.0.2:5000/api/v1
#ml_fallback_local=yes
# API/password for remote gateway
ml_user=!ML_USER
ml_password=!ML_PASSWORD


# config for object
[object]

# If you are using legacy format (use_sequence=no) then these parameters will
# be used during ML inferencing
object_detection_pattern=(person|car|motorbike|bus|truck|boat)
object_min_confidence=0.3
object_framework=coral_edgetpu
object_processor=tpu
object_weights={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names

# If you are using the new ml_sequence format (use_sequence=yes) then
# you can fiddle with these parameters and look at ml_sequence later
# Note that these can be named anything. You can add custom variables, ad-infinitum
# Google Coral
tpu_object_weights={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
tpu_object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names
tpu_object_framework=coral_edgetpu
tpu_object_processor=tpu
tpu_min_confidence=0.6

# Yolo v4 on GPU (falls back to CPU if no GPU)
yolo4_object_weights={{base_data_path}}/models/yolov4/yolov4.weights
yolo4_object_labels={{base_data_path}}/models/yolov4/coco.names
yolo4_object_config={{base_data_path}}/models/yolov4/yolov4.cfg
yolo4_object_framework=opencv
yolo4_object_processor=gpu

# Yolo v3 on GPU (falls back to CPU if no GPU)
yolo3_object_weights={{base_data_path}}/models/yolov3/yolov3.weights
yolo3_object_labels={{base_data_path}}/models/yolov3/coco.names
yolo3_object_config={{base_data_path}}/models/yolov3/yolov3.cfg
yolo3_object_framework=opencv
yolo3_object_processor=gpu

# Tiny Yolo V4 on GPU (falls back to CPU if no GPU)
tinyyolo_object_config={{base_data_path}}/models/tinyyolov4/yolov4-tiny.cfg
tinyyolo_object_weights={{base_data_path}}/models/tinyyolov4/yolov4-tiny.weights
tinyyolo_object_labels={{base_data_path}}/models/tinyyolov4/coco.names
tinyyolo_object_framework=opencv
tinyyolo_object_processor=gpu


[face]
face_detection_pattern=.*
known_images_path={{base_data_path}}/known_faces
unknown_images_path={{base_data_path}}/unknown_faces
save_unknown_faces=yes
save_unknown_faces_leeway_pixels=100
face_detection_framework=dlib

# read https://github.com/ageitgey/face_recognition/wiki/Face-Recognition-Accuracy-Problems
# read https://github.com/ageitgey/face_recognition#automatically-find-all-the-faces-in-an-image
# and play around

# quick overview:
# num_jitters is how many times to distort images
# upsample_times is how many times to upsample input images (for small faces, for example)
# model can be hog or cnn. cnn may be more accurate, but I haven't found it to be
face_num_jitters=1
face_model=cnn
face_upsample_times=1

# This is the maximum distance of the face under test to the closest matched
# face cluster. The larger this distance, the larger the chances of misclassification.
#
face_recog_dist_threshold=0.6
# When we are first training the face recognition model with known faces,
# by default we use hog because we assume you will supply well lit, front facing faces
# However, if you are planning to train with profile photos or hard to see faces, you
# may want to change this to cnn. Note that this increases training time, but training only
# happens once, unless you retrain again by removing the training model
face_train_model=cnn
#if a face doesn't match known names, we will detect it as 'unknown face'
# you can change that to something that suits your personality better ;-)
#unknown_face_name=invader

[alpr]
alpr_detection_pattern=.*
alpr_use_after_detection_only=yes
# Many of the ALPR providers offer both a cloud version
# and local SDK version. Sometimes local SDK format differs from
# the cloud instance. Set this to local or cloud. Default cloud
alpr_api_type=cloud

# -----| If you are using plate recognizer | ------
alpr_service=plate_recognizer
#alpr_service=open_alpr_cmdline

# If you want to host a local SDK https://app.platerecognizer.com/sdk/
#alpr_url=http://192.168.1.21:8080/alpr
# Plate recog replace with your api key
alpr_key=!PLATEREC_ALPR_KEY
# if yes, then it will log usage statistics of the ALPR service
platerec_stats=yes
# If you want to specify regions. See http://docs.platerecognizer.com/#regions-supported
#platerec_regions=['us','cn','kr']
# minimal confidence for actually detecting a plate
platerec_min_dscore=0.1
# minimal confidence for the translated text
platerec_min_score=0.2


# ----| If you are using openALPR |-----
#alpr_service=open_alpr
#alpr_key=!OPENALPR_ALPR_KEY

# For an explanation of params, see http://doc.openalpr.com/api/?api=cloudapi
#openalpr_recognize_vehicle=1
#openalpr_country=us
#openalpr_state=ca
# openalpr returns percents, but we convert to between 0 and 1
#openalpr_min_confidence=0.3

# ----| If you are using openALPR command line |-----

openalpr_cmdline_binary=alpr

# Do an alpr -help to see options, plug them in here
# like say '-j -p ca -c US' etc.
# keep the -j because its JSON

# Note that alpr_pattern is honored
# For the rest, just stuff them in the cmd line options

openalpr_cmdline_params=-j -d
openalpr_cmdline_min_confidence=0.3


## Monitor specific settings


# Examples:
# Let's assume your monitor ID is 999
[monitor-999]
# my driveway
match_past_detections=no
wait=5
object_detection_pattern=(person)

# Advanced example - here we want anything except potted plant
# exclusion in regular expressions is not
# as straightforward as you may think, so
# follow this pattern
# object_detection_pattern = ^(?!object1|object2|objectN)
# the characters in front implement what is
# called a negative look ahead

# object_detection_pattern=^(?!potted plant|pottedplant|bench|broccoli)
#alpr_detection_pattern=^(.*x11)
#delete_after_analyze=no
#detection_pattern=.*
#import_zm_zones=yes

# polygon areas where object detection will be done.
# You can name them anything except the keywords defined in the optional
# params below. You can put as many polygons as you want per [monitor-<mid>]
# (see examples).

my_driveway=306,356 1003,341 1074,683 154,715

# You are now allowed to specify detection pattern per zone
# the format is <polygonname>_zone_detection_pattern=<regexp>
# So if your polygon is called my_driveway, its associated
# detection pattern will be my_driveway_zone_detection_pattern
# If none is specified, the value in object_detection_pattern
# will be used
# This also applies to ZM zones. Let's assume you have
# import_zm_zones=yes and let's suppose you have a zone in ZM
# called Front_Door. In that case, all you need to do is put in a
# front_door_zone_detection_pattern=(person|car) here
#
# NOTE: ZM Zones are converted to lowercase, and spaces are replaced
# with underscores

my_driveway_zone_detection_pattern=(person)
some_other_area=0,0 200,300 700,900
# use license plate recognition for my driveway
# see alpr section later for more data needed
resize=no
detection_sequence=object,alpr


[ml]
# When enabled, you can specify complex ML inferencing logic in ml_sequence
# Anything specified in ml_sequence will override any other ml attributes

# Also, when enabled, stream_sequence will override any other frame related
# attributes
use_sequence = yes

# if enabled, will not grab exclusive locks before running inferencing
# locking seems to cause issues on some unique file systems
disable_locks= no

# Chain of frames
# See https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#understanding-detection-configuration
# Also see https://pyzm.readthedocs.io/en/latest/source/pyzm.html#pyzm.ml.detect_sequence.DetectSequence.detect_stream
# Very important: Make sure final ending brace is indented
stream_sequence = {
        'frame_strategy': 'most_models',
        'frame_set': 'snapshot,alarm',
        'contig_frames_before_error': 5,
        'max_attempts': 3,
        'sleep_between_attempts': 4,
        'resize':800
    }

# Chain of ML models to use
# See https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#understanding-detection-configuration
# Also see https://pyzm.readthedocs.io/en/latest/source/pyzm.html#pyzm.ml.detect_sequence.DetectSequence
# Very important: Make sure final ending brace is indented

ml_sequence= {
                'general': {
                        'model_sequence': 'object'
                },
                'object': {
                        'general':{
                                'pattern':'(person|dog|cat)',
                                'same_model_sequence_strategy': 'first' # also 'most', 'most_unique's
                        },
                        'sequence': [{
                                'object_config':'{{base_data_path}}/models/yolov4/yolov4.cfg',
                                'object_weights':'{{base_data_path}}/models/yolov4/yolov4.weights',
                                'object_labels': '{{base_data_path}}/models/yolov4/coco.names',
                                'object_min_confidence': 0.65,
                                'object_framework':'opencv',
                                'object_processor': 'gpu'
                        }]
                }
        }
asker
Posts: 1553
Joined: Sun Mar 01, 2015 12:12 pm

Re: ES 6.1.5 (object detection not working)

Post by asker »

repomanz wrote: Wed Jan 06, 2021 4:16 pm Hello - I updated to ES 6.1.5 and am struggling with object detection. This was working fine prior to 6.1.5, and I understand the structure of objectconfig.ini has changed, but I believe I've made the correct updates leveraging ml_sequence.

For object detection I'm using the object model sequence with the yolo4 model.

I had this working before ES 6.1.5 across 3 monitors. The monitors are picking up events and recording, but object detection does not seem to work yet. I assume it's something with the new objectconfig.ini. Does anyone see something obvious I've goofed here?
At a high level, looks ok. What does 'not working' mean?
Please post debug (not INF) logs on a run and point out what is not working
I no longer work on zmNinja, zmeventnotification, pyzm or mlapi. I may respond on occasion based on my available time/interest.

Please read before posting:
How to set up logging properly
How to troubleshoot and report - ES
How to troubleshoot and report - zmNinja
ES docs
zmNinja docs
Magic919
Posts: 1381
Joined: Wed Sep 18, 2013 6:56 am

Re: ES 6.1.5 (object detection not working)

Post by Magic919 »

It's worth running the new setup against an older alarm where it worked and capturing the debug output.
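Something along these lines (event/monitor IDs are placeholders; paths are the standard install locations used elsewhere in this thread):

Code: Select all

# sketch: re-run detection against an existing event and keep the debug output
sudo -u www-data /var/lib/zmeventnotification/bin/zm_detect.py \
    --config /etc/zm/objectconfig.ini --eventid <old_event_id> --monitorid <monitor_id> --debug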
-
repomanz
Posts: 8
Joined: Wed Jan 06, 2021 4:11 pm

Re: ES 6.1.5 (object detection not working)

Post by repomanz »

asker wrote: Wed Jan 06, 2021 5:51 pm At a high level, looks ok. What does 'not working' mean?
Please post debug (not INF) logs on a run and point out what is not working
In the INF log, I see the event happening, but then this:

Code: Select all

2021-01-06 11:35:15	zmeventnotification		8080	INF	|----> FORK:garage-low-res (1), eid:123 Not sending event end alarm, as we did not send a start alarm for this, or start hook processing failed	zmeventnotification.pl	
2021-01-06 11:35:11	zmeventnotification		8080	INF	|----> FORK:garage-low-res (1), eid:123 Event 123 for Monitor 1 has finished	zmeventnotification.pl	
2021-01-06 11:35:05	zmeventnotification		8080	INF	|----> FORK:garage-low-res (1), eid:123 Not sending over MQTT as notify filters are on_success:all and on_fail:none	zmeventnotification.pl	
2021-01-06 11:35:05	zmeventnotification		8080	INF	|----> FORK:garage-low-res (1), eid:123 Not sending push over API as it is not allowed for event_start	zmeventnotification.pl	
2021-01-06 11:35:01	zmeventnotification		1020	INF	PARENT: New event 123 reported for Monitor:1 (Name:garage-low-res) Motion driveway[last processed eid:]
Which logs would you like me to provide? I'd be happy to enable them.
repomanz
Posts: 8
Joined: Wed Jan 06, 2021 4:11 pm

Re: ES 6.1.5 (object detection not working)

Post by repomanz »

Going through the documentation again, I think I found the issue: python3 can't import cv2.

Code: Select all

root@43f7cbf04308:/# sudo -u www-data /var/lib/zmeventnotification/bin/zm_detect.py --config /etc/zm/objectconfig.ini  --eventid 152 --monitorid 1 --debug
Traceback (most recent call last):
  File "/var/lib/zmeventnotification/bin/zm_detect.py", line 15, in <module>
    import imutils
  File "/usr/local/lib/python3.8/dist-packages/imutils/__init__.py", line 8, in <module>
    from .convenience import translate
  File "/usr/local/lib/python3.8/dist-packages/imutils/convenience.py", line 6, in <module>
    import cv2
ModuleNotFoundError: No module named 'cv2'
root@43f7cbf04308:/# python --version
bash: python: command not found
root@43f7cbf04308:/# python
bash: python: command not found
root@43f7cbf04308:/# pyhton3
bash: pyhton3: command not found

root@43f7cbf04308:/# python3
Python 3.8.5 (default, Jul 28 2020, 12:59:40) 
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'cv2'
repomanz
Posts: 8
Joined: Wed Jan 06, 2021 4:11 pm

Re: ES 6.1.5 (object detection not working)

Post by repomanz »

After going into the docker container and manually running this:

sudo pip3 install opencv-python

ES/ML is now working.
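For anyone else on the docker setup, this is roughly the sequence (the container name is a placeholder, so treat it as a sketch):

Code: Select all

# sketch: install cv2 inside the container and confirm the web user can import it
docker exec -it zoneminder /bin/bash        # "zoneminder" is a placeholder container name
pip3 install opencv-python                  # provides the cv2 module for python3
sudo -u www-data python3 -c "import cv2; print(cv2.__version__)"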
warcanoid
Posts: 35
Joined: Mon Feb 17, 2020 3:22 pm

Re: ES 6.1.5 (object detection not working)

Post by warcanoid »

I also have problems with object detection not working. Manual command:

Code: Select all

root@d39668805f47:/var/lib/zmeventnotification/bin# sudo -u www-data /var/lib/zmeventnotification/bin/zm_detect.py --config /etc/zm/objectconfig.ini  --eventid 1 --monitorid 1 --debug
01/07/21 10:01:39 zmesdetect_m1[2289] INF ZMLog.py:212 [Setting up signal handler for logs]

01/07/21 10:01:39 zmesdetect_m1[2289] INF zm_detect.py:240 [---------| pyzm version:0.3.25, hook version:6.0.5,  ES version:6.1.5 , OpenCV version:4.5.0|------------]

DBG1 [zmesdetect_m1] [secret filename: /etc/zm/secrets.ini]
DBG2 [zmesdetect_m1] [Secret token found in config: !ZM_PORTAL]
DBG2 [zmesdetect_m1] [Secret token found in config: !ZM_API_PORTAL]
DBG2 [zmesdetect_m1] [Secret token found in config: !ZM_USER]
DBG2 [zmesdetect_m1] [Secret token found in config: !ZM_PASSWORD]
DBG1 [zmesdetect_m1] [allowing self-signed certs to work...]
DBG2 [zmesdetect_m1] [[monitor-1] overrides key:detection_sequence with value:object]
DBG2 [zmesdetect_m1] [[monitor-1] overrides key:object_detection_pattern with value:(car|truck|bicycle|motorbike|bus|person|dog|cat)]
01/07/21 12:19:28 zmesdetect_m1[1091] INF zm_detect.py:265 [Importing local classes for Object/Face]

01/07/21 12:19:28 zmesdetect_m1[1091] INF zm_detect.py:288 [Connecting with ZM APIs]

DBG2 [zmesdetect_m1] [API SSL certificate check has been disbled]
DBG1 [zmesdetect_m1] [using username/password for login]
DBG2 [zmesdetect_m1] [Using new token API]
DBG1 [zmesdetect_m1] [Access token expires on:2021-01-07 14:19:29.061450 [7200s]]
DBG1 [zmesdetect_m1] [Refresh token expires on:2021-01-08 12:19:29.062382 [86400s]]
DBG3 [zmesdetect_m1] [No need to relogin as access token still has 119.99997331666667 minutes remaining]
DBG4 [zmesdetect_m1] [make_request called with url=https://gggggggfferI'}]
01/07/21 12:19:29 zmesdetect_m1[1091] FAT zm_detect.py:534 [Unrecoverable error:'ml_sequence' Traceback:Traceback (most recent call last):
  File "/var/lib/zmeventnotification/bin/zm_detect.py", line 531, in <module>
    main_handler()
  File "/var/lib/zmeventnotification/bin/zm_detect.py", line 295, in main_handler
    if g.config['ml_sequence'] and g.config['use_sequence'] == 'yes':
KeyError: 'ml_sequence'
]
Please help. I have been reinstalling for two days without any success. Everything was working before the update.
warcanoid
Posts: 35
Joined: Mon Feb 17, 2020 3:22 pm

Re: ES 6.1.5 (object detection not working)

Post by warcanoid »

This is my config:

Code: Select all

# Configuration file for object detection

# NOTE: ALL parameters here can be overridden
# on a per monitor basis if you want. Just
# duplicate it inside the correct [monitor-<num>] section

# You can create your own custom attributes in the [custom] section

[general]

# Please don't change this. It is used by the config upgrade script
version=1.2

# You can now limit the # of detection process
# per target processor. If not specified, default is 1
# Other detection processes will wait to acquire lock

cpu_max_processes=3
tpu_max_processes=1
gpu_max_processes=1

# Time to wait in seconds per processor to be free, before
# erroring out. Default is 120 (2 mins)
cpu_max_lock_wait=100
tpu_max_lock_wait=100
gpu_max_lock_wait=100


#pyzm_overrides={'conf_path':'/etc/zm','log_level_debug':0}
pyzm_overrides={'log_level_debug':5}

# This is an optional file
# If specified, you can specify tokens with secret values in that file
# and only refer to the tokens in your main config file
secrets = /etc/zm/secrets.ini

# portal/user/password are needed if you plan on using ZM's legacy
# auth mechanism to get images
portal=!ZM_PORTAL
user=!ZM_USER
password=!ZM_PASSWORD

# api portal is needed if you plan to use tokens to get images
# requires ZM 1.33 or above
api_portal=!ZM_API_PORTAL

allow_self_signed=yes
# if yes, last detection will be stored for monitors
# and bounding boxes that match, along with labels
# will be discarded for new detections. This may be helpful
# in getting rid of static objects that get detected
# due to some motion. 
match_past_detections=no
# The max difference in area between the objects if match_past_detections is on
# can also be specified in px like 300px. Default is 5%. Basically, bounding boxes of the same
# object can differ ever so slightly between detections. Contributor @neillbell put in this PR
# to calculate the difference in areas and based on his tests, 5% worked well. YMMV. Change it if needed.
past_det_max_diff_area=5%

max_detection_size=90%

# sequence of models to run for detection
detection_sequence=object,face,alpr
# if all, then we will loop through all models
# if first then the first success will break out
detection_mode=all

# If you need basic auth to access ZM 
#basic_user=user
#basic_password=password

# base data path for various files the ES+OD needs
# we support in-config variable substitution as well
base_data_path=/var/lib/zmeventnotification

# global settings for 
# bestmatch, alarm, snapshot OR a specific frame ID
frame_id=bestmatch

# this is the width (in px) to resize the image to before analysis is done
resize=800
# set to yes, if you want to remove images after analysis
# setting to yes is recommended to avoid filling up space
# keep to no while debugging/inspecting masks
# Note this does NOT delete debug images later
delete_after_analyze=yes

# If yes, will write an image called <filename>-bbox.jpg as well
# which contains the bounding boxes. This has NO relation to 
# write_image_to_zm 
# Typically, if you enable delete_after_analyze you may
# also want to set  write_debug_image to no. 
write_debug_image=no

# if yes, will write an image with bounding boxes
# this needs to be yes to be able to write a bounding box
# image to ZoneMinder that is visible from its console
write_image_to_zm=yes


# Adds percentage to detections
# hog/face shows 100% always
show_percent=yes

# color to be used to draw the polygons you specified
poly_color=(255,255,255)
poly_thickness=2
#import_zm_zones=yes
only_triggered_zm_zones=no

# This section gives you an option to get brief animations 
# of the event, delivered as part of the push notification to mobile devices
# Animations are created only if an object is detected
#
# NOTE: This will DELAY the time taken to send you push notifications
# It will first try to create the animation, which may take up to a minute
# depending on how soon it gets access to frames. See notes below

[animation]

# If yes, object detection will attempt to create 
# a short GIF file around the object detection frame
# that can be sent via push notifications for instant playback
# Note this requires additional software support. Default:no
create_animation=no

# Format of animation burst
# valid options are "mp4", "gif", "mp4,gif"
# Note that gifs will be of a shorter duration
# as they take up much more disk space than mp4
animation_types='mp4,gif'

# default width of animation image. Be cautious when you increase this
# most mobile platforms give a very brief amount of time (in seconds) 
# to download the image.
# Given your ZM instance will be serving the image, it will anyway be slow
# Making the total animation size bigger resulted in the notification not 
# getting an image at all (timed out)
animation_width=640

# When an event is detected, ZM writes frames a little late
# On top of that, it looks like with caching enabled, the API layer doesn't
# get access to DB records for much longer (around 30 seconds), at least on my 
# system. animation_retry_sleep refers to how long to wait before trying to grab
# frame information if it failed. animation_max_tries defines how many times it 
# will try and retrieve frames before it gives up
animation_retry_sleep=15
animation_max_tries=4

# if animation_types includes gif, then we can generate a fast preview gif:
# every second frame is skipped and the frame rate doubled
# to give a quick preview. Default (no)
fast_gif=no

[remote]
# You can now run the machine learning code on a different server
# This frees up your ZM server for other things
# To do this, you need to set up https://github.com/pliablepixels/mlapi
# on your desired server and configure it with a user. See its instructions
# once set up, you can choose to do object/face recognition via that 
# external server

# URL that will be used
#ml_gateway=http://192.168.1.183:5000/api/v1
#ml_gateway=http://10.6.1.13:5000/api/v1
#ml_gateway=http://192.168.1.21:5000/api/v1
#ml_gateway=http://10.9.0.2:5000/api/v1
#ml_fallback_local=yes
# API/password for remote gateway
ml_user=!ML_USER
ml_password=!ML_PASSWORD


# config for object
[object]

# If you are using legacy format (use_sequence=no) then these parameters will 
# be used during ML inferencing
object_detection_pattern=(person|car|motorbike|bus|truck|boat)
object_min_confidence=0.3
object_framework=coral_edgetpu
object_processor=tpu
object_weights={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names

# If you are using the new ml_sequence format (use_sequence=yes) then 
# you can fiddle with these parameters and look at ml_sequence later
# Note that these can be named anything. You can add custom variables, ad-infinitum

# Google Coral
#tpu_object_weights={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
#tpu_object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names
#tpu_object_framework=coral_edgetpu
#tpu_object_processor=tpu
#tpu_min_confidence=0.6

# Yolo v4 on GPU (falls back to CPU if no GPU)
yolo4_object_weights={{base_data_path}}/models/yolov4/yolov4.weights
yolo4_object_labels={{base_data_path}}/models/yolov4/coco.names
yolo4_object_config={{base_data_path}}/models/yolov4/yolov4.cfg
yolo4_object_framework=opencv
yolo4_object_processor=cpu

# Yolo v3 on GPU (falls back to CPU if no GPU)
yolo3_object_weights={{base_data_path}}/models/yolov3/yolov3.weights
yolo3_object_labels={{base_data_path}}/models/yolov3/coco.names
yolo3_object_config={{base_data_path}}/models/yolov3/yolov3.cfg
yolo3_object_framework=opencv
yolo3_object_processor=cpu

# Tiny Yolo V4 on GPU (falls back to CPU if no GPU)
tinyyolo_object_config={{base_data_path}}/models/tinyyolov4/yolov4-tiny.cfg
tinyyolo_object_weights={{base_data_path}}/models/tinyyolov4/yolov4-tiny.weights
tinyyolo_object_labels={{base_data_path}}/models/tinyyolov4/coco.names
tinyyolo_object_framework=opencv
tinyyolo_object_processor=cpu


[face]
face_detection_pattern=.*
known_images_path={{base_data_path}}/known_faces
unknown_images_path={{base_data_path}}/unknown_faces
save_unknown_faces=yes
save_unknown_faces_leeway_pixels=100
face_detection_framework=dlib

# read https://github.com/ageitgey/face_recognition/wiki/Face-Recognition-Accuracy-Problems
# read https://github.com/ageitgey/face_recognition#automatically-find-all-the-faces-in-an-image
# and play around

# quick overview: 
# num_jitters is how many times to distort images 
# upsample_times is how many times to upsample input images (for small faces, for example)
# model can be hog or cnn. cnn may be more accurate, but I haven't found it to be 

face_num_jitters=1
face_model=cnn
face_upsample_times=1

# This is the maximum distance of the face under test to the closest matched
# face cluster. The larger this distance, the larger the chances of misclassification.
#
face_recog_dist_threshold=0.6
# When we are first training the face recognition model with known faces,
# by default we use hog because we assume you will supply well lit, front facing faces
# However, if you are planning to train with profile photos or hard to see faces, you
# may want to change this to cnn. Note that this increases training time, but training only
# happens once, unless you retrain again by removing the training model
face_train_model=cnn
#if a face doesn't match known names, we will detect it as 'unknown face'
# you can change that to something that suits your personality better ;-)
#unknown_face_name=invader

[alpr]
alpr_detection_pattern=.*
alpr_use_after_detection_only=yes
# Many of the ALPR providers offer both a cloud version
# and local SDK version. Sometimes local SDK format differs from
# the cloud instance. Set this to local or cloud. Default cloud
alpr_api_type=cloud

# -----| If you are using plate recognizer | ------
alpr_service=plate_recognizer
#alpr_service=open_alpr_cmdline

# If you want to host a local SDK https://app.platerecognizer.com/sdk/
#alpr_url=http://192.168.1.21:8080/alpr
# Plate recog replace with your api key
alpr_key=!PLATEREC_ALPR_KEY
# if yes, then it will log usage statistics of the ALPR service
platerec_stats=yes
# If you want to specify regions. See http://docs.platerecognizer.com/#regions-supported
#platerec_regions=['us','cn','kr']
# minimal confidence for actually detecting a plate
platerec_min_dscore=0.1
# minimal confidence for the translated text
platerec_min_score=0.2


# ----| If you are using openALPR |-----
#alpr_service=open_alpr
#alpr_key=!OPENALPR_ALPR_KEY

# For an explanation of params, see http://doc.openalpr.com/api/?api=cloudapi
#openalpr_recognize_vehicle=1
#openalpr_country=us
#openalpr_state=ca
# openalpr returns percents, but we convert to between 0 and 1
#openalpr_min_confidence=0.3

# ----| If you are using openALPR command line |-----

openalpr_cmdline_binary=alpr

# Do an alpr -help to see options, plug them in here
# like say '-j -p ca -c US' etc.
# keep the -j because its JSON

# Note that alpr_pattern is honored
# For the rest, just stuff them in the cmd line options

openalpr_cmdline_params=-j -d
openalpr_cmdline_min_confidence=0.3


## Monitor specific settings


# Examples:
# Let's assume your monitor ID is 999
[monitor-5]
# my driveway
match_past_detections=no
#wait=5
object_detection_pattern=(car)

# Advanced example - here we want anything except potted plant
# exclusion in regular expressions is not
# as straightforward as you may think, so 
# follow this pattern
# object_detection_pattern = ^(?!object1|object2|objectN)
# the characters in front implement what is 
# called a negative look ahead

# object_detection_pattern=^(?!potted plant|pottedplant|bench|broccoli)
#alpr_detection_pattern=^(.*x11)
#delete_after_analyze=no
#detection_pattern=.*
#import_zm_zones=yes

# polygon areas where object detection will be done.
# You can name them anything except the keywords defined in the optional
# params below. You can put as many polygons as you want per [monitor-<mid>]
# (see examples).

#my_driveway=306,356 1003,341 1074,683 154,715

# You are now allowed to specify detection pattern per zone
# the format is <polygonname>_zone_detection_pattern=<regexp>
# So if your polygon is called my_driveway, its associated
# detection pattern will be my_driveway_zone_detection_pattern
# If none is specified, the value in object_detection_pattern 
# will be used
# This also applies to ZM zones. Let's assume you have 
# import_zm_zones=yes and let's suppose you have a zone in ZM
# called Front_Door. In that case, all you need to do is put in a 
# front_door_zone_detection_pattern=(person|car) here
#
# NOTE: ZM Zones are converted to lowercase, and spaces are replaced
# with underscores

#my_driveway_zone_detection_pattern=(person)
#some_other_area=0,0 200,300 700,900
# use license plate recognition for my driveway
# see alpr section later for more data needed
resize=no
detection_sequence=object

[ml]
# When enabled, you can specify complex ML inferencing logic in ml_sequence
# Anything specified in ml_sequence will override any other ml attributes

# Also, when enabled, stream_sequence will override any other frame related
# attributes 
use_sequence = yes

# if enabled, will not grab exclusive locks before running inferencing
# locking seems to cause issues on some unique file systems
disable_locks= no

# Chain of frames 
# See https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#understanding-detection-configuration
# Also see https://pyzm.readthedocs.io/en/latest/source/pyzm.html#pyzm.ml.detect_sequence.DetectSequence.detect_stream
# Very important: Make sure final ending brace is indented 
stream_sequence = {
        'frame_strategy': 'most_models',
        'frame_set': 'snapshot,alarm',
        'contig_frames_before_error': 5,
        'max_attempts': 3,
        'sleep_between_attempts': 4,
		'resize':800

    }

# Chain of ML models to use
# See https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#understanding-detection-configuration
# Also see https://pyzm.readthedocs.io/en/latest/source/pyzm.html#pyzm.ml.detect_sequence.DetectSequence
# Very important: Make sure final ending brace is indented 
ml_sequence= {
		'general': {
			'model_sequence': 'object, object',
            'disable_locks': '{{disable_locks}}',

		},
		'object': {
			'general':{
				'pattern':'{{object_detection_pattern}}',
				'same_model_sequence_strategy': 'first' # also 'most', 'most_unique's
			},
			'sequence': [{
				'object_config':'{{yolo3_object_config}}',
				'object_weights':'{{yolo3_object_weights}}',
				'object_labels': '{{yolo3_object_labels}}',
				'object_min_confidence': {{object_min_confidence}},
				'object_framework':'{{yolo3_object_framework}}',
				'object_processor': '{{yolo3_object_processor}}',
				'cpu_max_processes': {{cpu_max_processes}},
				'cpu_max_lock_wait': {{cpu_max_lock_wait}},
                'max_detection_size':'{{max_detection_size}}'

                 }]
		}
	}
repomanz
Posts: 8
Joined: Wed Jan 06, 2021 4:11 pm

Re: ES 6.1.5 (object detection not working)

Post by repomanz »

It's not clear to me from the template whether the parameters above ml_sequence are leveraged.

This is working for me:

Code: Select all

ml_sequence= {
                'general': {
                        'model_sequence': 'object'
                },
                'object': {
                        'general':{
                                'pattern':'(person|dog|cat)',
                                'same_model_sequence_strategy': 'first' # also 'most', 'most_unique's
                        },
                        'sequence': [{
                                'object_config':'{{base_data_path}}/models/yolov4/yolov4.cfg',
                                'object_weights':'{{base_data_path}}/models/yolov4/yolov4.weights',
                                'object_labels': '{{base_data_path}}/models/yolov4/coco.names',
                                'object_min_confidence': 0.65,
                                'object_framework':'opencv',
                                'object_processor': 'gpu'
                        }]
                }
        }
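If it still errors out, it may be worth confirming that the config the script is actually loading contains the [ml] section at all. A quick check (a sketch, using the config path from your command above):

Code: Select all

# does the [ml] section exist, and are use_sequence / ml_sequence defined in it?
grep -n -A 3 '^\[ml\]' /etc/zm/objectconfig.ini
grep -n 'use_sequence\|ml_sequence' /etc/zm/objectconfig.ini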
warcanoid
Posts: 35
Joined: Mon Feb 17, 2020 3:22 pm

Re: ES 6.1.5 (object detection not working)

Post by warcanoid »

Thank you for the help, but still the same error :(

So the script won't pick up the ml_sequence settings!?
Last edited by warcanoid on Thu Jan 07, 2021 4:59 pm, edited 1 time in total.
repomanz
Posts: 8
Joined: Wed Jan 06, 2021 4:11 pm

Re: ES 6.1.5 (object detection not working)

Post by repomanz »

Full objectconfig.ini:

Code: Select all

# Configuration file for object detection

# NOTE: ALL parameters here can be overridden
# on a per monitor basis if you want. Just
# duplicate it inside the correct [monitor-<num>] section

# You can create your own custom attributes in the [custom] section

[general]

# Please don't change this. It is used by the config upgrade script
version=1.2

# You can now limit the # of detection process
# per target processor. If not specified, default is 1
# Other detection processes will wait to acquire lock

cpu_max_processes=3
tpu_max_processes=1
gpu_max_processes=1

# Time to wait in seconds per processor to be free, before
# erroring out. Default is 120 (2 mins)
cpu_max_lock_wait=100
tpu_max_lock_wait=100
gpu_max_lock_wait=100


#pyzm_overrides={'conf_path':'/etc/zm','log_level_debug':0}
pyzm_overrides={'log_level_debug':5}

# This is an optional file
# If specified, you can specify tokens with secret values in that file
# and only refer to the tokens in your main config file
secrets = /etc/zm/secrets.ini

# portal/user/password are needed if you plan on using ZM's legacy
# auth mechanism to get images
portal=!ZM_PORTAL
user=!ZM_USER
password=!ZM_PASSWORD

# api portal is needed if you plan to use tokens to get images
# requires ZM 1.33 or above
api_portal=!ZM_API_PORTAL

allow_self_signed=yes
# if yes, last detection will be stored for monitors
# and bounding boxes that match, along with labels
# will be discarded for new detections. This may be helpful
# in getting rid of static objects that get detected
# due to some motion. 
match_past_detections=no
# The max difference in area between the objects if match_past_detections is on
# can also be specified in px like 300px. Default is 5%. Basically, bounding boxes of the same
# object can differ ever so slightly between detections. Contributor @neillbell put in this PR
# to calculate the difference in areas and based on his tests, 5% worked well. YMMV. Change it if needed.
past_det_max_diff_area=5%

max_detection_size=90%

# sequence of models to run for detection
detection_sequence=object,face,alpr
# if all, then we will loop through all models
# if first then the first success will break out
detection_mode=object

# If you need basic auth to access ZM 
#basic_user=user
#basic_password=password

# base data path for various files the ES+OD needs
# we support in-config variable substitution as well
base_data_path=/var/lib/zmeventnotification

# global settings for 
# bestmatch, alarm, snapshot OR a specific frame ID
frame_id=bestmatch

# this is the width (in px) to resize the image to before analysis is done
resize=800
# set to yes, if you want to remove images after analysis
# setting to yes is recommended to avoid filling up space
# keep to no while debugging/inspecting masks
# Note this does NOT delete debug images later
delete_after_analyze=yes

# If yes, will write an image called <filename>-bbox.jpg as well
# which contains the bounding boxes. This has NO relation to 
# write_image_to_zm 
# Typically, if you enable delete_after_analyze you may
# also want to set  write_debug_image to no. 
write_debug_image=no

# if yes, will write an image with bounding boxes
# this needs to be yes to be able to write a bounding box
# image to ZoneMinder that is visible from its console
write_image_to_zm=yes


# Adds percentage to detections
# hog/face shows 100% always
show_percent=yes

# color to be used to draw the polygons you specified
poly_color=(255,255,255)
poly_thickness=2
#import_zm_zones=yes
only_triggered_zm_zones=no

# This section gives you an option to get brief animations 
# of the event, delivered as part of the push notification to mobile devices
# Animations are created only if an object is detected
#
# NOTE: This will DELAY the time taken to send you push notifications
# It will first try to create the animation, which may take up to a minute
# depending on how soon it gets access to frames. See notes below

[animation]

# If yes, object detection will attempt to create 
# a short GIF file around the object detection frame
# that can be sent via push notifications for instant playback
# Note this requires additional software support. Default:no
create_animation=no

# Format of animation burst
# valid options are "mp4", "gif", "mp4,gif"
# Note that gifs will be of a shorter duration
# as they take up much more disk space than mp4
animation_types='mp4,gif'

# default width of animation image. Be cautious when you increase this
# most mobile platforms give a very brief amount of time (in seconds) 
# to download the image.
# Given your ZM instance will be serving the image, it will anyway be slow
# Making the total animation size bigger resulted in the notification not 
# getting an image at all (timed out)
animation_width=640

# When an event is detected, ZM writes frames a little late
# On top of that, it looks like with caching enabled, the API layer doesn't
# get access to DB records for much longer (around 30 seconds), at least on my 
# system. animation_retry_sleep refers to how long to wait before trying to grab
# frame information if it failed. animation_max_tries defines how many times it 
# will try and retrieve frames before it gives up
animation_retry_sleep=15
animation_max_tries=4

# if animation_types includes gif, then we can generate a fast preview gif:
# every second frame is skipped and the frame rate doubled
# to give a quick preview. Default (no)
fast_gif=no

[remote]
# You can now run the machine learning code on a different server
# This frees up your ZM server for other things
# To do this, you need to set up https://github.com/pliablepixels/mlapi
# on your desired server and configure it with a user. See its instructions
# once set up, you can choose to do object/face recognition via that 
# external server

# URL that will be used
#ml_gateway=http://192.168.1.183:5000/api/v1
#ml_gateway=http://10.6.1.13:5000/api/v1
#ml_gateway=http://192.168.1.21:5000/api/v1
#ml_gateway=http://10.9.0.2:5000/api/v1
#ml_fallback_local=yes
# API/password for remote gateway
ml_user=!ML_USER
ml_password=!ML_PASSWORD


# config for object
[object]

# If you are using legacy format (use_sequence=no) then these parameters will 
# be used during ML inferencing
object_detection_pattern=(person|car|motorbike|bus|truck|boat)
object_min_confidence=0.3
object_framework=coral_edgetpu
object_processor=tpu
object_weights={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names

# If you are using the new ml_sequence format (use_sequence=yes) then 
# you can fiddle with these parameters and look at ml_sequence later
# Note that these can be named anything. You can add custom variables, ad-infinitum

# Google Coral
tpu_object_weights={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
tpu_object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names
tpu_object_framework=coral_edgetpu
tpu_object_processor=tpu
tpu_min_confidence=0.6

# Yolo v4 on GPU (falls back to CPU if no GPU)
yolo4_object_weights={{base_data_path}}/models/yolov4/yolov4.weights
yolo4_object_labels={{base_data_path}}/models/yolov4/coco.names
yolo4_object_config={{base_data_path}}/models/yolov4/yolov4.cfg
yolo4_object_framework=opencv
yolo4_object_processor=gpu

# Yolo v3 on GPU (falls back to CPU if no GPU)
yolo3_object_weights={{base_data_path}}/models/yolov3/yolov3.weights
yolo3_object_labels={{base_data_path}}/models/yolov3/coco.names
yolo3_object_config={{base_data_path}}/models/yolov3/yolov3.cfg
yolo3_object_framework=opencv
yolo3_object_processor=gpu

# Tiny Yolo V4 on GPU (falls back to CPU if no GPU)
tinyyolo_object_config={{base_data_path}}/models/tinyyolov4/yolov4-tiny.cfg
tinyyolo_object_weights={{base_data_path}}/models/tinyyolov4/yolov4-tiny.weights
tinyyolo_object_labels={{base_data_path}}/models/tinyyolov4/coco.names
tinyyolo_object_framework=opencv
tinyyolo_object_processor=gpu


[face]
face_detection_pattern=.*
known_images_path={{base_data_path}}/known_faces
unknown_images_path={{base_data_path}}/unknown_faces
save_unknown_faces=yes
save_unknown_faces_leeway_pixels=100
face_detection_framework=dlib

# read https://github.com/ageitgey/face_recognition/wiki/Face-Recognition-Accuracy-Problems
# read https://github.com/ageitgey/face_recognition#automatically-find-all-the-faces-in-an-image
# and play around

# quick overview: 
# num_jitters is how many times to distort images 
# upsample_times is how many times to upsample input images (for small faces, for example)
# model can be hog or cnn. cnn may be more accurate, but I haven't found it to be 

face_num_jitters=1
face_model=cnn
face_upsample_times=1

# This is the maximum distance of the face under test to the closest matched
# face cluster. The larger this distance, the larger the chances of misclassification.
#
face_recog_dist_threshold=0.6
# When we are first training the face recognition model with known faces,
# by default we use hog because we assume you will supply well lit, front facing faces
# However, if you are planning to train with profile photos or hard to see faces, you
# may want to change this to cnn. Note that this increases training time, but training only
# happens once, unless you retrain again by removing the training model
face_train_model=cnn
#if a face doesn't match known names, we will detect it as 'unknown face'
# you can change that to something that suits your personality better ;-)
#unknown_face_name=invader

[alpr]
alpr_detection_pattern=.*
alpr_use_after_detection_only=yes
# Many of the ALPR providers offer both a cloud version
# and local SDK version. Sometimes local SDK format differs from
# the cloud instance. Set this to local or cloud. Default cloud
alpr_api_type=cloud

# -----| If you are using plate recognizer | ------
alpr_service=plate_recognizer
#alpr_service=open_alpr_cmdline

# If you want to host a local SDK https://app.platerecognizer.com/sdk/
#alpr_url=http://192.168.1.21:8080/alpr
# Plate recog replace with your api key
alpr_key=!PLATEREC_ALPR_KEY
# if yes, then it will log usage statistics of the ALPR service
platerec_stats=yes
# If you want to specify regions. See http://docs.platerecognizer.com/#regions-supported
#platerec_regions=['us','cn','kr']
# minimal confidence for actually detecting a plate
platerec_min_dscore=0.1
# minimal confidence for the translated text
platerec_min_score=0.2


# ----| If you are using openALPR |-----
#alpr_service=open_alpr
#alpr_key=!OPENALPR_ALPR_KEY

# For an explanation of params, see http://doc.openalpr.com/api/?api=cloudapi
#openalpr_recognize_vehicle=1
#openalpr_country=us
#openalpr_state=ca
# openalpr returns percents, but we convert to between 0 and 1
#openalpr_min_confidence=0.3

# ----| If you are using openALPR command line |-----

openalpr_cmdline_binary=alpr

# Do an alpr -help to see options, plug them in here
# like say '-j -p ca -c US' etc.
# keep the -j because its JSON

# Note that alpr_pattern is honored
# For the rest, just stuff them in the cmd line options

openalpr_cmdline_params=-j -d
openalpr_cmdline_min_confidence=0.3


## Monitor specific settings


# Examples:
# Let's assume your monitor ID is 999
[monitor-999]
# my driveway
match_past_detections=no
wait=5
object_detection_pattern=(person)

# Advanced example - here we want anything except potted plant
# exclusion in regular expressions is not
# as straightforward as you may think, so 
# follow this pattern
# object_detection_pattern = ^(?!object1|object2|objectN)
# the characters in front implement what is 
# called a negative look ahead

# object_detection_pattern=^(?!potted plant|pottedplant|bench|broccoli)
#alpr_detection_pattern=^(.*x11)
#delete_after_analyze=no
#detection_pattern=.*
#import_zm_zones=yes

# polygon areas where object detection will be done.
# You can name them anything except the keywords defined in the optional
# params below. You can put as many polygons as you want per [monitor-<mid>]
# (see examples).

my_driveway=306,356 1003,341 1074,683 154,715

# You are now allowed to specify detection pattern per zone
# the format is <polygonname>_zone_detection_pattern=<regexp>
# So if your polygon is called my_driveway, its associated
# detection pattern will be my_driveway_zone_detection_pattern
# If none is specified, the value in object_detection_pattern 
# will be used
# This also applies to ZM zones. Let's assume you have 
# import_zm_zones=yes and let's suppose you have a zone in ZM
# called Front_Door. In that case, all you need to do is put in a 
# front_door_zone_detection_pattern=(person|car) here
#
# NOTE: ZM Zones are converted to lowercase, and spaces are replaced
# with underscores

my_driveway_zone_detection_pattern=(person)
some_other_area=0,0 200,300 700,900
# use license plate recognition for my driveway
# see alpr section later for more data needed
resize=no
detection_sequence=object,alpr


[ml]
# When enabled, you can specify complex ML inferencing logic in ml_sequence
# Anything specified in ml_sequence will override any other ml attributes

# Also, when enabled, stream_sequence will override any other frame related
# attributes 
use_sequence = yes

# if enabled, will not grab exclusive locks before running inferencing
# locking seems to cause issues on some unique file systems
disable_locks= no

# Chain of frames 
# See https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#understanding-detection-configuration
# Also see https://pyzm.readthedocs.io/en/latest/source/pyzm.html#pyzm.ml.detect_sequence.DetectSequence.detect_stream
# Very important: Make sure final ending brace is indented 
stream_sequence = {
        'frame_strategy': 'most_models',
        'frame_set': 'snapshot,alarm',
        'contig_frames_before_error': 5,
        'max_attempts': 3,
        'sleep_between_attempts': 4,
	'resize':800
    }

# Chain of ML models to use
# See https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#understanding-detection-configuration
# Also see https://pyzm.readthedocs.io/en/latest/source/pyzm.html#pyzm.ml.detect_sequence.DetectSequence
# Very important: Make sure final ending brace is indented 

ml_sequence= {
                'general': {
                        'model_sequence': 'object'
                },
                'object': {
                        'general':{
                                'pattern':'(person|dog|cat)',
                                'same_model_sequence_strategy': 'first' # also 'most', 'most_unique's
                        },
                        'sequence': [{
                                'object_config':'{{base_data_path}}/models/yolov4/yolov4.cfg',
                                'object_weights':'{{base_data_path}}/models/yolov4/yolov4.weights',
                                'object_labels': '{{base_data_path}}/models/yolov4/coco.names',
                                'object_min_confidence': 0.65,
                                'object_framework':'opencv',
                                'object_processor': 'gpu'
                        }]
                }
        }
warcanoid
Posts: 35
Joined: Mon Feb 17, 2020 3:22 pm

Re: ES 6.1.5 (object detection not working)

Post by warcanoid »

Copied your config, changed only gpu to cpu at the end, same error :evil: :x :(
Minglarn
Posts: 7
Joined: Mon Sep 14, 2020 7:57 am

Re: ES 6.1.5 (object detection not working)

Post by Minglarn »

repomanz wrote: Wed Jan 06, 2021 7:58 pm After going into the docker container and manually running this:

sudo pip3 install opencv-python

ES/ML is now working.
Doesn't work (dlandon's docker version).
All I get is:

Code: Select all

ModuleNotFoundError: No module named 'skbuild'
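From what I've read, that usually means pip is too old to use a prebuilt opencv-python wheel and falls back to building from source, which needs scikit-build. What I'd try (a sketch; package names assumed for an Ubuntu-based image):

Code: Select all

# sketch: upgrade pip so it can pick up a prebuilt wheel, then retry
pip3 install --upgrade pip setuptools wheel
pip3 install opencv-python
# alternative: use the distro package that already ships cv2
apt-get update && apt-get install -y python3-opencv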
warcanoid
Posts: 35
Joined: Mon Feb 17, 2020 3:22 pm

Re: ES 6.1.5 (object detection not working)

Post by warcanoid »

warcanoid wrote: Thu Jan 07, 2021 9:25 am I also have problems with object detection not working. Manual command:

Code: Select all

root@d39668805f47:/var/lib/zmeventnotification/bin# sudo -u www-data /var/lib/zmeventnotification/bin/zm_detect.py --config /etc/zm/objectconfig.ini  --eventid 1 --monitorid 1 --debug
01/07/21 10:01:39 zmesdetect_m1[2289] INF ZMLog.py:212 [Setting up signal handler for logs]

01/07/21 10:01:39 zmesdetect_m1[2289] INF zm_detect.py:240 [---------| pyzm version:0.3.25, hook version:6.0.5,  ES version:6.1.5 , OpenCV version:4.5.0|------------]

DBG1 [zmesdetect_m1] [secret filename: /etc/zm/secrets.ini]
DBG2 [zmesdetect_m1] [Secret token found in config: !ZM_PORTAL]
DBG2 [zmesdetect_m1] [Secret token found in config: !ZM_API_PORTAL]
DBG2 [zmesdetect_m1] [Secret token found in config: !ZM_USER]
DBG2 [zmesdetect_m1] [Secret token found in config: !ZM_PASSWORD]
DBG1 [zmesdetect_m1] [allowing self-signed certs to work...]
DBG2 [zmesdetect_m1] [[monitor-1] overrides key:detection_sequence with value:object]
DBG2 [zmesdetect_m1] [[monitor-1] overrides key:object_detection_pattern with value:(car|truck|bicycle|motorbike|bus|person|dog|cat)]
01/07/21 12:19:28 zmesdetect_m1[1091] INF zm_detect.py:265 [Importing local classes for Object/Face]

01/07/21 12:19:28 zmesdetect_m1[1091] INF zm_detect.py:288 [Connecting with ZM APIs]

DBG2 [zmesdetect_m1] [API SSL certificate check has been disbled]
DBG1 [zmesdetect_m1] [using username/password for login]
DBG2 [zmesdetect_m1] [Using new token API]
DBG1 [zmesdetect_m1] [Access token expires on:2021-01-07 14:19:29.061450 [7200s]]
DBG1 [zmesdetect_m1] [Refresh token expires on:2021-01-08 12:19:29.062382 [86400s]]
DBG3 [zmesdetect_m1] [No need to relogin as access token still has 119.99997331666667 minutes remaining]
DBG4 [zmesdetect_m1] [make_request called with url=https://gggggggfferI'}]
01/07/21 12:19:29 zmesdetect_m1[1091] FAT zm_detect.py:534 [Unrecoverable error:'ml_sequence' Traceback:Traceback (most recent call last):
  File "/var/lib/zmeventnotification/bin/zm_detect.py", line 531, in <module>
    main_handler()
  File "/var/lib/zmeventnotification/bin/zm_detect.py", line 295, in main_handler
    if g.config['ml_sequence'] and g.config['use_sequence'] == 'yes':
KeyError: 'ml_sequence'
]
Please help. I have been reinstalling for two days without any success. Everything was working before the update.
I get this error if I enable use_hooks=yes.
If I set use_hooks=no, I get the notification, but without a picture and without the detected object drawn on the picture.
I also get this error:
2021-01-08 08:40:16 web_php 628 FAT File /mnt/zm/5/2021-01-08/1160/objdetect.jpg does not exist. Please make sure store_frame_in_zm is enabled in the object detection config /usr/share/zoneminder/www/views/image.php

Is something broken in my object detection?

I have deleted and reinstalled everything, but as soon as I enable hooks it stops working. I have 3 different objectconfig.ini files (including another user's working config) but all give the same error! Shouldn't it work with the old config when I set use_sequence=no!? It gives the same error.

This line gives the error:

Code: Select all

   if g.config['ml_sequence'] and g.config['use_sequence'] == 'yes':
why???
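As far as I can tell, that line is a plain dictionary lookup, so the KeyError just means 'ml_sequence' never made it into g.config when the file was parsed. Illustration only (not the actual zm_detect.py code):

Code: Select all

# a missing key raises KeyError on a plain lookup, while .get() would return None
python3 -c "cfg = {}; print(cfg.get('ml_sequence')); cfg['ml_sequence']"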
Magic919
Posts: 1381
Joined: Wed Sep 18, 2013 6:56 am

Re: ES 6.1.5 (object detection not working)

Post by Magic919 »

Maybe it's this:

Code: Select all

hook version:6.0.5,  ES version:6.1.5
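i.e. the hook scripts report 6.0.5 while the ES is 6.1.5, so they look out of sync. One way to bring them back in line (a sketch; the repo path and release tag are assumptions):

Code: Select all

# sketch: re-run the installer from the release matching your ES version
cd /path/to/zmeventnotification          # your cloned source tree (placeholder path)
git fetch --tags && git checkout v6.1.5  # release tag name assumed
sudo ./install.sh                        # choose to install/upgrade the hooks when prompted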
-