I am lost in the configuration of zmeventnotification

Discussion topics related to mobile applications and ZoneMinder Event Server (including machine learning)
titof2375
Posts: 66
Joined: Sat Dec 12, 2020 2:32 pm

Re: I am lost in the configuration of zmeventnotification

Post by titof2375 »

For the ssh command, nothing. I went in front of the camera and that is the resulting log. I'm sorry, I completely forgot to note it.
tsp84
Posts: 227
Joined: Thu Dec 24, 2020 4:04 am

Re: I am lost in the configuration of zmeventnotification

Post by tsp84 »

Are you able to copy and paste the output instead of screenshots?

We need to take a look at all your .ini files.

Code: Select all

cat /etc/zm/objectconfig.ini
cat /etc/zm/zmeventnotification.ini
cat /etc/zm/secrets.ini
Do you know how to use Pastebin or GitHub gists?
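If not, one quick option (untested here, and assuming the machine has outbound access to the public termbin.com paste service) is to pipe each file to it with netcat and post the URLs it prints back. Just redact passwords and API keys, especially anything from secrets.ini, before sharing:

Code: Select all

cat /etc/zm/objectconfig.ini | nc termbin.com 9999
cat /etc/zm/zmeventnotification.ini | nc termbin.com 9999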
titof2375
Posts: 66
Joined: Sat Dec 12, 2020 2:32 pm

Re: I am lost in the configuration of zmeventnotification

Post by titof2375 »

Code: Select all

Last login: Thu Mar 18 17:54:17 2021 from 192.168.1.58
root@ZoneMinder:~# cat /etc/zm/objectconfig.ini
# Configuration file for object detection

# NOTE: ALL parameters here can be overridden
# on a per monitor basis if you want. Just
# duplicate it inside the correct [monitor-<num>] section

# You can create your own custom attributes in the [custom] section

[general]

# Please don't change this. It is used by the config upgrade script
version=1.2

# You can now limit the # of detection processes
# per target processor. If not specified, the default is 1
# Other detection processes will wait to acquire lock

cpu_max_processes=3
tpu_max_processes=1
gpu_max_processes=1

# Time to wait in seconds per processor to be free, before
# erroring out. Default is 120 (2 mins)
cpu_max_lock_wait=100
tpu_max_lock_wait=100
gpu_max_lock_wait=100


#pyzm_overrides={'conf_path':'/etc/zm','log_level_debug':0}
pyzm_overrides={'log_level_debug':5}

# This is an optional file
# If specified, you can specify tokens with secret values in that file
# and only refer to the tokens in your main config file
secrets = /etc/zm/secrets.ini

# portal/user/password are needed if you plan on using ZM's legacy
# auth mechanism to get images
portal=https://127.0.0.1/zm
user=admin
password=*****

# api portal is needed if you plan to use tokens to get images
# requires ZM 1.33 or above
api_portal=https://127.0.0.1/zm

allow_self_signed=yes
# if yes, last detection will be stored for monitors
# and bounding boxes that match, along with labels
# will be discarded for new detections. This may be helpful
# in getting rid of static objects that get detected
# due to some motion.
match_past_detections=no
# The max difference in area between the objects if match_past_detection is on
# can also be specified in px like 300px. Default is 5%. Basically, bounding boxes of the same
# object can slightly differ ever so slightly between detection. Contributor @neillbell put in this PR
# to calculate the difference in areas and based on his tests, 5% worked well. YMMV. Change it if needed.
past_det_max_diff_area=5%

max_detection_size=90%

# sequence of models to run for detection
detection_sequence=object,face,alpr
# if all, then we will loop through all models
# if first then the first success will break out
detection_mode=all

# If you need basic auth to access ZM
#basic_user=admin
#basic_password=******

# base data path for various files the ES+OD needs
# we support in config variable substitution as well
base_data_path=/var/lib/zmeventnotification

# global settings for
# bestmatch, alarm, snapshot OR a specific frame ID
frame_id=bestmatch

# resize the image to this width before analysis is done
resize=800
# set to yes, if you want to remove images after analysis
# setting to yes is recommended to avoid filling up space
# keep to no while debugging/inspecting masks
# Note this does NOT delete debug images later
delete_after_analyze=yes

# If yes, will write an image called <filename>-bbox.jpg as well
# which contains the bounding boxes. This has NO relation to
# write_image_to_zm
# Typically, if you enable delete_after_analyze you may
# also want to set  write_debug_image to no.
write_debug_image=no

# if yes, will write an image with bounding boxes
# this needs to be yes to be able to write a bounding box
# image to ZoneMinder that is visible from its console
write_image_to_zm=yes


# Adds percentage to detections
# hog/face shows 100% always
show_percent=yes

# color to be used to draw the polygons you specified
poly_color=(255,255,255)
poly_thickness=2
#import_zm_zones=yes
only_triggered_zm_zones=no

# This section gives you an option to get brief animations
# of the event, delivered as part of the push notification to mobile devices
# Animations are created only if an object is detected
#
# NOTE: This will DELAY the time taken to send you push notifications
# It will try to first create the animation, which may take up to a minute
# depending on how soon it gets access to frames. See notes below

[animation]

# If yes, object detection will attempt to create
# a short GIF file around the object detection frame
# that can be sent via push notifications for instant playback
# Note this requires additional software support. Default:no
create_animation=no

# Format of animation burst
# valid options are "mp4", "gif", "mp4,gif"
# Note that gifs will be of a shorter duration
# as they take up much more disk space than mp4
animation_types='mp4,gif'

# default width of animation image. Be cautious when you increase this
# most mobile platforms give a very brief amount of time (in seconds)
# to download the image.
# Given your ZM instance will be serving the image, it will anyway be slow
# Making the total animation size bigger resulted in the notification not
# getting an image at all (timed out)
animation_width=640

# When an event is detected, ZM writes frames a little late
# On top of that, it looks like with caching enabled, the API layer doesn't
# get access to DB records for much longer (around 30 seconds), at least on my
# system. animation_retry_sleep refers to how long to wait before trying to grab
# frame information if it failed. animation_max_tries defines how many times it
# will try and retrieve frames before it gives up
animation_retry_sleep=15
animation_max_tries=4

# if animation_types is gif then we can generate a fast preview gif
# every second frame is skipped and the frame rate doubled
# to give a quick preview. Default (no)
fast_gif=no

[remote]
# You can now run the machine learning code on a different server
# This frees up your ZM server for other things
# To do this, you need to setup https://github.com/pliablepixels/mlapi
# on your desired server and configure it with a user. See its instructions
# once set up, you can choose to do object/face recognition via that
# external server

# URL that will be used
#ml_gateway=http://192.168.1.183:5000/api/v1
#ml_gateway=http://10.6.1.13:5000/api/v1
#ml_gateway=http://192.168.1.21:5000/api/v1
#ml_gateway=http://10.9.0.2:5000/api/v1
#ml_fallback_local=yes
# API/password for remote gateway
ml_user=!ML_USER
ml_password=!ML_PASSWORD


# config for object
[object]

# If you are using legacy format (use_sequence=no) then these parameters will
# be used during ML inferencing
object_detection_pattern=(person|car|motorbike|bus|truck|boat)
object_min_confidence=0.3
object_framework=coral_edgetpu
object_processor=tpu
object_weights={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names

# If you are using the new ml_sequence format (use_sequence=yes) then
# you can fiddle with these parameters and look at ml_sequence later
# Note that these can be named anything. You can add custom variables, ad-infinitum

# Google Coral
# The mobiledet model came out in Nov 2020 and is supposed to be faster and more accurate but YMMV
tpu_object_weights_mobiledet={{base_data_path}}/models/coral_edgetpu/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite
tpu_object_weights_mobilenet={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
tpu_object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names
tpu_object_framework=coral_edgetpu
tpu_object_processor=tpu
tpu_min_confidence=0.6

# Yolo v4 on GPU (falls back to CPU if no GPU)
yolo4_object_weights={{base_data_path}}/models/yolov4/yolov4.weights
yolo4_object_labels={{base_data_path}}/models/yolov4/coco.names
yolo4_object_config={{base_data_path}}/models/yolov4/yolov4.cfg
yolo4_object_framework=opencv
yolo4_object_processor=gpu

# Yolo v3 on GPU (falls back to CPU if no GPU)
yolo3_object_weights={{base_data_path}}/models/yolov3/yolov3.weights
yolo3_object_labels={{base_data_path}}/models/yolov3/coco.names
yolo3_object_config={{base_data_path}}/models/yolov3/yolov3.cfg
yolo3_object_framework=opencv
yolo3_object_processor=gpu

# Tiny Yolo V4 on GPU (falls back to CPU if no GPU)
tinyyolo_object_config={{base_data_path}}/models/tinyyolov4/yolov4-tiny.cfg
tinyyolo_object_weights={{base_data_path}}/models/tinyyolov4/yolov4-tiny.weights
tinyyolo_object_labels={{base_data_path}}/models/tinyyolov4/coco.names
tinyyolo_object_framework=opencv
tinyyolo_object_processor=gpu


[face]
face_detection_pattern=.*
known_images_path={{base_data_path}}/known_faces
unknown_images_path={{base_data_path}}/unknown_faces
save_unknown_faces=yes
save_unknown_faces_leeway_pixels=100
face_detection_framework=dlib

# read https://github.com/ageitgey/face_recognition/wiki/Face-Recognition-Accuracy-Problems
# read https://github.com/ageitgey/face_recognition#automatically-find-all-the-faces-in-an-image
# and play around

# quick overview:
# num_jitters is how many times to distort images
# upsample_times is how many times to upsample input images (for small faces, for example)
# model can be hog or cnn. cnn may be more accurate, but I haven't found it to be

face_num_jitters=1
face_model=cnn
face_upsample_times=1

# This is maximum distance of the face under test to the closest matched
# face cluster. The larger this distance, the larger the chances of misclassification.
#
face_recog_dist_threshold=0.6
# When we are first training the face recognition model with known faces,
# by default we use hog because we assume you will supply well lit, front facing faces
# However, if you are planning to train with profile photos or hard to see faces, you
# may want to change this to cnn. Note that this increases training time, but training only
# happens once, unless you retrain again by removing the training model
face_train_model=cnn
#if a face doesn't match known names, we will detect it as 'unknown face'
# you can change that to something that suits your personality better ;-)
#unknown_face_name=invader

[alpr]
alpr_detection_pattern=.*
alpr_use_after_detection_only=yes
# Many of the ALPR providers offer both a cloud version
# and local SDK version. Sometimes local SDK format differs from
# the cloud instance. Set this to local or cloud. Default cloud
alpr_api_type=cloud

# -----| If you are using plate recognizer | ------
alpr_service=plate_recognizer
#alpr_service=open_alpr_cmdline

# If you want to host a local SDK https://app.platerecognizer.com/sdk/
#alpr_url=http://192.168.1.21:8080/alpr
# Plate Recognizer: replace with your API key
alpr_key=!PLATEREC_ALPR_KEY
# if yes, then it will log usage statistics of the ALPR service
platerec_stats=yes
# If you want to specify regions. See http://docs.platerecognizer.com/#regions-supported
#platerec_regions=['us','cn','kr']
# minimal confidence for actually detecting a plate
platerec_min_dscore=0.1
# minimal confidence for the translated text
platerec_min_score=0.2


# ----| If you are using openALPR |-----
#alpr_service=open_alpr
#alpr_key=!OPENALPR_ALPR_KEY

# For an explanation of params, see http://doc.openalpr.com/api/?api=cloudapi
#openalpr_recognize_vehicle=1
#openalpr_country=us
#openalpr_state=ca
# openalpr returns percents, but we convert to between 0 and 1
#openalpr_min_confidence=0.3

# ----| If you are using openALPR command line |-----

openalpr_cmdline_binary=alpr

# Do an alpr -help to see options, plug them in here
# like say '-j -p ca -c US' etc.
# keep the -j because it's JSON

# Note that alpr_pattern is honored
# For the rest, just stuff them in the cmd line options

openalpr_cmdline_params=-j -d
openalpr_cmdline_min_confidence=0.3


## Monitor specific settings


# Examples:
# Let's assume your monitor ID is 999
[monitor-999]
# my driveway
match_past_detections=no
wait=5
object_detection_pattern=(person)

# Advanced example - here we want anything except potted plant
# exclusion in regular expressions is not
# as straightforward as you may think, so
# follow this pattern
# object_detection_pattern = ^(?!object1|object2|objectN)
# the characters in front implement what is
# called a negative look ahead

# object_detection_pattern=^(?!potted plant|pottedplant|bench|broccoli)
#alpr_detection_pattern=^(.*x11)
#delete_after_analyze=no
#detection_pattern=.*
#import_zm_zones=yes

# polygon areas where object detection will be done.
# You can name them anything except the keywords defined in the optional
# params below. You can put as many polygons as you want per [monitor-<mid>]
# (see examples).

my_driveway=306,356 1003,341 1074,683 154,715

# You are now allowed to specify detection pattern per zone
# the format is <polygonname>_zone_detection_pattern=<regexp>
# So if your polygon is called my_driveway, its associated
# detection pattern will be my_driveway_zone_detection_pattern
# If none is specified, the value in object_detection_pattern
# will be used
# This also applies to ZM zones. Let's assume you have
# import_zm_zones=yes and let's suppose you have a zone in ZM
# called Front_Door. In that case, all you need to do is put in a
# front_door_zone_detection_pattern=(person|car) here
#
# NOTE: ZM Zones are converted to lowercase, and spaces are replaced
# with underscores

my_driveway_zone_detection_pattern=(person)
some_other_area=0,0 200,300 700,900
# use license plate recognition for my driveway
# see alpr section later for more data needed
resize=no
detection_sequence=object,alpr


[ml]
# When enabled, you can specify complex ML inferencing logic in ml_sequence
# Anything specified in ml_sequence will override any other ml attributes

# Also, when enabled, stream_sequence will override any other frame related
# attributes
use_sequence = yes

# if enabled, will not grab exclusive locks before running inferencing
# locking seems to cause issues on some unique file systems
disable_locks= no

# Chain of frames
# See https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#understanding-detection-configuration
# Also see https://pyzm.readthedocs.io/en/latest/source/pyzm.html#pyzm.ml.detect_sequence.DetectSequence.detect_stream
# Very important: Make sure final ending brace is indented
stream_sequence = {
        'frame_strategy': 'most_models',
        'frame_set': 'snapshot,alarm',
        'contig_frames_before_error': 5,
        'max_attempts': 3,
        'sleep_between_attempts': 4,
                'resize':800

    }

# Chain of ML models to use
# See https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#understanding-detection-configuration
# Also see https://pyzm.readthedocs.io/en/latest/source/pyzm.html#pyzm.ml.detect_sequence.DetectSequence
# Very important: Make sure final ending brace is indented
ml_sequence= {
                'general': {
                        'model_sequence': 'object,face,alpr',
            'disable_locks': '{{disable_locks}}',

                },
                'object': {
                        'general':{
                                'pattern':'{{object_detection_pattern}}',
                                'same_model_sequence_strategy': 'first' # also 'most', 'most_unique's
                        },
                        'sequence': [{
                                #First run on TPU with higher confidence
                                'object_weights':'{{tpu_object_weights_mobiledet}}',
                                'object_labels': '{{tpu_object_labels}}',
                                'object_min_confidence': {{tpu_min_confidence}},
                                'object_framework':'{{tpu_object_framework}}',
                                'tpu_max_processes': {{tpu_max_processes}},
                                'tpu_max_lock_wait': {{tpu_max_lock_wait}},
                'max_detection_size':'{{max_detection_size}}'


                        },
                        {
                                # YoloV4 on GPU if TPU fails (because sequence strategy is 'first')
                                'object_config':'{{yolo4_object_config}}',
                                'object_weights':'{{yolo4_object_weights}}',
                                'object_labels': '{{yolo4_object_labels}}',
                                'object_min_confidence': {{object_min_confidence}},
                                'object_framework':'{{yolo4_object_framework}}',
                                'object_processor': '{{yolo4_object_processor}}',
                                'gpu_max_processes': {{gpu_max_processes}},
                                'gpu_max_lock_wait': {{gpu_max_lock_wait}},
                                'cpu_max_processes': {{cpu_max_processes}},
                                'cpu_max_lock_wait': {{cpu_max_lock_wait}},
                'max_detection_size':'{{max_detection_size}}'

                        }]
                },
                'face': {
                        'general':{
                                'pattern': '{{face_detection_pattern}}',
                                'same_model_sequence_strategy': 'first'
                        },
                        'sequence': [{
                                'save_unknown_faces':'{{save_unknown_faces}}',
                                'save_unknown_faces_leeway_pixels':{{save_unknown_faces_leeway_pixels}},
                                'face_detection_framework': '{{face_detection_framework}}',
                                'known_images_path': '{{known_images_path}}',
                                'unknown_images_path': '{{unknown_images_path}}',
                                'face_model': '{{face_model}}',
                                'face_train_model': '{{face_train_model}}',
                                'face_recog_dist_threshold': '{{face_recog_dist_threshold}}',
                                'face_num_jitters': '{{face_num_jitters}}',
                                'face_upsample_times':'{{face_upsample_times}}',
                                'gpu_max_processes': {{gpu_max_processes}},
                                'gpu_max_lock_wait': {{gpu_max_lock_wait}},
                                'cpu_max_processes': {{cpu_max_processes}},
                                'cpu_max_lock_wait': {{cpu_max_lock_wait}},
                                'max_size':800
                        }]
                },

                'alpr': {
                        'general':{
                                'same_model_sequence_strategy': 'first',
                                'pre_existing_labels':['car', 'motorbike', 'bus', 'truck', 'boat'],
                                'pattern': '{{alpr_detection_pattern}}'

                        },
                        'sequence': [{
                                'alpr_api_type': '{{alpr_api_type}}',
                                'alpr_service': '{{alpr_service}}',
                                'alpr_key': '{{alpr_key}}',
                                'platrec_stats': '{{platerec_stats}}',
                                'platerec_min_dscore': {{platerec_min_dscore}},
                                'platerec_min_score': {{platerec_min_score}},
                                'max_size':1600
                        }]
                }
        }

root@ZoneMinder:~#
titof2375
Posts: 66
Joined: Sat Dec 12, 2020 2:32 pm

Re: I am lost in the configuration of zmeventnotification

Post by titof2375 »

Code: Select all

root@ZoneMinder:~# cat /etc/zm/zmeventnotification.ini
# Configuration file for zmeventnotification.pl
[general]

secrets = /etc/zm/secrets.ini
base_data_path=/var/lib/zmeventnotification

# The ES now supports a means for a special kind of
# websocket connection which can dynamically control ES
# behaviour
# Default is no
use_escontrol_interface=no

# this is where all escontrol admin overrides
# will be stored.
escontrol_interface_file=/var/lib/zmeventnotification/misc/escontrol_interface.dat

# the password for accepting control interfaces
escontrol_interface_password=!ESCONTROL_INTERFACE_PASSWORD

# If you see the ES getting 'stuck' after several hours
# see https://rt.cpan.org/Public/Bug/Display.html?id=131058
# You can use restart_interval to have it automatically restart
# every X seconds. (Default is 7200 = 2 hours) Set to 0 to disable this.
# restart_interval = 432000
restart_interval = 0

# list of monitors which ES will ignore
# Note that there is an attribute later that does
# not process hooks for specific monitors. This one is different
# It can be used to completely skip ES processing for the
# monitors defined
# skip_monitors = 2,3,4

[network]
# Port for Websockets connection (default: 9000).
port = 9000

# Address for Websockets server (default: [::]).
# If you are facing connection issues or SSL issues, put in your IP here
# If you want to listen to multiple interfaces try 0.0.0.0

#address = 1.2.3.4

[auth]
# Check username/password against ZoneMinder database (default: yes).
enable = yes

# Authentication timeout, in seconds (default: 20).
timeout = 20

[push]
# This is to enable sending push notifications via any 3rd party service.
# Typically, if you enable this, you might want to turn off fcm
# Note that zmNinja will only receive notifications via FCM, but other 3rd
# party services have their own apps to get notifications
use_api_push = no

# This is the script that will send the notification
# Some sample scripts are provided, write your own
# Each script gets:
# arg1 - event ID
# arg2 - Monitor ID
# arg3 - Monitor Name
# arg4 - alarm cause
# arg5 - Type of event (event_start or event_end)
# arg6 (optional) - image path

api_push_script=/var/lib/zmeventnotification/bin/pushapi_pushover.py

[fcm]
# Use FCM for messaging (default: yes).
enable = yes

# Use the new FCM V1 protocol (recommended)
use_fcmv1 = yes

# if yes, will replace notifications with the latest one
# default: no
replace_push_messages = no

# Custom FCM API key. Uncomment if you are using
# your own API key (most people will not need to uncomment)
# api_key =

# Auth token store location (default: /var/lib/zmeventnotification/push/tokens.txt).
token_file = {{base_data_path}}/push/tokens.txt

# Date format to use when sending notification
# over push (FCM)
# See https://metacpan.org/pod/POSIX::strftime::GNU
# For example, a 24 hr format would be
#date_format = %H:%M, %d-%b

date_format = %I:%M %p, %d-%b

# Set priority for android push. Default is default.
# You can set it to default, min, low, high or max
# There is weird foo going on here. If you set it to high,
# and don't interact with push, users report after a while they
# get delayed by Google. I haven't quite figured out what is the precise
# value to put here to make sure it always reaches you. Also make sure
# you read the zmES faq on delayed push
fcm_android_priority = default

# Use MQTT for messaging (default: no)
[mqtt]
enable = yes
# Allow you to set a custom MQTT topic name
# default: zoneminder
#topic = my topic name

# MQTT server (default: 127.0.0.1)
server = 192.168.1.3

# Authenticate to MQTT server as user
username = MQTT

# Password
password = mqtt

# Set retain flag on MQTT messages (default: no)
retain = yes

# MQTT over TLS
# Location of the MQTT broker CA certificate. Uncommenting this line will enable MQTT over TLS.
# tls_ca = /config/certs/ca.pem

# To enable 2-way TLS, add a client certificate and private key
# Location to client certificate and private key
# tls_cert = /config/es-pub.pem
# tls_key = /config/es-key.pem

# To allow insecure TLS (disable peer verifier), (default: no)
# tls_insecure = yes



[ssl]
# Enable SSL (default: yes)
enable = yes

cert = /etc/zm/apache2/ssl/zoneminder.crt
key = /etc/zm/apache2/ssl/zoneminder.key

#cert = /etc/apache2/ssl/zoneminder.crt
#key = /etc/apache2/ssl/zoneminder.key

# Location to SSL cert (no default).
# cert = /etc/apache2/ssl/yourportal/zoneminder.crt

# Location to SSL key (no default).
# key = /etc/apache2/ssl/yourportal/zoneminder.key

[customize]
# Link to json file that has rules which can be customized
# es_rules=/etc/zm/es_rules.json

# Display messages to console (default: no).
# Note that you can keep this to no and just
# use --debug when running from CLI too
console_logs = yes
# debug level for ES messages. Default 4. Note that this is
# not controllable by ZM LOG_DEBUG_LEVEL as in Perl, ZM doesn't
# support debug levels
es_debug_level = 4

# Interval, in seconds, after which we will check for new events (default: 5).
event_check_interval = 5

# Interval, in seconds, to reload known monitors (default: 300).
monitor_reload_interval = 300

# Read monitor alarm cause (Requires ZoneMinder >= 1.31.2, default: no)
# Enabling this to 1 for lower versions of ZM will result in a crash
read_alarm_cause = yes

# Tag event IDs with the alarm (default: no).
tag_alarm_event_id = yes

# Use custom notification sound (default: no).
use_custom_notification_sound = no

# include picture in alarm (default: no).
include_picture = yes


# send event start notifications (default: yes)
# If no, starting notifications will not be sent out
send_event_start_notification = yes

# send event end notifications (default: no)
# Note that if you are using hooks for end notifications, they may change
# the final decision. This needs to be yes if you want end notifications with
# or without hooks
send_event_end_notification = yes

# URL to access the event image
# This URL can be anything you want
# What I've put here is a way to extract an image with the highest score given an eventID (even one that is recording)
# This requires the latest version of index.php which was merged on Oct 9, 2018 and may only work in ZM 1.32+
# https://github.com/ZoneMinder/zoneminde ... /index.php
# If you use this URL as I've specified below, keep the EVENTID phrase intact.
# The notification server will replace it with the correct eid of the alarm

# BESTMATCH should be used only if you are using bestmatch for FID in detect_wrapper.sh
# objdetect is ONLY available in ZM 1.33+
# objdetect_mp4 and objdetect_gif are ONLY available
# in ZM 1.35+
picture_url = !ZMES_PICTURE_URL
picture_portal_username=admin
picture_portal_password=admin

# This is a master on/off setting for hooks. If it is set to no
# hooks will not be used no matter what is set in the [hook] section
# This makes it easy for folks not using hooks to just turn this off
# default:no

use_hooks = yes

[hook]

# NOTE: This entire section is only valid if use_hooks is yes above

# Shell script name here to be called every time an alarm is detected
# the script will get passed $1=alarmEventID, $2=alarmMonitorId
# $3 monitor Name, $4 alarm cause
# script needs to return 0 to send alarm (default: none)
#

# This script is called when an event first starts. If the script returns "0"
# (success), then a notification is sent to channels specified in
# event_start_notify_on_hook_success. If the script returns "1" (fail)
# then a notification is sent to channels specified in
# event_start_notify_on_hook_fail
event_start_hook = '{{base_data_path}}/bin/zm_event_start.sh'

#This script is called after event_start_hook completes. You can do
# your housekeeping work here
#event_start_hook_notify_userscript = '{{base_data_path}}/contrib/example.py'


# This script is called when an event ends. If the script returns "0"
# (success), then a notification is sent to channels specified in
# event_end_notify_on_hook_success. If the script returns "1" (fail)
# then a notification is sent to channels specified in
# event_end_notify_on_hook_fail
event_end_hook = '{{base_data_path}}/bin/zm_event_end.sh'

#This script is called after event_end_hook completes. You can do
# your housekeeping work here
#event_end_hook_notify_userscript = '{{base_data_path}}/contrib/example.py'


# Possible channels = web,fcm,mqtt,api
# all is short for web,fcm,mqtt,api
# use none for no notifications, or comment out the attribute

# When an event starts and hook returns 0, send notification to all. Default: none
event_start_notify_on_hook_success = all

# When an event starts and hook returns 1, send notification only to desktop. Default: none
event_start_notify_on_hook_fail = none

# When an event ends and hook returns 0, send notification to fcm,web,api. Default: none
event_end_notify_on_hook_success = fcm,web,api

# When an event ends and hook returns 1, don't send notifications. Default: none
event_end_notify_on_hook_fail = none
#event_end_notify_on_hook_fail = web

# Since event_end and event_start are two different hooks, it is entirely possible
# that you can get an end notification but not a start notification. This can happen
# if your start script returns 1 but the end script returns 0, for example. To avoid
# this, set this to yes (default:yes)
event_end_notify_if_start_success = yes

# If yes, the text returned by the script
# overwrites the alarm header
# useful if your script is detecting people, for example
# and you want that to be shown in your notification (default:yes)
use_hook_description = yes

# If yes, will append an [a] for alarmed frame match
# [s] for snapshot match or [x] if not using bestmatch
# really only a debugging feature but useful to know
# where object detection is working or failing
keep_frame_match_type = yes

# list of monitors for which hooks will not run
# hook_skip_monitors = 2


# if enabled, will pass the right folder for the hook script
# to store the detected image, so it shows up in ZM console view too
# Requires ZM >=1.33. Don't enable this if you are running an older version

# Note: you also need to set write_image_to_zm=yes in objectconfig.ini
# default: no
hook_pass_image_path = yes


root@ZoneMinder:~#
tsp84
Posts: 227
Joined: Thu Dec 24, 2020 4:04 am

Re: I am lost in the configuration of zmeventnotification

Post by tsp84 »

If you need any more help getting push notifications to zmNinja, just post again. Glad we at least got it working locally.
Magic919
Posts: 1381
Joined: Wed Sep 18, 2013 6:56 am

Re: I am lost in the configuration of zmeventnotification

Post by Magic919 »

The MQTT bit should be simple.
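For a quick sanity check, something like the mosquitto_sub command below should show the alarm messages arriving. The broker address, user, password and default topic are taken from the zmeventnotification.ini posted above; mosquitto-clients has to be installed, and I haven't tested this on your setup.

Code: Select all

mosquitto_sub -h 192.168.1.3 -u MQTT -P mqtt -t 'zoneminder/#' -v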

Well done tsp84.
-
titof2375
Posts: 66
Joined: Sat Dec 12, 2020 2:32 pm

Re: I am lost in the configuration of zmeventnotification

Post by titof2375 »

Thanks to both of you, @tsp84 and @Magic919, for your help.
I don't need face detection because my cameras are not good enough quality; object detection suits me well, although I would also like dogs to be detected.
MQTT works well; on the other hand, I noticed this camera is not great at night.
Now I need to add another hard drive for backups and to get outside access working through my nginx reverse proxy.
titof2375
Posts: 66
Joined: Sat Dec 12, 2020 2:32 pm

Re: I am lost in the configuration of zmeventnotification

Post by titof2375 »

So the three principal files are:
--- zmeventnotification.pl
--- objectconfig.ini
--- zmeventnotification.ini
Is that it?
tsp84
Posts: 227
Joined: Thu Dec 24, 2020 4:04 am

Re: I am lost in the configuration of zmeventnotification

Post by tsp84 »

Yes, those are the 3 main files to configure the event server.

If you want different objects detected, you can change this line in objectconfig.ini:

Code: Select all

object_detection_pattern=(person|car|motorbike|bus|truck|boat)

Code: Select all

object_detection_pattern=(person|car|motorbike|bus|truck|boat|cat|dog)
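If you only want the extra classes on one camera, the comments at the top of objectconfig.ini say any parameter can be overridden per monitor, so something roughly like this should work (monitor ID 1 is just an example here):

Code: Select all

[monitor-1]
# override the global pattern for this monitor only
object_detection_pattern=(person|car|dog|cat)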
zmeventnotification.pl is the Perl script that runs the event server.

The .ini files configure the event server.
Last edited by tsp84 on Sat Mar 20, 2021 3:23 pm, edited 1 time in total.
tsp84
Posts: 227
Joined: Thu Dec 24, 2020 4:04 am

Re: I am lost in the configuration of zmeventnotification

Post by tsp84 »

Your reverse proxy setup looked OK from what I saw. For the zmNinja app to work on your phone, you need to open port 9000 on your firewall and forward it to 195.168.1.5.

That is port forwarding on the firewall. I don't know whether the "stream" option in nginx proxy manager would work for websockets; I'm unfamiliar with nginx proxy manager.
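If you want to try it with plain nginx instead of a firewall rule, a raw TCP forward of the ES port would look roughly like the block below. This is an untested sketch: it needs nginx built with the stream module, and the proxy_pass address is a placeholder for whatever host runs zmeventnotification.pl.

Code: Select all

# goes at the top level of nginx.conf, alongside (not inside) the http {} block
stream {
    server {
        listen 9000;
        # placeholder address: the machine running the event server
        proxy_pass 192.168.1.5:9000;
    }
}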
Magic919
Posts: 1381
Joined: Wed Sep 18, 2013 6:56 am

Re: I am lost in the configuration of zmeventnotification

Post by Magic919 »

If he's using MQTT he can possibly avoid that. I don't expose 9000 on my setup and use Pushover to notify.
-
titof2375
Posts: 66
Joined: Sat Dec 12, 2020 2:32 pm

Re: I am lost in the configuration of zmeventnotification

Post by titof2375 »

Magic919 wrote: Sat Mar 20, 2021 3:59 pm If he’s using MQTT he can possibly avoid that. I don’t expose 9000 on my set up and use Pushover to notify.
I use Telegram for the notifications from my home automation.
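For what it's worth, the [push] section of the zmeventnotification.ini you posted can call a custom script with the event details (use_api_push / api_push_script), so Telegram could be wired in there as well. This is only an untested sketch: BOT_TOKEN and CHAT_ID are placeholders you would create via Telegram's @BotFather, and the argument order follows the comments in that section.

Code: Select all

#!/bin/bash
# hypothetical api_push_script that forwards an ES notification to Telegram
# args per zmeventnotification.ini: $1=eventId $2=monitorId $3=monitorName $4=cause $5=event type $6=optional image path
BOT_TOKEN="123456:replace-me"
CHAT_ID="replace-me"
TEXT="ZoneMinder: $4 on $3 (event $1, $5)"
if [ -n "$6" ] && [ -f "$6" ]; then
    # send the detection image with the text as its caption
    curl -s -F chat_id="$CHAT_ID" -F caption="$TEXT" -F photo=@"$6" \
        "https://api.telegram.org/bot${BOT_TOKEN}/sendPhoto" > /dev/null
else
    curl -s -d chat_id="$CHAT_ID" --data-urlencode text="$TEXT" \
        "https://api.telegram.org/bot${BOT_TOKEN}/sendMessage" > /dev/null
fi
To try it, you would set use_api_push = yes and point api_push_script at the (executable) script in the [push] section.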
titof2375
Posts: 66
Joined: Sat Dec 12, 2020 2:32 pm

Re: I am lost in the configuration of zmeventnotification

Post by titof2375 »

Hello. Following a crash of my Proxmox, my ZoneMinder was dead. I just reinstalled it and I end up with this error:

Code: Select all

DBG-2:2021-05-10,10:57:50 PARENT: ---------->Tick END (active forks:1, total forks:1, active hooks: 0)<--------------
DBG-2:2021-05-10,10:57:50 |----> FORK:camera devant (1), eid:20 rules: Checking rules for alarm caused by eid:20, monitor:1, at: Mon May 10 10:57:50 2021 with cause:Motion All
DBG-1:2021-05-10,10:57:50 |----> FORK:camera devant (1), eid:20 rules: No rules found for Monitor, allowing:1
DBG-1:2021-05-10,10:57:50 |----> FORK:camera devant (1), eid:20 Matching alarm to connection rules...
DBG-1:2021-05-10,10:57:50 |----> FORK:camera devant (1), eid:20 Checking alarm conditions for MQTT 192.168.1.3
DBG-1:2021-05-10,10:57:50 |----> FORK:camera devant (1), eid:20 Monitor 1 event: last time not found, so should send
DBG-1:2021-05-10,10:57:50 |----> FORK:camera devant (1), eid:20 token is unique, shouldSendEventToConn returned true, so calling sendEvent
DBG-2:2021-05-10,10:57:50 |----> FORK:camera devant (1), eid:20 isAllowedChannel: got type:event_start resCode:1
INF:2021-05-10,10:57:50 |----> FORK:camera devant (1), eid:20 Not sending over MQTT as notify filters are on_success:all and on_fail:none
10/05/2021 10:57:50.893387 zmeventnotification[1326].INF [main:1022] [|----> FORK:camera devant (1), eid:20 Not sending over MQTT as notify filters are on_success:all and on_fail:none]
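
If I am reading the zmeventnotification.ini you posted earlier correctly, that last INF line is just the notify-channel filter doing its job: "isAllowedChannel ... resCode:1" suggests the event_start hook did not return success, and with the settings below MQTT (part of "all") is only notified on hook success. So my guess - and it is only a guess - is that the hook side (zm_event_start.sh / the object detection install) is failing or missing after the reinstall.

Code: Select all

# from the earlier zmeventnotification.ini: with these values, MQTT is only
# notified when the start hook exits 0; a failing or missing hook sends nothing
use_hooks = yes
event_start_hook = '{{base_data_path}}/bin/zm_event_start.sh'
event_start_notify_on_hook_success = all
event_start_notify_on_hook_fail = none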

titof2375
Posts: 66
Joined: Sat Dec 12, 2020 2:32 pm

Re: I am lost in the configuration of zmeventnotification

Post by titof2375 »

I also have this error:
'zmeventnotification.pl' exited abnormally, exit status 255
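Exit status 255 usually just means the Perl script died early (a missing Perl module, a bad config value, SSL trouble, and so on). Running it in the foreground with --debug, as the comments in zmeventnotification.ini suggest, should print the real error. The paths below are the usual packaged locations and may differ on your install:

Code: Select all

# stop the copy that ZoneMinder manages, then run the ES in the foreground
sudo zmdc.pl stop zmeventnotification.pl
sudo -u www-data /usr/bin/zmeventnotification.pl --config /etc/zm/zmeventnotification.ini --debug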
alabamatoy
Posts: 349
Joined: Sun Jun 05, 2016 2:53 pm

Re: I am lost in the configuration of zmeventnotification

Post by alabamatoy »

tsp84 wrote: Sat Mar 20, 2021 3:22 pm In order for the zmNinja app to work on your phone you need to open port 9000 on your firewall and forward it to 195.168.1.5
I believe this is incorrect. I do not have port 9000 open through my firewall, yet my zmNinja works fine with ZM over 443. I have 2 different installations, both set up the same way (ZM on Ubuntu with Apache2 behind a separate router/firewall). The only port I have open (for ZM) is 443, and it works nicely with zmNinja on both sites.