ZMeventnotification - cannot convert float NaN to integer

Discussion topics related to mobile applications and ZoneMinder Event Server (including machine learning)
Posts: 26
Joined: Wed May 26, 2010 5:55 pm

ZMeventnotification - cannot convert float NaN to integer

Post by JonMoore » Mon Jun 14, 2021 3:27 pm


Has anyone had issues with this error before? I've been doing some housekeeping and updating and have broken my notification server. I worked through a lot of it with the excellent docs (and my previous posts, to remind me of the changes I needed to make for running in a TrueNAS jail).

Turning on debug logs and checking ZMDetect.log turned up this sequence:

Code:

06/14/21 16:15:31 zmesdetect[61260] DBG1 [perf: Starting for frame:snapshot]
06/14/21 16:15:31 zmesdetect[61260] DBG1 [Sequence of detection types to execute: ['object', 'face']]
06/14/21 16:15:31 zmesdetect[61260] DBG1 [============ Frame: snapshot Running object detection type in sequence ==================]
06/14/21 16:15:31 zmesdetect[61260] DBG2 [Loading sequence: index:0]
06/14/21 16:15:31 zmesdetect[61260] DBG2 [Initializing model  type:object with options:{'object_config': '/usr/local/lib/zmeventnotification/models/yolov4/yolov4.cfg', 'object_weights': '/usr/local/lib/zmeventnotification/models/yolov4/yolov4.weights', 'object_labels': '/usr/local/lib/zmeventnotification/models/yolov4/coco.names', 'object_min_confidence': 0.3, 'object_framework': 'opencv', 'object_processor': 'cpu', 'disable_locks': 'yes'}]
06/14/21 16:15:31 zmesdetect[61260] DBG3 [object has a same_model_sequence strategy of first]
06/14/21 16:15:31 zmesdetect[61260] DBG3 [--------- Frame:snapshot Running variation: #1 -------------]
06/14/21 16:15:31 zmesdetect[61260] DBG1 [|--------- Loading "Yolo" model from disk -------------|]
06/14/21 16:15:31 zmesdetect[61260] DBG1 [perf: processor:cpu Yolo initialization (loading /usr/local/lib/zmeventnotification/models/yolov4/yolov4.weights model from disk) took: 5.17 ms]
06/14/21 16:15:31 zmesdetect[61260] DBG1 [Using CPU for detection]
06/14/21 16:15:31 zmesdetect[61260] DBG1 [|---------- YOLO (input image: 800w*600h, model resize dimensions: 416w*416h) ----------|]
06/14/21 16:15:32 zmesdetect[61260] DBG1 [perf: processor:cpu Yolo detection took: 1061.68 ms]
06/14/21 16:15:33 zmesdetect[61260] ERR [Error running model: cannot convert float NaN to integer]
06/14/21 16:15:33 zmesdetect[61260] DBG2 [Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/pyzm/ml/", line 683, in detect_stream
    _b,_l,_c,_m = m.detect(image=frame)
  File "/usr/local/lib/python3.8/site-packages/pyzm/ml/", line 58, in detect
    b,l,c,_model_names = self.model.detect(image)
  File "/usr/local/lib/python3.8/site-packages/pyzm/ml/", line 191, in detect
    center_x = int(detection[0] * Width)
ValueError: cannot convert float NaN to integer
It seems like it's something to do with the image size being passed to the model, possibly some sort of variable type issue? I wondered if it was because the sizes have "w" and "h" appended to them?

Very grateful for any help!


Posts: 96
Joined: Thu Dec 24, 2020 4:04 am

Re: ZMeventnotification - cannot convert float NaN to integer

Post by tsp84 » Mon Jun 14, 2021 7:27 pm

Try running it with disable_locks = no.
From what I can see, it looks like it's failing when it tries to get a portalock, and I see you have disable_locks = yes.

Er wait, never mind, it does seem to happen when it's resizing.

Posts: 26
Joined: Wed May 26, 2010 5:55 pm

Re: ZMeventnotification - cannot convert float NaN to integer

Post by JonMoore » Tue Jun 15, 2021 1:37 pm

I thought it might be something in the config, so I've posted my objectconfig.ini below, although it feels like it might be something else to me. I've disabled resize to see if that helps, but it didn't make a difference.

Code:

# Configuration file for object detection

# NOTE: ALL parameters here can be overridden
# on a per monitor basis if you want. Just
# duplicate it inside the correct [monitor-<num>] section

# You can create your own custom attributes in the [custom] section


# Please don't change this. It is used by the config upgrade script

# You can now limit the # of detection process
# per target processor. If not specified, default is 1
# Other detection processes will wait to acquire lock


# Time to wait in seconds per processor to be free, before
# erroring out. Default is 120 (2 mins)


# This is an optional file
# If specified, you can specify tokens with secret values in that file
# and only refer to the tokens in your main config file
secrets = /usr/local/etc/zoneminder/secrets.ini

# portal/user/password are needed if you plan on using ZM's legacy
# auth mechanism to get images

# api portal is needed if you plan to use tokens to get images
# requires ZM 1.33 or above

# if yes, last detection will be stored for monitors
# and bounding boxes that match, along with labels
# will be discarded for new detections. This may be helpful
# in getting rid of static objects that get detected
# due to some motion. 
# The max difference in area between the objects if match_past_detection is on
# can also be specified in px like 300px. Default is 5%. Basically, bounding boxes of the same
# object can differ ever so slightly between detections. Contributor @neillbell put in this PR
# to calculate the difference in areas and based on his tests, 5% worked well. YMMV. Change it if needed.
# Note: You can specify label/object specific max_diff_areas as well. If present, they override this value
# example: 
# person_past_det_max_diff_area=5%
# car_past_det_max_diff_area=5000px

# this is the maximum size a detected object can have. You can specify it in px or % just like past_det_max_diff_area 
# This is pretty useful to eliminate bogus detection. In my case, depending on shadows and other lighting conditions, 
# I sometimes see "car" or "person" detected that covers most of my driveway view. That is practically impossible 
# and therefore I set mine to 70% because I know any valid detected object cannot be larger than that area


# sequence of models to run for detection
# if all, then we will loop through all models
# if first then the first success will break out

# If you need basic auth to access ZM 

# base data path for various files the ES+OD needs
# we support in config variable substitution as well

# global settings for 
# bestmatch, alarm, snapshot OR a specific frame ID

# this is the size to resize the image to before analysis is done
# set to yes, if you want to remove images after analysis
# setting to yes is recommended to avoid filling up space
# keep to no while debugging/inspecting masks
# Note this does NOT delete debug images later

# If yes, will write an image called <filename>-bbox.jpg as well
# which contains the bounding boxes. This has NO relation to 
# write_image_to_zm 
# Typically, if you enable delete_after_analyze you may
# also want to set  write_debug_image to no. 

# if yes, will write an image with bounding boxes
# this needs to be yes to be able to write a bounding box
# image to ZoneMinder that is visible from its console

# Adds percentage to detections
# hog/face shows 100% always

# color to be used to draw the polygons you specified

# This section gives you an option to get brief animations 
# of the event, delivered as part of the push notification to mobile devices
# Animations are created only if an object is detected
# NOTE: This will DELAY the time taken to send you push notifications
# It will first try to create the animation, which may take up to a minute
# depending on how soon it gets access to frames. See notes below


# If yes, object detection will attempt to create 
# a short GIF file around the object detection frame
# that can be sent via push notifications for instant playback
# Note this requires additional software support. Default:no

# Format of animation burst
# valid options are "mp4", "gif", "mp4,gif"
# Note that gifs will be of a shorter duration
# as they take up much more disk space than mp4

# default width of animation image. Be cautious when you increase this
# most mobile platforms give a very brief amount of time (in seconds) 
# to download the image.
# Given your ZM instance will be serving the image, it will be slow anyway.
# Making the total animation size bigger resulted in the notification not
# getting an image at all (timed out)

# When an event is detected, ZM writes frames a little late
# On top of that, it looks like with caching enabled, the API layer doesn't
# get access to DB records for much longer (around 30 seconds), at least on my 
# system. animation_retry_sleep refers to how long to wait before trying to grab
# frame information if it failed. animation_max_tries defines how many times it 
# will try and retrieve frames before it gives up

# if animation_types is gif then we can generate a fast preview gif
# every second frame is skipped and the frame rate doubled
# to give a quick preview. Default (no)

# You can now run the machine learning code on a different server
# This frees up your ZM server for other things
# To do this, you need to set up
# on your desired server and configure it with a user. See its instructions
# once set up, you can choose to do object/face recognition via that
# external server

# URL that will be used
# API/password for remote gateway

# config for object

# If you are using legacy format (use_sequence=no) then these parameters will 
# be used during ML inferencing

# If you are using the new ml_sequence format (use_sequence=yes) then 
# you can fiddle with these parameters and look at ml_sequence later
# Note that these can be named anything. You can add custom variables, ad-infinitum

# Google Coral
# The mobiledet model came out in Nov 2020 and is supposed to be faster and more accurate but YMMV

# Yolo v4 on GPU (falls back to CPU if no GPU)

# Yolo v3 on GPU (falls back to CPU if no GPU)

# Tiny Yolo V4 on GPU (falls back to CPU if no GPU)


# read
# read
# and play around

# quick overview: 
# num_jitters is how many times to distort images 
# upsample_times is how many times to upsample input images (for small faces, for example)
# model can be hog or cnn. cnn may be more accurate, but I haven't found it to be 


# This is maximum distance of the face under test to the closest matched
# face cluster. The larger this distance, larger the chances of misclassification.
# When we are first training the face recognition model with known faces,
# by default we use hog because we assume you will supply well lit, front facing faces
# However, if you are planning to train with profile photos or hard to see faces, you
# may want to change this to cnn. Note that this increases training time, but training only
# happens once, unless you retrain again by removing the training model
#if a face doesn't match known names, we will detect it as 'unknown face'
# you can change that to something that suits your personality better ;-)

# Many of the ALPR providers offer both a cloud version
# and local SDK version. Sometimes local SDK format differs from
# the cloud instance. Set this to local or cloud. Default cloud

# -----| If you are using plate recognizer | ------

# If you want to host a local SDK
# Plate recog replace with your api key
# if yes, then it will log usage statistics of the ALPR service
# If you want to specify regions. See
# minimal confidence for actually detecting a plate
# minimal confidence for the translated text

# ----| If you are using openALPR |-----

# For an explanation of params, see
# openalpr returns percents, but we convert to between 0 and 1

# ----| If you are using openALPR command line |-----


# Do an alpr -help to see options, plug them in here
# like say '-j -p ca -c US' etc.
# keep the -j because it's JSON

# Note that alpr_pattern is honored
# For the rest, just stuff them in the cmd line options

openalpr_cmdline_params=-j -d

## Monitor specific settings

# Examples:
# Let's assume your monitor ID is 999
# my driveway

# Advanced example - here we want anything except potted plant
# exclusion in regular expressions is not
# as straightforward as you may think, so 
# follow this pattern
# object_detection_pattern = ^(?!object1|object2|objectN)
# the characters in front implement what is 
# called a negative look ahead

# object_detection_pattern=^(?!potted plant|pottedplant|bench|broccoli)

# polygon areas where object detection will be done.
# You can name them anything except the keywords defined in the optional
# params below. You can put as many polygons as you want per [monitor-<mid>]
# (see examples).

#my_driveway=306,356 1003,341 1074,683 154,715

# You are now allowed to specify detection pattern per zone
# the format is <polygonname>_zone_detection_pattern=<regexp>
# So if your polygon is called my_driveway, its associated
# detection pattern will be my_driveway_zone_detection_pattern
# If none is specified, the value in object_detection_pattern 
# will be used
# This also applies to ZM zones. Let's assume you have 
# import_zm_zones=yes and let's suppose you have a zone in ZM
# called Front_Door. In that case, all you need to do is put in a 
# front_door_zone_detection_pattern=(person|car) here
# NOTE: ZM Zones are converted to lowercase, and spaces are replaced
# with underscores

#some_other_area=0,0 200,300 700,900
# use license plate recognition for my driveway
# see alpr section later for more data needed

# When enabled, you can specify complex ML inferencing logic in ml_sequence
# Anything specified in ml_sequence will override any other ml attributes

# Also, when enabled, stream_sequence will override any other frame related
# attributes 
use_sequence = yes

# if enabled, will not grab exclusive locks before running inferencing
# locking seems to cause issues on some unique file systems
disable_locks= yes

# Chain of frames 
# See
# Also see
# Very important: Make sure final ending brace is indented 
stream_sequence = {
		'frame_strategy': 'most_models',
		'frame_set': 'snapshot,alarm',
		'contig_frames_before_error': 5,
		'max_attempts': 3,
		'sleep_between_attempts': 4,
	}


# Chain of ML models to use
# See
# Also see
# Very important: Make sure final ending brace is indented 
ml_sequence= {
		'general': {
			'model_sequence': 'object,face',
			'disable_locks': 'yes',
			'match_past_detections': 'yes',
			'past_det_max_diff_area': '5%',
			#'car_past_det_max_diff_area': '10%',
			#'ignore_past_detection_labels': ['dog', 'cat']
		},

		'object': {
			'same_model_sequence_strategy': 'first', # also 'most', 'most_unique'
			'sequence': [{
				# YoloV4 on GPU if TPU fails (because sequence strategy is 'first')
				'object_labels': '{{base_data_path}}/models/yolov4/coco.names',
				'object_min_confidence': 0.3,
				'object_processor': 'cpu'
			}]
		},

		'face': {
			'pattern': '.*',
			'same_model_sequence_strategy': 'union', # combines all outputs of this sequence
			'sequence': [{
				'name': 'DLIB based face recognition',
				'enabled': 'yes',
				#'pre_existing_labels': ['face'], # If you use TPU detection first, we can run this ONLY if TPU detects a face first
				'unknown_images_path': '{{base_data_path}}/unknown_faces',
				'known_images_path': '{{base_data_path}}/known_faces',
				'face_detection_framework': 'dlib',
				'face_model': 'cnn',
				'face_train_model': 'cnn',
				'face_recog_dist_threshold': 0.6,
				'face_num_jitters': 1,
				'face_upsample_times': 1
			}]
		}
	}
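As an aside, one quick way I found to catch syntax slips in the stream_sequence / ml_sequence blocks above (assuming the ES parses them as Python literals, which the "make sure the final ending brace is indented" note seems to imply) is to run them through `ast.literal_eval` before restarting anything. The `check_sequence` helper here is just my own sketch, not part of the ES:

```python
import ast

def check_sequence(text):
    """Hypothetical sanity checker: returns 'OK' if the text parses
    as a Python literal, otherwise a short description of the error."""
    try:
        ast.literal_eval(text)
        return "OK"
    except (SyntaxError, ValueError) as e:
        return f"parse error: {e}"

good = "{'frame_strategy': 'most_models', 'frame_set': 'snapshot,alarm'}"
bad = "{'frame_strategy': 'most_models', 'frame_set': 'snapshot,alarm'"  # missing brace

print(check_sequence(good))  # OK
print(check_sequence(bad))
```

A missing brace or comma shows up immediately instead of failing at detection time.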

Posts: 96
Joined: Thu Dec 24, 2020 4:04 am

Re: ZMeventnotification - cannot convert float NaN to integer

Post by tsp84 » Tue Jun 15, 2021 10:07 pm

Once I get home I'll sit down and try and help you out

Posts: 1511
Joined: Sun Mar 01, 2015 12:12 pm

Re: ZMeventnotification - cannot convert float NaN to integer

Post by asker » Wed Jun 16, 2021 10:05 am

Code:

    _b,_l,_c,_m = m.detect(image=frame)
  File "/usr/local/lib/python3.8/site-packages/pyzm/ml/", line 58, in detect
    b,l,c,_model_names = self.model.detect(image)
  File "/usr/local/lib/python3.8/site-packages/pyzm/ml/", line 191, in detect
    center_x = int(detection[0] * Width)
ValueError: cannot convert float NaN to integer
Two things could be happening:
a) Width is invalid. This is unlikely, because your logs show the image was passed correctly

Code:

06/14/21 16:15:31 zmesdetect[61260] DBG1 [|---------- YOLO (input image: 800w*600h, model resize dimensions: 416w*416h) ----------|]
and your config is not triggering any resize; if it did, the resize would show in the logs. So it is not resize related either.

b) Your detections are getting messed up. That is, detection[0] doesn't exist. There could be several reasons:
b.1) Your config and model files did not download properly for yolov4 (possible, I've seen it some times)
b.2) You have disable_locks=yes, which can mess things up (more likely)

Enable the locks and try.
If that doesn't work, re-download the yolo models and config files
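For reference, the failure itself is reproducible in plain Python: `int()` happily truncates a normal float, but raises exactly this ValueError on NaN. So if corrupt weights make the network emit NaN scores, the `center_x = int(detection[0] * Width)` line blows up:

```python
import math

# detection[0] is a normalized (0..1) x-coordinate from YOLO;
# scaling it back to pixels is fine for real numbers...
width = 800
print(int(0.5 * width))   # 400

# ...but NaN (e.g. from corrupt weights) cannot be truncated:
try:
    int(math.nan * width)
except ValueError as e:
    print(e)              # cannot convert float NaN to integer
```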
Please don't ask me questions via PM. Please post in these forums or GitHub.

Please read before posting:
How to set up logging properly
How to troubleshoot and report - ES
How to troubleshoot and report - zmNinja
ES docs
zmNinja docs

Posts: 26
Joined: Wed May 26, 2010 5:55 pm

Re: ZMeventnotification - cannot convert float NaN to integer

Post by JonMoore » Thu Jun 17, 2021 11:21 am

Ah, thanks for the help!

Tried setting the disable locks to no and it didn't make any difference.

Deleted the models folder, re-ran the download script to fetch them again, and everything just worked, so it must have been a corrupt model download.
Thanks so much
