Optimising SecuritySpy’s AI Object Detection

The new AI-powered motion detection features in SecuritySpy version 5 use deep neural networks to detect the presence of humans and vehicles. This allows for highly accurate triggering of recordings and notifications for just the events you are interested in.

The AI algorithms output a prediction probability, indicating the likelihood that a human or vehicle is present, and you can choose the threshold at which this triggers recording and notifications. Generally, a threshold of around 85% gives good results.
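As a rough illustration of how such a threshold works, the sketch below (in Python, with made-up names rather than SecuritySpy's actual API) compares per-class prediction probabilities against the suggested 85% value:

```python
# Illustrative only: the names and structure here are assumptions, not SecuritySpy's API.
TRIGGER_THRESHOLD = 0.85  # the suggested 85% starting point

def should_trigger(predictions: dict) -> bool:
    """Trigger when any object class (e.g. 'human', 'vehicle') meets the threshold."""
    return any(prob >= TRIGGER_THRESHOLD for prob in predictions.values())

print(should_trigger({"human": 0.92, "vehicle": 0.10}))  # True  -> record/notify
print(should_trigger({"human": 0.60, "vehicle": 0.40}))  # False -> ignore
```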

However, the accuracy of the AI depends on many factors, such as the distance to the subject, the lighting, and the resolution and quality of the camera. You might find that a threshold of 85% is letting through too many false-positive triggers, or, conversely, is preventing real motion from generating a trigger.

To see how the AI is performing on your system, create a folder on your Desktop called “SS AI Predictions”. Then, whenever a video frame is passed through the AI, SecuritySpy will annotate the frame with the motion area and prediction probabilities, and will save it to this folder as an image file. Inspecting these images allows you to determine what the AI is “seeing”, so you can adjust your trigger thresholds for optimum results on your system.
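If this folder fills up quickly, a small script can help you browse it. The sketch below simply lists the most recently saved files, newest first; the folder name comes from the step above, but the filenames are whatever SecuritySpy writes and are not assumed here:

```python
from pathlib import Path

# Folder created in the step above
folder = Path.home() / "Desktop" / "SS AI Predictions"

# Show the 20 most recently saved annotated frames
images = sorted(folder.iterdir(), key=lambda p: p.stat().st_mtime, reverse=True)
for path in images[:20]:
    print(path.name)
```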

Here are some examples of these annotated images (cropped to just the relevant area):

[Annotated example image 1]

[Annotated example image 2]

A high-quality setup with a high-resolution camera and good lighting is likely to result in highly accurate predictions that are close to 100%. In this case, you might like to increase the threshold value (perhaps to 90%) in order to cut out more false-positive detections. Conversely, if the image quality is not so good, the AI predictions are likely to be less certain, and you probably want to reduce the trigger threshold to make sure that you don’t miss any real motion.
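One way to pick a sensible value is to note down the probabilities shown on a batch of saved frames of genuine events and see how many each candidate threshold would let through. The values below are made up purely for illustration:

```python
# Probabilities read off your saved annotated frames (sample values, made up)
observed = [0.97, 0.93, 0.88, 0.72, 0.55, 0.91, 0.40, 0.86]

for threshold in (0.80, 0.85, 0.90, 0.95):
    passed = sum(p >= threshold for p in observed)
    print(f"{threshold:.0%}: {passed}/{len(observed)} frames would trigger")
```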

Note that in general, the AI algorithms work best with high-quality cameras. If you are using low-quality cameras, or your cameras are frequently operating in low lighting conditions and producing grainy/noisy images, it may be best to disable the AI and stick to standard motion detection.

Minimising CPU Usage

Initially, standard motion detection is performed on the incoming video stream to determine which frames to pass to the AI for further analysis. Because the AI can consume significant CPU resources, it is important to use suitable settings for this first-stage motion detection. Generally, this means using a sensitivity setting of 50-60% and a trigger time setting of at least 1 second.

[Screenshot: camera trigger settings in the Preferences window]
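Conceptually, the two-stage design looks something like the sketch below: a cheap frame-differencing check decides which frames are worth passing on to the expensive AI stage. The function names and thresholds are illustrative, not SecuritySpy internals:

```python
import numpy as np

PIXEL_DELTA = 25      # per-pixel change needed to count as "changed"
AREA_FRACTION = 0.02  # fraction of changed pixels needed to call it motion

def has_motion(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
    """First stage: cheap frame differencing on greyscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > PIXEL_DELTA).mean() > AREA_FRACTION

def analyse(prev_frame, frame, run_ai):
    """Second stage: only invoke the expensive AI when cheap motion is detected."""
    if has_motion(prev_frame, frame):
        return run_ai(frame)  # deep-network human/vehicle prediction
    return None
```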

Exactly how much CPU time is used depends on the speed of your Mac and whether it supports GPU-based hardware acceleration. Without hardware acceleration, analysing one video frame via the AI results in very high CPU usage for a very short period of time (e.g. 50-100ms).
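As a back-of-the-envelope estimate, suppose the first-stage motion detection passes two frames per second to the AI (an assumed figure) and each frame takes around 75 ms to analyse, the mid-point of the range above:

```python
frames_per_second_to_ai = 2   # assumed rate of frames passed on by first-stage detection
ms_per_ai_frame = 75          # mid-point of the 50-100 ms figure above

cpu_core_fraction = frames_per_second_to_ai * ms_per_ai_frame / 1000
print(f"~{cpu_core_fraction:.0%} of one CPU core")  # ~15% of one CPU core
```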
