Request for a feature to allow testing of "triggers" on already captured video....
So I've long wanted (since first trying SecuritySpy) the ability to see how changes to trigger settings, especially the various "Advanced Trigger Options", would affect things based on previously recorded video, but as far as I know there is currently no way to do this.
For example, I have a recent incident where a human and a bicycle in the frame would normally have created a motion-detection trigger, but in this particular case there was no trigger. I also have lots of incidents where too many nuisance triggers were created by things like rain, snow, wind, or even lawn sprinklers.
Indeed, how and why various settings miss triggers or cause too many of them seems to be a common question here on the forums, so I think it would be highly valuable to have a feature in SecuritySpy for "testing" trigger settings on already captured video. It's nearly impossible to recreate some scenarios for "live" testing, so reviewing already recorded video is the only option, or at least the only obvious one. It would also be highly valuable to be able to replay previous trigger incidents to be sure that any changes to settings will still create the same triggers.
Maybe it could be as simple as having a checkbox or sticky setting in the "Browser" menu that would enable such testing whenever video is played.
By "testing" I mean that triggers are detected and reported in some manner (including seeing the red bounding box), but no email, upload, etc. actions are performed -- just notifications and maybe the bell, and/or a popup window that logs what was detected and what actions would have been performed.
(BTW, I find the Browser window often has unreliable playback -- it gets stuck and I have to close and reopen it and find my place again.)
Comments
-
I understand the utility of what you are proposing; however, I think it would be an unwieldy feature to implement and use. It would require continuous recording for some period of time (so that you are able to re-test missed events), which would use a lot of disk space – perhaps interfering with other recording – and require its own separate auto-deletion function. The user interface would be complex and difficult to implement. So while I appreciate what you are getting at here, I don't think such a function is practical.
We will continue to put a lot of effort into improving the accuracy of the existing motion detection, since this is such a vital feature of the software, and so you can expect both fewer missed events and fewer false detections as time goes on. You may have noticed the recent AI Review window, which goes some way to providing the kind of review you are looking for (specifically for the AI object detection rather than for the motion detection). In general we will also be working on simplifying motion detection configuration, with default settings that work better for a wider variety of environments and lighting conditions, so fine user-calibration of the settings will become less important.
-
I had requested something like this back in 2021. My use case was tuning my trigger so I could reliably be notified when the mail person is in front of my mailbox. Trial and error takes too long to get right when the mail person comes at most once a day, while other cars drive by all the time.
-
I already continuously record all video from all of my cameras (in two places actually -- an older NVR, and with SecuritySpy).
In SecuritySpy each 8k camera I have records about 180-190 GB per day. On this system two 8k cameras are using about 4.2 TB of data for 8 days of recordings. That's nothing -- especially not when one can buy a 24 TB drive for a few hundred dollars, and 122TB SSDs are already available (though obviously not so cheaply).
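As a rough back-of-the-envelope check (using the ~190 GB/day-per-camera figure above; the 24 TB drive size is just the example price point mentioned), continuous retention is quite feasible:

```python
# Rough retention estimate for continuous 8K recording,
# based on the ~190 GB/day per camera figure mentioned above.
gb_per_camera_per_day = 190
cameras = 2
drive_tb = 24  # the few-hundred-dollar 24 TB drive mentioned

daily_gb = gb_per_camera_per_day * cameras   # total GB written per day
retention_days = drive_tb * 1000 / daily_gb  # days until the drive fills
print(f"{daily_gb} GB/day -> about {retention_days:.0f} days of retention")
```

So even two 8k cameras recording continuously fit roughly two months of footage on a single commodity drive before auto-deletion has to kick in.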
In any case, the idea would be that IFF one has a recording of some interesting event, then one should be able to replay that through the detection algorithms to test new settings.
Not being able to review what will happen when changing settings makes changing them a hit-and-miss, shot-in-the-dark, scary thing to do.
Trying to re-create an event for testing is pointless -- it is impossible to ever get exactly the same conditions with lighting etc., and also exactly reproduce the exact same movements, etc.
Testing new settings only makes sense if you can do it against the exact same recording over and over again.
Implementation would seem fairly simple to me -- just add another always-available default virtual video source, fed by the browser playback window, and allow settings, actions, etc. to be adjusted on it exactly as if it were another camera. No new UI there.
The only new UI needed might be one button in the browser playback window to turn the feed from it to the new virtual source on and off again.
So one would just turn on "test mode" and play some video in the recordings browser window and see what happens. Turning on test mode could maybe copy the current settings from the camera that recorded the video being tested (and/or see below). Rewind and adjust the settings on the virtual playback camera and play again until you're happy. Finally copy the new settings to the real camera which recorded the video you're testing with (and possibly other cameras too, if that makes sense), and turn off test mode again.
Maybe another new pair of buttons/menu-entries could copy all the settings from one video source (real camera or virtual playback), and allow them to be "pasted" to another, but that would be icing on the cake (and such a feature would also make setting up new cameras easier). Maybe that's just "command-C" and then "command-V" after selecting a camera/source in the settings window.
So long as there are settings to adjust, no amount of better accuracy, better default settings, or a smarter AI, etc. will solve the problem of how to adjust those settings for specific situations. That can only be done by either having someone go walk/dance/drive in front of a live camera, or with playback of a recorded video of such an event; and it can only be done accurately with the latter.
-
I also continuously record 24/7 on all my cams, so I can see the utility of this feature.
Ben, if you did implement a feature like this -- and yes, I realize it would be complex (perhaps better suited to a separate review app) -- wouldn't the big bonus for you be that you could also leverage users' feedback as training data? I.e., scrub through a video's motion events, flag event #1 as a false positive, event #2 as a false positive, event #3 as a positive, etc., then retrieve the relevant frames (with user consent, of course) and the training metadata (the user-applied flags) to improve your model?
-
Hmmnnn....
This may be doable by setting up VLC to stream a video file via RTSP and then adding the VLC server machine as a camera in SecuritySpy.
IIRC VLC can stream a file via RTSP, but I have never played around with the video server aspect of VLC.
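As an untested sketch of that idea (the file name, port, and stream path here are just placeholders), something like this should work on the machine running VLC:

```shell
# Stream a recorded clip in a loop as an RTSP feed on port 8554.
# "incident.mp4" and the "/test" path are example names only.
cvlc incident.mp4 --loop \
  --sout '#rtp{sdp=rtsp://:8554/test}' --sout-keep
```

Then add a new camera in SecuritySpy pointing at rtsp://&lt;vlc-machine-ip&gt;:8554/test and apply whatever trigger settings you want to test to that "camera", replaying the clip as many times as needed.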

