Playing recorded video through AI motion detection to set preferences?
  • Have you thought of this?

    I'm trying to figure out the best settings for masking and threshold to have the vehicle AI trigger detect the US Postal Truck stopping by the street mailbox in front of my house to deliver mail.

    Since the mail truck only comes by once a day, it is going to be rather tedious to try different settings and see which works best.

    Also, the truck comes into view when it stops at the neighbor's house first. There are scenarios where I might want to trigger when it stops there instead of waiting until it is right in front of my house (with the right alert, I would have enough time to grab a letter or package and hand it to the mail carrier when they reach my house a few minutes later).

    Yeah, "first world" problem, but bear with me...

    I record video 24x7 for all my cameras anyway. It would be really convenient if there was a SecuritySpy option or even a separate standalone utility program, where I could have it play back existing footage and see how/if the motion and/or AI triggers are activated.

    I could change settings and then re-play the footage. Basically iterating quickly over multiple "what if" settings to see which gives the desired results.
  • Seconded!

  • We are planning something like this, but not exactly for the purposes you describe. We are imagining some kind of search feature, where you could search for things like "all vehicles in camera X between times Y and Z" and get back a list of images/video clips.

    In terms of setting the optimum thresholds, please see this blog post as I think it will help you: Optimising SecuritySpy’s AI Object Detection
  • I think you are missing the point. Searching is a completely different (but obviously useful) feature.

    I need to be able to run recorded camera feeds through SS and see how triggers are activated or not when I change only 1 thing.

    For example, I will change the mask area and then re-run the feed to see if a smaller or larger mask area triggers on what I want without false positives.

    Then, I can run and re-run the feed to see which threshold levels give the tightest trigger without missing the important event, or how many false positives I get if I loosen the threshold.

    That can't be done with searching, and if I change one parameter per day, I would need many months of tedious experimentation to home in on the settings that work for me.

    The blog post has good general suggestions but that is only a very rough starting point.

    I need a "tight feedback loop" to adjust settings, see the result (trigger or no trigger), and repeat.

    Please note the power of this: by providing this capability, we can fine-tune your "black box" AI/ML algorithms to achieve the result we want in our specific use case (triggering, or not triggering, on a specific action or object in the camera view).

    Without this, we are forever tweaking "knobs and dials" or complaining to you about an algorithm we have no control over, and which you can't easily change to accommodate one user's requirements, because of the training expense and because improving the response for one user might degrade it for others.

    Thinking about it a bit more, the only way I see a search feature possibly helping is if I can define a search, store it, schedule it to run automatically at certain times, and have an action triggered by the search result.

    That is, I would have a search "look for US Postal Truck", schedule it to run daily between 10am and 12pm (the time window for my delivery), and have a trigger: "If found, activate these actions..."

    Although that would be awesome, I have a hunch that isn't your "MVP" for the first implementation of search.
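    To illustrate the "tight feedback loop" being asked for, here is a rough Python sketch. It is purely hypothetical: it assumes the AI's per-frame vehicle confidences have already been extracted from recorded footage into a list, and that the desired trigger times have been hand-labelled. Nothing here reflects SecuritySpy's actual API; the point is only that, given recorded data, every candidate threshold can be scored in seconds instead of one setting per day.

```python
# Hypothetical sketch: sweep trigger thresholds over detections pulled
# from recorded footage and score each one against hand-labelled events.
# The detection list, timestamps, and confidences are all made up.

# (timestamp in seconds, AI confidence that the frame contains a vehicle)
detections = [
    (10.0, 0.35),  # neighbour's cat
    (42.0, 0.88),  # mail truck at the neighbour's mailbox
    (61.0, 0.93),  # mail truck at my mailbox
    (95.0, 0.55),  # passing car
]

# Hand-labelled times at which a trigger is actually wanted
ground_truth = {42.0, 61.0}

def score(threshold):
    """Return (hits, false_positives) for a given confidence threshold."""
    triggered = {t for t, conf in detections if conf >= threshold}
    hits = len(triggered & ground_truth)
    false_positives = len(triggered - ground_truth)
    return hits, false_positives

# Sweep several thresholds in one pass instead of waiting a day per setting
for threshold in (0.3, 0.5, 0.7, 0.9):
    hits, fps = score(threshold)
    print(f"threshold={threshold:.1f}  hits={hits}  false_positives={fps}")
```

    With data like the above, 0.7 catches both truck stops with no false positives, while 0.9 misses the stop at the neighbour's house; that is exactly the kind of trade-off the replay feature would surface quickly.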
  • I understand what you mean, and it's a good idea. But I think most of the utility you're asking for could be gained in a much simpler way. What if SecuritySpy were to remember a selection of images from the previous day - for example, 20 images spanning the 0-100% range of probability that the AI assigns to "human"? Then you could simply look along this spectrum and see where the trigger threshold should be.

    This won't help you with the mask, but it should be mostly obvious where the mask should be and where it shouldn't be. Basically you want to mask out any areas where there might be motion that you want to ignore (e.g. a tree with branches that sway in the wind). You shouldn't mask off areas that aren't subject to such extraneous motion, as the more pixels available for the motion detector to use the better.

    What do you think?
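    The spectrum idea above could be sketched roughly as follows. This is a minimal illustration with invented data, not SecuritySpy's implementation: it assumes a day's detections are available as (image, confidence) pairs, and keeps at most one image per confidence bucket so the user can eyeball where the threshold should sit.

```python
# Rough sketch: sample one representative image per confidence bucket
# from a day's detections, giving a low-to-high confidence spectrum.
# The frame names and confidences below are made up for illustration.

def confidence_spectrum(detections, buckets=20):
    """Pick at most one detection per confidence bucket (0-100%)."""
    chosen = {}
    for image_path, confidence in detections:
        bucket = min(int(confidence * buckets), buckets - 1)
        # Keep the first detection seen in each bucket
        chosen.setdefault(bucket, (image_path, confidence))
    # Return samples ordered from low to high confidence
    return [chosen[b] for b in sorted(chosen)]

day = [
    ("frame_0800.jpg", 0.12),
    ("frame_0915.jpg", 0.14),  # same bucket as above, so dropped
    ("frame_1030.jpg", 0.47),
    ("frame_1100.jpg", 0.88),
    ("frame_1101.jpg", 0.96),
]

for path, conf in confidence_spectrum(day):
    print(f"{conf:5.0%}  {path}")
```

    Scanning such a list from low to high confidence makes it fairly obvious where real events start and noise ends, which is the threshold the user wants.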
  • Yes - that would definitely help.

    I don't want to try to design the fix, as there could be many different ways to a solution; I just wanted to make sure you are clear on the challenge I am having making use of the AI.

    For me, the mask is tricky because if I make it too large, it triggers on motion that I don't want, and if I make it too small, there isn't enough image provided for the detection algorithms.
