
AI to detect snow/rain/bugs

edited January 2020 in SecuritySpy
The AI in SecuritySpy seems to be aimed at triggering for things we *want* to capture (i.e. humans and vehicles) -- I would find it VERY beneficial to have an AI classifier for snow/rain/bugs which squelches the motion capture -- i.e. I want to capture everything BUT what this classifier finds. I want to see animals and perhaps a garbage can blowing across the street, but I don't want 300 captured videos of spider butts or driving snow.

Comments

  • Unfortunately I don't think this would work very well. It would be possible to train a classifier, say, to detect snow - that's no problem. Then, you're saying that you want to disable recording when the classifier detects snow, but the problem is that if a human walks past the camera when it's snowing then you wouldn't get a recording - you'd miss this important event. Basically, whenever it's snowing, raining or there are bugs flying around, you would miss any important event that is happening at the same time.

    That's why this needs to work the other way round, as it does currently, with classifiers detecting the things you actually want to record such as humans and vehicles (P.S. we may add an animal classifier in the future, but this is more difficult than a human or vehicle classifier for various reasons, and we don't want to implement this if it isn't going to work well).
  • edited February 2020
    I'm not sure why it wouldn't work well.

    The car/person/animal classifier says "hey I think this is a car/person/animal" and then the rain/bug classifier can overrule it saying "nah, it's not" -- basically it'd add a second level to the trigger. The first level is for detecting things you want, and the second would provide a way to overrule the classifier.

    Trying to do it all at once wouldn't work as well, I don't think. You can get *great* classifiers, but getting them to also avoid the worst offenders (rain/snow and flies) would be a much harder challenge. By splitting the workload up you could potentially get to a better result faster (rough sketch of what I mean at the end of this comment).

    Ideally having a way to train your own classifiers (an advanced feature that could be easily disabled or reset) would help a lot, and we could even provide the datasets back to you to help make the default classifiers better.
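
    To make the two-level idea concrete, here's a very rough sketch -- the classifier calls and the 0.9 cutoff are placeholders I made up, not anything SecuritySpy actually exposes:

    ```python
    from dataclasses import dataclass

    # Stand-in types and classifiers -- nothing here is a real SecuritySpy API.

    @dataclass
    class Detection:
        label: str          # e.g. "human", "vehicle"
        confidence: float   # 0.0 .. 1.0

    def detect_objects(frame) -> list[Detection]:
        """Level 1: classifier for the things we want to record (stub)."""
        return []

    def detect_nuisance(frame) -> float:
        """Level 2: confidence that the motion is just rain/snow/bugs (stub)."""
        return 0.0

    def should_record(frame) -> bool:
        if not detect_objects(frame):
            return False    # level 1 found nothing of interest
        # Level 2 overrules the trigger when nuisance motion dominates the scene.
        return detect_nuisance(frame) < 0.9
    ```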
  • The problem is if you have, for example, a human in the rain. In your scheme, the human classifier would return a positive result, but so would the rain classifier. The rain classifier doesn't tell you that a human isn't present - it just tells you that rain is present. So I don't see how it can overrule the human classifier without getting false-negative results (i.e. missing things you want to capture).
  • Right, I understand. With your new MOTION event in the beta perhaps I can help figure out why SecuritySpy thinks there are cars in the snowstorm, and then in a few months do the same with the bugs. :-)
  • This blog post may help too: Optimising SecuritySpy’s AI Object Detection. The fundamental issue is that nothing is 100% accurate - in our testing, the accuracy of our AI is around 95%, which is just about as good as these things get. This cuts out the vast majority of false-positive detections, but if you have lots of movement from rain/snow/bugs - say hundreds of events like this per day - then some of them are inevitably going to be mistakenly identified.
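
    To put rough, purely illustrative numbers on that:

    ```python
    # Purely illustrative: at ~95% per-event accuracy, a day with hundreds of
    # rain/snow/bug motion events still yields a noticeable number of mistakes.
    nuisance_events_per_day = 300
    accuracy = 0.95

    expected_false_positives = nuisance_events_per_day * (1 - accuracy)
    print(f"~{expected_false_positives:.0f} mistaken detections per day")  # ~15
    ```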
  • I have gathered some 640x480 feeds of the AI failing rather spectacularly (no car or human in the scene, just the wiggling spider web) -- would these videos be at all useful to you in helping to train the classifier better?

    I mean a parked car sitting in the driveway with spider webs over the camera ... I get why the classifier would see a motion event with a car in it. These videos though ... there's no car or anything resembling a car in the scene.

    I'm also a little curious about the parked-car motion detection... to a human brain it's easy to see that yes, there is motion and there *is* a car, but the car simply isn't moving from frame to frame -- the motion is something else in the scene, not the car moving.

    One bug while I'm on this: when I create the ~/SecuritySpy/AI Predictions folder it does immediately get filled with annotated JPEGs, which is *fantastic* -- however, if the motion is at the right edge of the frame, the text runs off the edge of the picture, so I can't see what the vehicle classifier is saying. Also, the black text isn't visible in a dark scene when the red bounding box is small (black on red works fine, but then the rest of the text is no longer on a red background). Any chance the text could be wrapped, or shifted "up" and/or "left", so it doesn't obscure the image inside the bounding box or run off the top or right edge?
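
    Something along these lines for the label placement, maybe -- just a quick sketch with PIL to show what I mean, obviously not how SecuritySpy actually draws its annotations:

    ```python
    from PIL import Image, ImageDraw, ImageFont

    # Sketch: keep the label inside the frame by drawing it above the bounding
    # box when there's room (below it otherwise), and pulling it left so it
    # never runs off the right edge. Not SecuritySpy's actual drawing code.

    def draw_label(img: Image.Image, box: tuple[int, int, int, int], text: str) -> None:
        draw = ImageDraw.Draw(img)
        font = ImageFont.load_default()
        left, top, right, bottom = box

        text_w = int(draw.textlength(text, font=font))
        text_h = 12  # rough height of the default bitmap font

        x = max(0, min(left, img.width - text_w))          # shift left if it would overflow
        y = top - text_h if top - text_h >= 0 else bottom  # above the box, else below it

        draw.rectangle(box, outline="red")
        draw.text((x, y), text, fill="red", font=font)
    ```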
  • Hi @aandrew, if you can email us a couple of these videos, it may help us understand more about what is going wrong. If these are 640x480 feeds at night, this may well be a quality issue. The better the quality and lighting of the scene, the higher the AI detection accuracy is going to be. And, as specified in the Achieving Effective Motion Detection section of the SecuritySpy User Manual, we recommend at least 2 MP resolution.

    The AI doesn't "see" things as a human does - yes a moving car is an easy thing for a human to detect but it's much more difficult for an AI!

    Good point about the image annotation; I will see what we can do about this for a future update.
  • For sure, I'll send some of these videos over, and I do completely understand that the AI needs pixels to do its job. The issue is that you don't want a lot of pixels for motion detection, but the AI runs on the same feed as the motion detection. If there were a way to tell the AI to look at a decoded frame from the HD stream when the low-quality stream detects motion, that could go quite a way towards getting the best of both worlds, though I understand this isn't a simple thing. For starters, the HD stream would have to be "rewound" to find the most recent I-frame, then "played forward" to reach the frame with the desired timestamp. Past that, there are probably also synchronization issues to deal with -- different frame rates and stream starting points -- although those are probably minor compared to the first issue.
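
    Just to sketch what I mean (toy code only; the packet and decoder bits are stand-ins for a real H.264/H.265 pipeline, not anything in SecuritySpy):

    ```python
    from collections import deque
    from dataclasses import dataclass
    from typing import Callable, Optional

    # Toy sketch of grabbing an HD frame when the low-res stream triggers motion:
    # buffer the encoded HD packets since the most recent I-frame, then "play
    # forward" from that keyframe up to the requested timestamp.

    @dataclass
    class Packet:
        pts: float          # presentation timestamp in seconds
        is_keyframe: bool
        data: bytes

    class HDFrameGrabber:
        def __init__(self) -> None:
            self._gop: deque[Packet] = deque()   # packets since the last keyframe

        def on_packet(self, pkt: Packet) -> None:
            if pkt.is_keyframe:
                self._gop.clear()                # new GOP: this is the "rewind" point
            self._gop.append(pkt)

        def frame_at(self, target_pts: float,
                     decode: Callable[[Packet], object]) -> Optional[object]:
            # Decode forward from the keyframe until we reach the wanted timestamp.
            frame = None
            for pkt in self._gop:
                frame = decode(pkt)
                if pkt.pts >= target_pts:
                    break
            return frame
    ```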
  • Hi @aandrew, the motion detection algorithm works on a fixed number of pixels across the image, no matter what the resolution of the incoming video is. It works equally well, with similar CPU usage, at any video resolution, so there would be no advantage to pulling two separate streams for this (and doing so has some key disadvantages, which you correctly outline).

    The main factors for a good outcome from the MD algorithm are low noise and good lighting. And the main factors for a good outcome from the AI are low noise, good lighting, correct camera angle/view, correct camera focus, and sufficient resolution (at least 2 MP).
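
    To illustrate what I mean by a fixed analysis size (very much simplified, and not our actual algorithm -- the sizes and thresholds here are arbitrary):

    ```python
    import numpy as np
    from PIL import Image

    # Simplified illustration: every frame is downscaled to the same small
    # analysis size before differencing, so the per-frame cost stays roughly
    # the same whether the camera sends 640x480 or 4K.

    ANALYSIS_SIZE = (160, 120)
    PIXEL_THRESHOLD = 12         # brightness change that counts as "changed"
    MIN_CHANGED_FRACTION = 0.01  # fraction of changed pixels that counts as motion

    def to_analysis(frame: Image.Image) -> np.ndarray:
        return np.asarray(frame.convert("L").resize(ANALYSIS_SIZE), dtype=np.int16)

    def motion_detected(prev: Image.Image, cur: Image.Image) -> bool:
        diff = np.abs(to_analysis(cur) - to_analysis(prev))
        return (diff > PIXEL_THRESHOLD).mean() >= MIN_CHANGED_FRACTION
    ```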
  • "the motion detection algorithm works on a fixed number of pixels across the image, no matter what the resolution of the incoming video is"

    Is this a relatively new change? It may have been before the AI, but I started using the low-resolution stream because the CPU was absolutely pegged when doing motion detection on the 1920x1080 stream.
  • Hi @aandrew, it's not a new change; it's always been like this. The CPU usage for the higher-resolution stream will be due to the decoding of the stream (which is required to do things like motion detection, display to the screen, sending images via the web server etc.), not the actual motion detection itself, which uses very little CPU.