Upload to S3 bucket
  • Hi Ben - have you ever considered extending the upload function to include support for Amazon S3 buckets?
    I have a script that periodically uploads images from a folder to S3, but it would be nice to have it baked into the app.

    (I use Amazon SES for my emails as they give me 200 free emails a day - plenty for my and my parents' installations)
  • Thanks for the suggestion. I think this would be a useful addition; I'll see if we can add it in the future.
  • New here. Really like SecuritySpy in the 48 hours I've played with it. Just adding my voice to the desire for S3 integration. There's another thread on it here: https://www.bensoftware.com/forum/discussion/766/s3-server-upload/p1

  • I definitely need this feature too!
  • I'd say if you opted to support S3, you'd most likely get requests to support others like Google Drive, Azure, Dropbox, OneDrive, Backblaze etc.

    You could use something like s3sync?

    I've got a Google Drive setup that syncs any motion detection recordings straight away.
  • OK, by popular demand, we have now added S3 support in the latest beta version of SecuritySpy (currently 5.1.1b13).

    Could you all please test this and report back? Thanks.
  • Thanks Ben!
  • Thanks for doing this.

    1. I first created a bucket in us-east-1 (since the UI in SecuritySpy prompts for a bucket name)
    2. I created an IAM user with PutObject permissions for that bucket
    3. I entered the bucket name and access key/secret key and pressed test.

    1. It began to install Mac development tools (though not in the screen foreground). This is fine.
    2. It displays an error saying "Error 1580,88799 make_bucket failed: s3://bucketname An error occurred (AccessDenied) when calling the CreateBucket operation: Access Denied"

    I checked the ~/.aws/config and saw the default region was us-east-2. Changed that to us-east-1 (where my bucket exists) and "test" still shows the same error.

    I granted my user CreateBucket and appended a "1" to my bucket name. SecuritySpy created the bucket successfully, the 'test' passed, and there is a dummy file in my bucket. I then dropped the 1 (so the name matched the existing bucket I had created in the console) and the test also passed. So it looks like the test function needs CreateBucket even if it doesn't actually create a bucket. I assume it tries to create it and passes if the bucket already exists, but it doesn't check for the bucket's existence before trying to create it (which would also have failed, since I did not grant the IAM user ListAllMyBuckets). This is more permissions than I'd normally offer.

    After setting this up, my motion files uploaded to S3 as expected.
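For reference, a minimal IAM policy along the lines described above might look like the following - `bucketname` is a placeholder for your own bucket, and the second statement is only needed if you want SecuritySpy's test/create-bucket step to succeed:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::bucketname/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:CreateBucket"],
      "Resource": "arn:aws:s3:::bucketname"
    }
  ]
}
```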
  • Thanks for the feedback. Based on this, I've made the following tweaks:

    SecuritySpy will now attempt the upload first, before attempting to create the bucket. So if the bucket already exists, this should now succeed without requiring the CreateBucket permission for the user.

    Only if the upload fails with the "NoSuchBucket" message will SecuritySpy then attempt to create the bucket.
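The upload-first logic described above can be sketched roughly as follows. This is not SecuritySpy's actual code - `client` stands in for a hypothetical S3-like client with `put_object` and `create_bucket` methods:

```python
class NoSuchBucket(Exception):
    """Raised by the client when the target bucket does not exist."""

def upload_with_fallback(client, bucket, key, data):
    """Attempt the upload first; only create the bucket (which needs the
    CreateBucket permission) if the upload fails because the bucket is
    missing, then retry the upload."""
    try:
        client.put_object(bucket, key, data)
    except NoSuchBucket:
        client.create_bucket(bucket)  # requires CreateBucket permission
        client.put_object(bucket, key, data)
```

With this ordering, a user whose bucket already exists only ever needs PutObject.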

    I've also made the default region selection much more accurate, as it's now based on the Mac's latitude and longitude (previously it was based on the Mac's time zone, which isn't very accurate). SecuritySpy creates the config file when the AWS CLI tool is installed initially, or if the config file does not exist when an upload is initiated. So, if you delete the config file and attempt an upload, SecuritySpy will recreate the config file with the closest region - it would be interesting to see what it now chooses.
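For anyone wanting to check or edit the region by hand, the file in question is the standard AWS CLI config at ~/.aws/config, which is a plain INI file along these lines (region value shown as an example):

```ini
[default]
region = us-east-1
output = json
```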

    However it does seem that the region in the config file does not need to match the region of the bucket, for the uploads to work, so I don't think this is too critical.

    This is all in a new beta (5.1.1b14), so if you can re-test and report back, that would be great.
  • Here's what I did:

    1. With the Nov 1 beta, I deleted my existing S3 connection.
    2. Deleted ~/.aws/config and credentials
    3. In AWS IAM, rolled back the permissions so my IAM user only has PutObject to that existing bucket.
    4. Deleted all contents from that bucket.
    5. Installed Nov 2nd beta
    6. Reconfigured S3 and Test passed with the existing bucket (in us-east-1) and just PutObject permissions.

    It re-created my config file and used us-east-2 even though my Mac is 30 miles from us-east-1. I have location services enabled but SecuritySpy does not show as an allowed app for Location Services under Security & Privacy.

    7. I granted the AWS IAM user create bucket permissions and it successfully created a new bucket (by pressing the test button) in us-east-2.

    Tested saving 10 second snapshots with and without folder name and date prefixes and both work great. I don't think the region issue is a deal breaker since it can now work with existing buckets and low permissions.

    Great work with the quick turnaround.
  • Great to hear it's working well!

    There are ways to get a pretty good location without using Location Services - specifically the "closest city" location as set in the Date & Time system preference. This gives a pretty accurate location as long as you are in the vicinity of a major city (which most people are).

    As for the locations of the endpoints, currently I'm just taking the latitude/longitude of the centre of the state containing the endpoint, as specified on the AWS Service Endpoints page, so this could be where the inaccuracy lies (Virginia for us-east-1 and Ohio for us-east-2). As far as I can tell, Amazon don't publish exactly where their datacentres are - if I had this information I could make the automatic location more accurate. Do you know if this information has been made public?
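A closest-region calculation of the kind described above can be sketched with the standard haversine formula. The coordinates below are rough state-centre values for illustration only, not SecuritySpy's actual reference points:

```python
from math import radians, sin, cos, asin, sqrt

# Rough state-centre coordinates (illustrative only)
REGIONS = {
    "us-east-1": (37.4, -78.7),  # Virginia
    "us-east-2": (40.4, -82.8),  # Ohio
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def closest_region(lat, lon):
    """Return the region whose reference point is nearest to (lat, lon)."""
    return min(REGIONS, key=lambda r: haversine_km(lat, lon, *REGIONS[r]))
```

With these reference points, a Mac located near New York City would pick us-east-1, since the Virginia centre is a few hundred kilometres closer than the Ohio one.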
  • Ah. Makes sense. I just checked and my city was New York. I've since changed that. I think the method you are using is fine -- and end-users can easily change it as well.

    In the US, the regions just go by state names in the AWS console. That is probably fine for the logic you are using.
  • Nice addition! My only concern is feature bloat with lots of cloud providers, but if done in a nice way (e.g. modular code) I guess it's not a big issue.

    Even though I won't be using S3 with SS, I'd recommend allowing users to pick the region mainly because of varying region costs.

    One alternative would be to allow a shell script to be run after a file is written (e.g. after motion detection); it would then be easy to pass the file location to any cloud script to upload it.

    I think these file names are available via the event stream, but a shell command would make it a bit easier.
  • Thanks for the feedback. I'm also concerned with feature bloat/creep, but S3 is a big player and this has been requested many times, so we considered it useful enough to add to SecuritySpy.

    I'm also concerned with UI clutter, so I'm reluctant to add an option to select the region just for S3. I think that the closest region (which will automatically be selected) will work best for most users, and if not then it's easy to edit the config file to change this. Also, it seems to be the case that once you create the bucket in a particular region, it exists (and is charged) in that region forevermore, and the upload works even if the region in the local config file doesn't match.

    Yes we do already have the option to run a script for every file that is captured - see the ProcessCapturedFile script at the bottom of the SecuritySpy AppleScript Examples page.
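As a rough illustration of the script route (not SecuritySpy's built-in S3 mechanism), such a script only needs to hand the captured file's path to the AWS CLI. A minimal sketch that builds the `aws s3 cp` command - bucket name and prefix are placeholders for your own values:

```python
import os

def build_upload_command(file_path, bucket, prefix=""):
    """Build an `aws s3 cp` argv list that uploads a captured file to
    s3://bucket/prefix/filename. `bucket` and `prefix` are placeholders."""
    name = os.path.basename(file_path)
    key = os.path.join(prefix, name) if prefix else name
    return ["aws", "s3", "cp", file_path, f"s3://{bucket}/{key}"]
```

The resulting list can be passed to `subprocess.run`, or the equivalent one-liner can live directly in the script that SecuritySpy invokes for each captured file.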
  • I'm trying to come up to speed on S3... I'd like to ask the group a couple of broad questions, if I may. My usage would be around 1 TB max.

    Is the performance, upload and download rates, faster than Dropbox? Are there other considerations that would favor using S3 over Dropbox?

    Many thanks in advance for any offered insight!
  • Not a Dropbox SME, but it would be hard to beat S3 speeds. This of course depends on where you are located and your latency to S3 for your region.

    S3 has a cost. If you want to keep your files for more than 30 days, and you are mostly concerned with archival rather than viewing/distributing from S3 (except if your on-prem server goes down), you can set a lifecycle policy to move content to Infrequent Access and pay half the storage cost. Minimum 30-day charge, though.
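A lifecycle rule like the one described above can be expressed as an S3 lifecycle configuration along these lines - the rule ID is arbitrary and the 30-day transition is illustrative:

```json
{
  "Rules": [
    {
      "ID": "archive-old-footage",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"}
      ]
    }
  ]
}
```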
  • Thanx for your insight, Ramias - appreciate it!
  • Is the use case here to move existing video off of your primary storage and onto S3 on a recurring basis, or is the idea to save your video to S3 in real-time to remove the need for local storage?
  • I have plenty of local storage. My use case is secure offsite access for motion recordings, if local storage were corrupted, stolen, destroyed etc.

    I go to S3 standard and lifecycle off after a week.
  • This description by @Ramias is exactly what this feature is designed for: offsite backup of the footage, in case the local storage is compromised (damaged/stolen/corrupted etc.). S3 is perfect for this because it's fast, inexpensive, and has built-in options to automatically remove old files.
