Delete Old Files Threshold - Lesson Learned.
For the past several months I kept running into errors about the storage drive not keeping up with SS. It always happened when the RAID was nearing full. It had never been a problem before, but it kept recurring, and each time I had to manually erase my array. I tried APFS vs HFS+ with no difference, and I switched RAID modes. It seemed my drives were simply no longer fast enough to delete files and write files at the same time.
Then it dawned on me: I've always had a manually set threshold for file deletion, set to leave a small fraction of the storage drive free, BUT the number of cameras and their bitrates have been growing over the years. The daily write total was getting large compared to the free space I had manually specified.
Bumping up the space to be kept free to about half a day's worth of data seems to have solved the issue. The drive is once more keeping up with writing and deleting!
I know there is also an option to let SS set the deletion threshold automatically, which likely would have prevented my problem. If you use a manual deletion threshold, watch how your daily write total compares to that reserved space. I surmise that if the free space gets too small, SS writes out too much data before it gets a chance to delete the next batch of old files. As a result, available drive space goes to zero and you no longer get any effective recording.
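For anyone who wants to sanity-check their own setup, here is a rough back-of-the-envelope version of that comparison in Python (the camera count, bitrate, and reserved space below are made-up example numbers, not my actual configuration):

# Rough estimate of daily write volume vs. the free space a manual
# deletion threshold reserves. All numbers are hypothetical examples.
cameras = 12                # cameras recording continuously
avg_bitrate_mbps = 4.0      # average stream bitrate per camera (megabits/s)

# megabits/s -> gigabytes/day: 86400 s/day, 8 bits/byte, 1000 MB/GB
gb_per_day = cameras * avg_bitrate_mbps * 86400 / 8 / 1000

reserved_free_gb = 200      # free space the manual threshold keeps free

print(f"Daily write volume: {gb_per_day:.0f} GB/day")
print(f"Reserved space covers {reserved_free_gb / gb_per_day:.2f} days of writing")
# If that last number is well under ~0.5, writes can outrun deletions,
# which is exactly the failure mode described above.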
Comments
This is very good advice - drives slow down significantly when they approach capacity, and leaving plenty of free space on the drive is a good way to ensure smooth error-free operation.
The "automatic" setting was introduced relatively recently, and is a big improvement over the previous scheme where a fixed threshold had to be set. The best threshold to use varies dramatically on the size of the drive, and this is what the new automatic setting is designed to optimise for.
It might be worth SS giving a (one-time) notification when a manually set threshold reserves less free space than the automatic setting would have preserved.
Good idea - we'll add a warning in the Preferences window if a manual threshold is set too low.
On the subject of drives... Ben, I had posted about, and had to write, a script that would wait a long time before launching SS because my drives took forever to mount. You later added an option to specify a wait time. I can tell you, after reading a bit, that the issue for me was that my external drives were formatted with APFS. Even after 10 minutes they were not mounted... and they seem to get worse as they fill up too. BTW, these aren't even drives used by SS, but they also slowed the mounting of my SSD, which IS used by SS. I reformatted all the external HDDs to HFS+ and mounting time dropped from 10 minutes (or not at all) to 17 seconds! FYI, using an SSD for storage and then copying over to HDD at night has worked out just fine, for anyone looking for more speed with SS.
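For anyone who needs a similar stopgap, a minimal sketch of such a wait-then-launch script could look like this (in Python; the volume name and timeout are placeholders for your own setup):

# Wait for a storage volume to mount, then launch SecuritySpy.
# Sketch only: adjust VOLUME and TIMEOUT for your own setup.
import os
import subprocess
import time

VOLUME = "/Volumes/Recordings"   # placeholder name for the SS storage volume
TIMEOUT = 600                    # give up after 10 minutes

deadline = time.time() + TIMEOUT
while not os.path.ismount(VOLUME):
    if time.time() > deadline:
        raise SystemExit(f"{VOLUME} never mounted; not launching SS")
    time.sleep(5)                # poll every 5 seconds

subprocess.run(["open", "-a", "SecuritySpy"])  # launch via macOS 'open'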
Also doing RAID 0 now on the external HDDs, for those of you who might want to look into that. RAID 0 just speeds up your reads/writes (no redundancy, though). It's built into the OS, no extra RAID software needed!
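If anyone wants to try it, macOS's built-in software RAID (AppleRAID) is managed through diskutil. A sketch of creating a stripe set, wrapped in Python to match the examples above (the set name and disk identifiers are placeholders, and this erases the member disks):

# Create a RAID 0 (stripe) set using macOS's built-in AppleRAID.
# WARNING: erases the listed disks. The identifiers below are
# placeholders; check yours with 'diskutil list' first.
import subprocess

subprocess.run([
    "diskutil", "appleRAID", "create",
    "stripe",         # RAID 0
    "FastArray",      # placeholder name for the new RAID volume
    "JHFS+",          # journaled HFS+, as discussed above
    "disk4", "disk5", # placeholder member disk identifiers
], check=True)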
Hi @Senna_F1 this is very interesting, I haven't heard of this particular problem with APFS drives. A multi-minute wait for mounting is highly anomalous and indicates some kind of malfunction. These days we generally see more problems with HFS+ drives that are resolved by switching to APFS, rather than the other way round. It's interesting that your experience runs contrary to this.
I think I started here:
https://forums.macrumors.com/threads/mounting-external-drives-is-super-slow.2238925
But lots of good tech info here:
https://bombich.com/blog/2019/09/12/analysis-apfs-enumeration-performance-on-rotational-hard-drives
And although I wasn't using an HDD as a boot device, the rest of this description from the linked article above fits my scenario, right down to the 8 minutes to start up.
"The other time that the performance difference will be starkly noticeable is when you're booting macOS from a rotational HDD. macOS seeks and stats thousands of files during the startup process, and if that's taking 15 times longer on a rotational disk, then your 30-second startup process is going to turn into 8 minutes. When the system does finally boot, the system and apps will still feel sluggish as all of your applications load, and those applications each stat hundreds of files. The system is still usable, especially as a rescue backup device, but it's not the kind of experience you'd want for a production startup disk nor for a high-stress restore scenario"
Something is terribly awry with that RAID or another process / computer is tying it up.
I use a 24 TB RAID 1 with SS, connected over USB 3.0. It never takes more than five to ten seconds to mount with either HFS+ or APFS formatting. This is true even when it is full of motion and continuous recordings, and that's with over 20 cameras' worth of files on it.
I never had an issue with RAID. My 24 TB RAID 0 is doing great.
I recently replaced my 1 TB external SSD with a 4 TB external SSD. When I saw this discussion, I changed from a manual setting to the automatic one.
I'm recording everything in 1 hour videos, and now have 9 cams, so I'm getting 9 folders a day, each with 24 videos.
Although the older files are being deleted, their empty folders remain on the drive.
Is there a setting I can change to have SS automatically remove the empty day's folders?
Hi @Sawmill - the empty folders will be deleted eventually, simply ignore them.
File deletion works by doing a big scan of the drive to build up lists of files to delete. The files are then gradually deleted until a rescan is necessary. Empty folders will be deleted only during this rescan. Depending on the turnover of files being created and deleted, the time between rescans can vary considerably (e.g. every day on some systems, every week on others).
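To illustrate the general pattern (a simplified sketch in Python, not SecuritySpy's actual code):

import os
import shutil

def rescan(root):
    """Rebuild the oldest-first delete list. Empty day folders are only
    cleaned up here, which is why they can linger between rescans."""
    for dirpath, _dirs, _files in os.walk(root, topdown=False):
        if dirpath != root and not os.listdir(dirpath):
            os.rmdir(dirpath)          # remove now-empty folders
    entries = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            entries.append((os.path.getmtime(path), path))
    return [path for _mtime, path in sorted(entries)]

def free_up_space(root, min_free_bytes):
    """Gradually delete the oldest files until the free-space threshold is met."""
    for path in rescan(root):
        if shutil.disk_usage(root).free >= min_free_bytes:
            break
        os.remove(path)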