Heya, Linda. This is an excellent topic. If the moderation team likes the idea, we may move this thread to its own forum inside “Recording, Producing, and Mixing”. In the meantime:
Decades ago I tried to save on storage by doing selective backups. I learned quickly that “there’s always something you miss” and regret. Fast forward to 1995 or so, and I was backing up everything. It was all in-house, though. The first line of backup is a full system backup that allows a bare-metal restore: Carbon Copy Cloner for the Macs (I hate Time Machine, it’s so awkward) and Macrium Reflect for the Windows machines. That covers the operating systems, programs, and the little data that lives locally. All documents, audio, and video files are stored on two giant servers, each with over 100 TB of storage and RAID 6 (double redundant drives). One 100 TB server mirrors the other, so everything is at least doubly redundant, but only locally.
Part of the audio/video/photo portion of the network has already been converted to 10 Gbps, and in the next month the last leg of it will be upgraded to 10 Gbps as well!
But I was not doing cloud or offsite backup until almost a year ago, because I’m stuck with Spectrum Cable and the uplink speed was “only” 20 Mbps. I run an FTP server and couldn’t spare the upload bandwidth for archiving.
I tried and tried to get fiber to the curb, but it’s evidently not happening any time soon in my neighborhood. So I upgraded to Spectrum Turbo, and lo and behold, my 30 Mbps rated uplink is actually giving me 40 Mbps+!!! So it’s a little faster than advertised, I guess.
By this calculator: https://wintelguy.com/transfertimecalc.pl, it would take 116 DAYS to transfer 50 TB of information at 40 Mbps. In my case I’m not transferring the audio data since I’m a mastering house and 99% of the audio data is not my creation. After a project is finished and the client is happy, I archive the audio data locally to hard disks and take it off the server.
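For anyone who wants to sanity-check that number without the calculator, the back-of-the-envelope math (decimal terabytes, ignoring protocol overhead) looks like this in Python:

# Rough transfer-time estimate: 50 TB over a 40 Mbps uplink
size_bits = 50 * 10**12 * 8      # 50 TB (decimal) expressed in bits
rate_bps = 40 * 10**6            # 40 Mbps in bits per second
days = size_bits / rate_bps / 86_400
print(round(days))               # ~116 days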
So I started doing cloud backup. I tried Carbonite for a while and it was a dud, just hard to maintain. I tried Backblaze for a while, and it wasn’t giving me detailed enough feedback; it also seemed to be uploading selectively and missing data. Finally I moved to Amazon S3, and I’m pleased.
I run the Amazon upload from a backup plugin on one of the servers. To keep the FTP server happy, I limit my backup upload bandwidth to 5 Mbps, which leaves more than 35 Mbps for my clients to download their masters, etc. It isn’t too bad… even the largest audio uploads (full albums recorded at 96 kHz) are fast enough that the customers aren’t complaining, so far.
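My plugin handles the throttling for me, but if anyone wants to script the same cap directly against S3 from Python, newer versions of boto3 support a max_bandwidth setting on the transfer config (if I remember right). This is just a sketch; the bucket and file names are made-up placeholders, not my actual setup:

# Sketch: cap an S3 upload at roughly 5 Mbps so the FTP server keeps the
# rest of the uplink. "my-backup-bucket" and the paths are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

five_mbps_in_bytes = 5 * 1_000_000 // 8   # TransferConfig wants bytes/second

s3 = boto3.client("s3")
throttled = TransferConfig(max_bandwidth=five_mbps_in_bytes)

s3.upload_file(
    "/backups/photos-2024.zip",      # local file
    "my-backup-bucket",              # S3 bucket
    "photos/photos-2024.zip",        # object key
    Config=throttled,
)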
So, Amazon started up. The first thing I decided to back up was Mary’s precious high-res photo collection, since she is a professional photographer. At this point it takes up about 10 TB and is growing. To start, I temporarily upped the upload bandwidth to 15 Mbps, and it took about 3 months to get all of her photos onto Amazon. I then moved on to documents, video, and miscellaneous. After about six months, all of my precious data was in the Amazon cloud. The server now keeps it refreshed and reports that all the data is up to date in the cloud. To be honest, I haven’t thought about incrementals at Amazon; I don’t know if that’s happening. For the most part, any time one of us updates a document or a session, it gets a new file name/version, so that’s how we handle incrementals, I guess.
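On that note, if anyone wants real version history on the S3 side instead of relying on new file names, the bucket itself can keep old versions of overwritten files. A minimal boto3 sketch, with a placeholder bucket name:

# Sketch: enable S3 bucket versioning so overwritten or deleted objects
# keep their previous versions. Bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="my-backup-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)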
The cloud is our emergency storage; my two servers are my lifeline. I have protection against accidental deletions (which happen!) for up to 120 days, so if we discover a deletion within 120 days I can recover the file from the server. The redundant servers let me fix big local mistakes, like an entire project that was accidentally deleted by an assistant when he was archiving audio and hit the wrong button. Mouse clicks! Never turn your back on computers!
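My 120-day window lives on the local servers, but if anyone wanted a similar safety net on the cloud side, a versioned S3 bucket can hang on to deleted or overwritten versions for 120 days before expiring them via a lifecycle rule. Again just a sketch, with a placeholder bucket name:

# Sketch: keep noncurrent (deleted/overwritten) object versions for 120
# days, then let S3 expire them. Bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "keep-deleted-versions-120-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "NoncurrentVersionExpiration": {"NoncurrentDays": 120},
            }
        ]
    },
)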
Hope this helps!