panasecurity.blogg.se

Cyberduck s3 thumbnail

I can give you my example - I have an open source backup tool called Snebu which uses an sqlite DB to store the backup catalog (file names, file hashes, metadata, etc.), whereas the individual files are stored in a vault area (currently just a disk volume, with files named after their sha1 hash). Backup data is sent to it by piping in tar format to the main process, which updates the catalog DB and deposits files into the storage vault.

I'd like to add cloud capabilities, but for that I'd have to run the backend process somewhere that can easily update the DB and easily read/write the data in the vault, along with syncing the full DB to the vault location after a backup. My other option is to run everything locally, compute a delta file between versions of the DB, and send that delta over to the vault (i.e., do D2D2C backups).

Edit: Another possible requirement - I'd like to pack multiple smaller files into an archive file and deposit that into a bucket. But as files age out of the archive, I'd like to be able to move the still-active ones out of it and into a new one. Right now, I'd have to re-create the still-valid backup data from on-hand information, upload the new archive, then send a command to delete the old one.

> Long term storage is a great use of B2, but it was never "intended" as anything other than the ability to reliably store blobs of data and retrieve them really fast, whenever and however often you like.

I guess that the pricing structure of free uploads and expensive downloads is the main thing that caused it to be compared to Glacier and other long-term storage solutions like Nearline. I suppose there are use cases that work well for B2 and others that don't, but that is for customers to decide, not Backblaze.
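The vault layout in the Snebu example (files named after their content hash) is plain content-addressed storage. A minimal sketch of the idea, not Snebu's actual implementation (the function name and layout here are hypothetical; the real tool also handles compression and metadata):

```python
import hashlib
import shutil
from pathlib import Path

def store_in_vault(src: str, vault: str) -> str:
    """Copy a file into the vault under its SHA-1 content hash.

    Hypothetical sketch of a content-addressed vault: identical
    content always hashes to the same name, so duplicates are
    stored only once.
    """
    h = hashlib.sha1()
    with open(src, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    digest = h.hexdigest()
    dest = Path(vault) / digest
    if not dest.exists():  # content already present -> free dedup
        shutil.copyfile(src, dest)
    return digest
```

A nice side effect of this layout is that the catalog DB only needs to map file paths to hashes; the vault itself stays a flat, dumb blob store, which is exactly what object stores like B2 are good at.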


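The repack-on-age-out workflow from the edit (move still-active members into a fresh archive rather than re-creating it from scratch) could be sketched with Python's tarfile module. The function name and the `is_active` predicate are hypothetical, standing in for whatever the catalog says is still valid:

```python
import tarfile

def repack(old_archive: str, new_archive: str, is_active) -> None:
    """Copy only still-active members of old_archive into new_archive.

    Hypothetical sketch: after uploading new_archive to the bucket,
    the old archive can be deleted in a single command, without
    re-creating the surviving data from on-hand information.
    """
    with tarfile.open(old_archive, "r") as old, \
         tarfile.open(new_archive, "w") as new:
        for member in old.getmembers():
            if is_active(member.name):
                # extractfile() is only meaningful for regular files;
                # directories, symlinks, etc. carry no data stream
                fobj = old.extractfile(member) if member.isfile() else None
                new.addfile(member, fobj)
```

The catch the comment points out still applies: the object store can't do this server-side, so the repack has to run wherever both the archive bytes and the catalog are readable - which is the same locality problem as updating the DB.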




