Novicane All American 15416 Posts |
anyone using this?
At $0.004 per gig it seems like an awesome deal to back up some mess and get away from these 1TB drives. 1/1/2018 12:19:47 PM |
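For scale, a quick back-of-the-envelope on that rate (assuming 1TB ≈ 1000 GB; check current AWS pricing, which may differ):

```shell
# monthly Glacier storage bill for ~1TB at the quoted $0.004/GB-month rate
awk 'BEGIN { printf "$%.2f/month\n", 1000 * 0.004 }'
```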
CaelNCSU All American 7132 Posts |
I put some stuff in Glacier via the Java API.
Long story short, no one, including Amazon engineers, could figure out how to delete it and stop it from billing me. I had to delete my AWS account.
Allegedly if you start in S3 and transition to Glacier, it deletes no problem.
[Edited on January 1, 2018 at 7:38 PM. Reason : a] 1/1/2018 7:38:07 PM |
Novicane All American 15416 Posts |
is it encrypted at rest on their servers? like i don't need my illegal mp3 collection to trip off alarms. 1/1/2018 8:15:12 PM |
CaelNCSU All American 7132 Posts |
You could encrypt it before uploading it. Probably do it in one line with gpg and aws s3 ... 1/1/2018 11:54:41 PM |
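One way that gpg-then-upload pipeline might look (the bucket name and passphrase file are placeholders, and the upload step is commented out since it needs real credentials; the encryption half runs as-is):

```shell
set -e
# sample data standing in for the real collection
mkdir -p music && echo "not actually illegal" > music/song.mp3
echo "hunter2" > pass.txt

# tar + encrypt client-side in one pipeline, so only ciphertext leaves the box
tar -cz music | gpg --batch --yes --pinentry-mode loopback \
    --symmetric --passphrase-file pass.txt -o music.tar.gz.gpg

# then push the ciphertext up (placeholder bucket; needs credentials):
# aws s3 cp music.tar.gz.gpg s3://my-backup-bucket/music.tar.gz.gpg
```

Since you hold the key, nothing on their side can read the payload regardless of what server-side scanning does.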
TJB627 All American 2110 Posts |
Make sure you look into the retrieval prices too should you ever need to do that. 1/2/2018 9:31:53 AM |
FroshKiller All American 51913 Posts |
Thank you for the possibly least helpful comment. 1/2/2018 10:47:32 AM |
smoothcrim Universal Magnetic! 18968 Posts |
spoken like someone with no actual experience on the platform. glacier is so cheap because it's meant to be almost never accessed, and high(er) speed access costs way more.
I would highly recommend using s3 with a lifecycle policy to move it to Infrequent Access or glacier depending on your use case. putting it in s3, you can use server-side encryption with a user-provided key or an AWS-provided, KMS-managed key
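A lifecycle policy along those lines might look like this (the prefix, rule ID, and day counts are made up for illustration; the `aws s3api` call is commented out since it needs credentials and a real bucket):

```shell
# sketch of a lifecycle config: Standard -> Standard-IA after 30 days,
# then Glacier after 90 (prefix and day counts are illustrative)
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-old-backups",
      "Filter": { "Prefix": "backups/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
EOF

# apply it to your bucket (placeholder name; needs credentials):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket my-backup-bucket --lifecycle-configuration file://lifecycle.json
```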
[Edited on January 3, 2018 at 12:19 AM. Reason : .] 1/3/2018 12:18:39 AM |
Novicane All American 15416 Posts |
this would be data i wouldn't access but maybe once a year. 1/4/2018 8:17:05 AM |
wwwebsurfer All American 10217 Posts |
I always treated glacier like off-site tape backup. If you think you'll need it all, use s3. Expedited retrieval seems tempting, but those prices balloon quickly on small things like photos. And I believe all those rates are the one-at-a-time rate. Open up a dozen parallel transfers like FileZilla does and your costs multiply.
The auto-decay from s3 to glacier has bitten me in the past too. Ideally you want chunks around 1GB in size - tarball folders first to save on request fees (or only set your video folder to decay in the first place)
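The tar-then-chunk step is a `tar` pipe into `split` (shown with small files and 1M parts so the demo runs quickly; you'd use `-b 1G` for real data):

```shell
set -e
# build some sample data (stand-in for a video folder)
mkdir -p videos
head -c 3000000 /dev/urandom > videos/clip.bin   # ~3 MB of incompressible data

# tarball the folder, then split into fixed-size chunks
# (use -b 1G in practice; 1M here keeps the demo small)
tar -cz videos | split -b 1M - backup.tar.gz.part-

ls backup.tar.gz.part-*
```

Restoring is just `cat backup.tar.gz.part-* | tar -xz`, and fewer, larger objects means fewer per-request fees.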
All these problems are easily solved with some software. I have a Synology at the house and wouldn't hesitate to offload my Blu-ray backups to glacier. It has built-in throttling, encryption, and compression. As long as you're willing to watch that video tomorrow... 1/8/2018 3:32:56 AM |
smoothcrim Universal Magnetic! 18968 Posts |
you could set up a lambda-based workflow to tarball data before moving it to glacier. not quite as transparent as a lifecycle policy, but not all that different.
actually it would just be lifecycle hooks (transitions shown as '->'): s3 -> IA s3, and that notification triggers a lambda to tarball -> glacier
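Sketched with the CLI, the notification-triggered worker would do something like the following (bucket names and the object prefix are hypothetical, and the `aws` calls are commented since they need credentials; the local tar step runs as-is):

```shell
set -e
KEY="photos/2017-trip"          # hypothetical object prefix from the S3 event

# 1) pull the infrequently-accessed objects down to a staging dir
# aws s3 cp "s3://my-ia-bucket/$KEY" ./staging/ --recursive

# 2) tarball them into one archive (fewer, larger objects = fewer request fees)
mkdir -p staging && echo "pic" > staging/img1.jpg
tar -czf "$(basename "$KEY").tar.gz" staging

# 3) push the archive to a bucket whose lifecycle policy moves it to glacier
# aws s3 cp "$(basename "$KEY").tar.gz" "s3://my-archive-bucket/$KEY.tar.gz"
```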
[Edited on January 8, 2018 at 7:09 AM. Reason : .] 1/8/2018 6:57:41 AM |
TJB627 All American 2110 Posts |
Quote : | "spoken like someone with no actual experience on the platform. glacier is so cheap because its meant to be almost never accessed and high(er) speed accesses cost way more. " |
Yep. That's exactly why it's so cheap. While storage of the data is dramatically cheaper in Glacier, if you need to retrieve that data fast, the retrieval actually costs more than retrieval from S3. A lot of people don't think of the retrieval prices when considering online backups; that was my only point. 1/8/2018 10:31:17 AM |
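With illustrative numbers (the $0.03/GB expedited rate is an assumption for the sketch; check current AWS pricing), the asymmetry is easy to see:

```shell
# illustrative only: quoted $0.004/GB-month storage vs an assumed
# $0.03/GB expedited retrieval fee, for a 500 GB backup
awk 'BEGIN {
  gb    = 500
  store = gb * 0.004      # one month of Glacier storage
  fetch = gb * 0.03       # one expedited retrieval (assumed rate)
  printf "store: $%.2f/month  one fast retrieval: $%.2f\n", store, fetch
}'
```

Months of storage savings can evaporate in a single rushed restore, which is the whole pricing model.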
smoothcrim Universal Magnetic! 18968 Posts |
I was talking to frosh's ignorant comment, not your informed one. 1/8/2018 11:04:47 AM |
FroshKiller All American 51913 Posts |
I was taking issue with "should you ever need to do that." Backing up data implies you expect someone will need to retrieve it. If you had data you never needed to retrieve, you'd choose to delete it instead. 1/8/2018 11:20:48 AM |
BigMan157 no u 103354 Posts |
Quote : | "you could setup a lambda based workflow to tarball data before moving it to glacier" |
wouldn't lambda's max execution time be too short for that on very large files? 1/8/2018 12:11:05 PM |
smoothcrim Universal Magnetic! 18968 Posts |
a tar process on a 1GB lambda should finish within the 5-minute limit for 1GB chunks of data. if it doesn't, you could start with a HEAD request to find out how big your job is and then fan out to N lambdas to do the actual tar'ing.
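The fan-out sizing is just ceiling division on the object size (the `head-object` call is commented out since it needs credentials; the size below is a hardcoded stand-in for what HEAD would return):

```shell
# size the fan-out: HEAD the object, then ceil(size / 1GB) workers
# SIZE=$(aws s3api head-object --bucket my-bucket --key big.tar \
#          --query ContentLength --output text)
SIZE=3758096384                      # pretend HEAD returned ~3.5 GB
CHUNK=$((1024 * 1024 * 1024))        # 1 GB of work per lambda
N=$(( (SIZE + CHUNK - 1) / CHUNK ))  # ceiling division
echo "$N lambdas"
```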
^^ lots of data is ideally never accessed, and only backed up for DR and/or regulatory reasons. 1/8/2018 1:27:13 PM |
TJB627 All American 2110 Posts |
Agreed with SmoothCrim. If you're following the 3-2-1 rule, I'd hope you never have to retrieve that data from Glacier to be honest. 1/8/2018 2:56:44 PM |
FroshKiller All American 51913 Posts |
Redundancy, disaster recovery, and regulations all imply intent to retrieve. "Backing up" implies intent to retrieve. Whether you think it's likely that someone will actually have a need to retrieve a specific copy of the data is irrelevant. "Should you ever need to [retrieve your data]" is a dumb thing to say when you're talking about backups.
Level your reading comprehension above that of a fucking seven-year-old before you try to condescend to me about the fundamentals of fucking cold storage. 1/8/2018 3:33:04 PM |
smoothcrim Universal Magnetic! 18968 Posts |
they don't imply an intent to retrieve; that's why they're "backups": they're the fallback behind the copy you actually intend to retrieve from. 1/8/2018 6:09:46 PM |