Welcome to the 17th issue of Last Week in AWS.
Last week AWS started sending out emails to the owners of every S3 bucket that was world readable. I was devastated to learn that I’d left Twitter for Pets vulnerable.
Please express my apologies to our furry friends.
My thanks to Security Newsletter for sponsoring this issue:
When building Last Week in AWS, I find myself going through a lot of sources. Security Newsletter is consistently one of the best I’ve found on all things InfoSec related. If security matters to you, check it out. If security doesn’t matter to you at all, Dow Jones is probably hiring.
One of the more irritating aspects of comparing cloud computing vendors is the difficulty of getting apples-to-apples comparisons. Microsoft’s numbers came out stratospherically high, but Office 365 numbers are lumped in with Azure’s. Stack Overflow has written a data-driven blog post analyzing who’s using which provider.
Increment has come out with their second issue, featuring an article on capacity planning in cloud environments by Patrick McKenzie, more commonly known online as @patio11. Patrick has a knack for taking a topic I’m interested in and crafting approachable, well-written, long-form posts about it.
The mystery of the hanging S3 downloads – In my younger days, I was a network engineer with a fascination for tracking down the unexplainable. This “whodunnit” style blog post tracks a failure of S3 (and only S3) from one location. This is a fascinating read if you’re into solving mysteries.
The Register has a cynical, snarky view towards tech news that I’ve long appreciated. Last week they took on the rise of Kubernetes, and what it means for AWS.
Cloudonaut rides again, this time taking us on a survey tour of the amazing things CloudWatch offers. The unspoken subtext of this article is “CloudWatch does all this awesome stuff for you, WHY AREN’T YOU USING IT?!”
A best practices guide to avoiding S3 data breaches that manages to avoid blaming the victim. This stuff is very easy to get wrong; I like this approach.
Choice Cuts From the AWS Blog
The AWS IAM Console Now Remembers Your Preferences for Table Column Selections and Policy Viewing and Editing – The IAM console finally remembers your columns and filters between sessions, as opposed to its historical “NEW PHONE WHO DIS” approach. Please extend this to other areas of the console, Amazon.
Lambda@Edge now Generally Available – The Node.js you know and love from your web browser / server-side implementations / text editor is now available in your CDN as well. Given its complete lack of a formalized SLA, “Generally Available” is indeed an apt descriptor.
Running Salt States Using Amazon EC2 Systems Manager – As one of the very early developers behind SaltStack, I find it wonderful to see AWS itself talking about a tool that I maintain doesn’t receive enough love. Thanks, AWS; no snark in this paragraph.
Monitor and Notify on AWS Account Root User Activity – Please please please stop using the root account for anything other than building your first IAM user. Then implement four other AWS services in a Rube Goldberg style pipeline to validate that you aren’t.
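If you want to wire that pipeline up, its heart is a CloudWatch Events rule that matches root console sign-ins delivered via CloudTrail. A sketch of the event pattern (assuming CloudTrail is already enabled and delivering management events; the rest of the Rube Goldberg machine is pointing the rule at an SNS topic):

```json
{
  "detail-type": ["AWS Console Sign In via CloudTrail"],
  "detail": {
    "userIdentity": {
      "type": ["Root"]
    }
  }
}
```

Attach that pattern to a rule with an SNS target and you’ll get a notification every time someone signs in as root, which ideally should be never.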
Coming Soon: Improvements to How You Sign In to Your AWS Account – Helpdesks, start your engines. The IAM user sign-in page is about to change, leading to a bunch of frantic user calls. On the plus side, you no longer have to deal with team-specific IAM login pages.
Amazon Web Services Elastic Compute Cloud (EC2) Rescue for Linux is a Python-based tool that allows for the automatic diagnosis of common problems found on EC2 Linux instances. Automatic diagnosis of networking issues on EC2 instances? Sign me up!
The NCC Group is an auditing firm that brings shockingly advanced levels of technical acumen to the auditing space. Their open source Scout2 tool identifies a number of esoteric yet easy-to-make mistakes in AWS accounts. I wish I’d found this ages ago.
Terrafam (I promise that isn’t a typo) wraps Terraform to better manage IAM permissions. If you’re already using Terraform it’s well worth a look. If you’re not using Terraform, you likely have bigger fish to fry unless you’re deep into CloudFormation.
Tip of the Week
This week’s tip comes to us courtesy of @skimbrel:
When you restore an EBS snapshot to create a new EBS volume, the blocks of that snapshot are lazily loaded from S3 the first time they’re read. If you care about latency on that volume (think databases), you’ll want to scan the entire volume, using something like dd with its output sent to /dev/null. This is mentioned in passing in the official EBS volume restoration guide, but it’s very easy to overlook.
If you’ve ever wondered why newly created EBS volumes from snapshots take a while to not have really crappy IO, this may be your answer.
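For the impatient, here’s a minimal sketch of that pre-warming pass. The device path is an assumption; substitute whatever your restored volume actually shows up as (and run it as root):

```shell
# prewarm_ebs: read every block of a device once and discard the output,
# forcing EBS to pull the snapshot's lazily-loaded blocks down from S3.
prewarm_ebs() {
  # $1 is the block device to warm, e.g. /dev/xvdf (an example name;
  # use the device your volume is actually attached as)
  dd if="$1" of=/dev/null bs=1M
}

# Example (as root, against the restored volume; this can take a while
# on large volumes, so consider running it under nohup or in screen):
# prewarm_ebs /dev/xvdf
```

dd here is just a convenient way to issue sequential reads across the whole device; anything that touches every block once will do the job.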
…and that’s what happened Last Week in AWS.