Welcome to the twelfth issue of Last Week in AWS.

Last week featured the AWS Community Day in San Francisco. It was a blast to be able to speak there; I’ll be giving updated versions of my AWS Cost Control talk at both SREcon Europe and All Things Open later this year.

My thanks to Datadog for sponsoring this issue:

Cloud-scale monitoring, from AWS to ZooKeeper – Ever wish you could graph all your AWS metrics, correlate them with 150+ other techs, and set up sophisticated alerts? There’s a monitoring service for that: It’s called Datadog. Here’s a free trial.

Community Contributions

A nice “here be dragons” overview of AWS Elasticsearch, as told by someone who was bitten in the face by said dragons.

A study of how various languages, memory sizing, and package size affect the time it takes to cold-start in AWS Lambda.

This serves more as a proof of concept than a “you should do this in production,” but it’s a great introduction to what getting unikernels into AWS looks like.

An exploration of AMI provisioning approaches, written in a very approachable, deeply human voice. Would link again.

Despite the snark I throw their way, Amazon does an awful lot of good things; they expand the reaches of how we interact with the internet, they release new technologies that give rise to entire industries, and they do a lot of inspirational work. Suing a former AWS employee over a noncompete is not one of those laudable things. This is not Amazon at its best by a long shot. We as an industry employ people; we do not own them. This reads as punching down in a very unsympathetic way, and I expect better from an industry leader. While there’s assuredly another side to this story, the optics are horrible.

Choice Cuts From the AWS Blog

Third AZ in EU (Frankfurt) Region – Along with this week’s announcement of a new GovCloud region on the US east coast, it’s apparent that Amazon’s datacenter construction team is finding ways to keep themselves busy.

New – Auto Scaling for Amazon DynamoDB | AWS Blog – DynamoDB can now scale up and down automatically based on load. Automation has finally gotten around to coming for your jobs, Manual DynamoDB Scaling Engineers.
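For the console-averse, here’s a rough sketch of what enabling this looks like through Application Auto Scaling via the AWS CLI; the table name, capacity bounds, and 70% utilization target are placeholders, not recommendations:

```bash
# Register the table's read capacity as a scalable target.
aws application-autoscaling register-scalable-target \
  --service-namespace dynamodb \
  --resource-id "table/MyTable" \
  --scalable-dimension "dynamodb:table:ReadCapacityUnits" \
  --min-capacity 5 \
  --max-capacity 500

# Attach a target-tracking policy aiming for 70% consumed read capacity.
aws application-autoscaling put-scaling-policy \
  --service-namespace dynamodb \
  --resource-id "table/MyTable" \
  --scalable-dimension "dynamodb:table:ReadCapacityUnits" \
  --policy-name "MyTable-read-scaling" \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 70.0,
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
    }
  }'
```

Repeat the pair with dynamodb:table:WriteCapacityUnits and DynamoDBWriteCapacityUtilization if you want writes covered too.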

Using Amazon Rekognition to Identify Persons of Interest for Law Enforcement – It’s always nice to see law enforcement getting access to large-scale processing power. I can’t wait to see the followup post where they pair this with real-time cameras in public areas to usher in a dystopian future; I’m sure that will raise absolutely no civil liberties issues whatsoever. Setting the creepy aspects aside, see if you can spot the optimization opportunities in the workflow the blog post presents; I count no fewer than three myself.

Latency Distribution Graph in AWS X-Ray – The thinly disguised collection of Lambda functions in a trench coat that goes by the name “Randall Hunt” is back, with a demo of displaying latency distribution graphs in AWS X-Ray.

Amazon Aurora Introduces Database Cloning Capabilities – You can now clone your Aurora databases for reporting, development, A/B testing, and various other reasons, without the traditional lag of standing up new RDS instances or incurring storage charges. Let’s hope that Amazon’s foray into cloning goes better than the Sheep Incident did.
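Mechanically, a clone is a copy-on-write, point-in-time restore of the cluster. A minimal CLI sketch, assuming a hypothetical source cluster named prod-cluster:

```bash
# Clone the cluster; storage is shared copy-on-write with the source,
# so you only pay for pages that diverge afterward.
aws rds restore-db-cluster-to-point-in-time \
  --source-db-cluster-identifier "prod-cluster" \
  --db-cluster-identifier "prod-cluster-clone" \
  --restore-type copy-on-write \
  --use-latest-restorable-time

# The new cluster starts with zero instances; add one so you can connect.
aws rds create-db-instance \
  --db-cluster-identifier "prod-cluster-clone" \
  --db-instance-identifier "prod-cluster-clone-1" \
  --db-instance-class db.r4.large \
  --engine aurora
```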

Amazon Rekognition Now Available in AWS GovCloud (US) Region – Strap on your tinfoil hat; facial image recognition is now available in GovCloud. I’m sure nobody anywhere will jump to conclusions.

Tools

amicleaner lets you remove snapshots from deleted AMIs. It’s a swell housekeeping tool that lets you get rid of some of the cruft that accumulates in an AWS account.

It’s always fun to have a tool that will systematically blow away every last resource in your AWS account. Please don’t test this in production.

Pinterest has open sourced their EC2 inventory store. This nicely gets around AWS API rate limits when querying your fleet, retains information about terminated instances, and allows for more complex queries.

Tip of the Week

A friend ran into an EFS issue this week. Per AWS support, EFS filesystems have a baseline throughput determined by the size of the filesystem; once you exceed that baseline, you start burning through burst credits.

Since EFS is sized only by “the data you store on it,” you can force better performance by creating dummy files via something like dd if=/dev/zero of=/path/to/efs/dummyfile bs=1M count=1024. This is simultaneously useful to know and horrifyingly awful.
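If you’re determined to inflict this on an account anyway, a minimal sketch might look like the following; the mount point and amount of padding are hypothetical, so adjust for your own environment:

```bash
#!/usr/bin/env bash
# Sketch only: pad an EFS filesystem with dummy data so its size-based
# baseline throughput rises. Mount point and padding size are assumptions.
EFS_PAD_DIR="/mnt/efs/.throughput-padding"   # hypothetical EFS mount point
PAD_GIB=100                                  # number of 1 GiB dummy files

mkdir -p "$EFS_PAD_DIR"
for i in $(seq 1 "$PAD_GIB"); do
  # Each dd invocation writes one 1 GiB file of zeroes.
  dd if=/dev/zero of="$EFS_PAD_DIR/pad-$i" bs=1M count=1024
done
```

Bear in mind that EFS bills for every gigabyte you store, dummy data included, which is where the “horrifyingly awful” part comes in.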

…and that’s what happened Last Week in AWS.


Sign up for Last Week in AWS

Stay up to date on the latest AWS news, opinions, and tools, all lovingly sprinkled with a bit of snark.

"*" indicates required fields

This field is for validation purposes and should be left unchanged.
Sponsor Icon Footer

Sponsor a Newsletter Issue

Reach over 30,000 discerning engineers, managers, and enthusiasts who actually care about the state of Amazon’s cloud ecosystems.