One of the neat things about AWS is that it’s almost twenty years old. One of the unfortunate things about AWS is… that it’s almost twenty years old. If you’ve been using the platform for a while, it can be hard to notice the pace of change in the underlying “foundational” services. More worryingly, even if you’re not an old hand at AWS scrying, it’s still easy to stumble upon outdated blog posts that speak to the way things used to be, rather than the way they are now. I’ve gathered some of these evolutions that may help you out if you find yourself confused.

EC2

In EC2, you can now change security groups and IAM roles without shutting the instance down to do it. 
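
A rough boto3 sketch of both operations (all of the IDs and names here are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Replace the security groups on a running instance (the list is a full
# replacement, not a delta).
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    Groups=["sg-0123456789abcdef0"],
)

# Swap the instance profile (IAM role) without a stop/start; the association
# ID comes from describe_iam_instance_profile_associations.
ec2.replace_iam_instance_profile_association(
    AssociationId="iip-assoc-0123456789abcdef0",
    IamInstanceProfile={"Name": "my-new-instance-profile"},
)
```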

You can also resize EBS volumes while they’re in use, and attach or detach them from running instances.
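
For example (volume and instance IDs are made up; you still have to grow the filesystem inside the OS after resizing):

```python
import boto3

ec2 = boto3.client("ec2")

# Grow a volume, and optionally change its type, while it stays attached and in use.
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200, VolumeType="gp3")

# Hot-attach another volume to a running instance.
ec2.attach_volume(
    VolumeId="vol-0abcdef1234567890",
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```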

As of very recently, you can also force EC2 instances to stop or terminate without waiting for a clean shutdown or a ridiculous timeout, which is great for things you’re never going to spin back up. 
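
The long-standing forced-stop flag looks like this in boto3; the newer “skip the OS shutdown entirely” behavior is a separate option on stop and terminate, so check your SDK’s docs for the exact parameter (instance ID is made up):

```python
import boto3

ec2 = boto3.client("ec2")

# Force=True gives the instance no chance to flush filesystem caches or
# metadata; fine for something you're about to throw away anyway.
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Force=True)
```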

They also added the ability to live-migrate instances to other physical hosts; in practice, this means you see instance degradation notices far more rarely nowadays.

Similarly, instances have gone from an “expect this to disappear out from under you at any time” level of reliability to that being almost unheard of in the modern era.

Spot instances used to be much more of a bidding war / marketplace. These days the shifts are way more gradual, and you get to feel a little bit less like an investment banker watching the numbers move on your dashboards in realtime. 

You almost never need dedicated instances for anything. They haven’t been required for HIPAA BAAs in nearly a decade.

AMI Block Public Access is now the default for new accounts, and back in 2023 it was turned on for any existing account that hadn’t had a public AMI in the preceding 90 days.
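
If you want to check (or enforce) this per region, the EC2 API grew calls for it; a sketch, to the best of my recollection of the parameter values:

```python
import boto3

ec2 = boto3.client("ec2")

# See whether AMI Block Public Access is on in this region...
state = ec2.get_image_block_public_access_state()
print(state["ImageBlockPublicAccessState"])  # "block-new-sharing" or "unblocked"

# ...and turn it on if it isn't.
ec2.enable_image_block_public_access(
    ImageBlockPublicAccessState="block-new-sharing"
)
```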

S3

S3 isn’t eventually consistent anymore; it’s now strongly consistent, with read-after-write consistency for overwrites and deletes as well as for new objects (and list operations reflect changes immediately too).

You don’t have to randomize the first part of your object keys to ensure they get spread around and avoid hotspots. 

ACLs are discouraged and disabled by default on new buckets (Object Ownership now defaults to “bucket owner enforced”).

Block Public Access is now enabled by default on new buckets.

New objects are transparently encrypted at rest by default (SSE-S3), whether you ask for it or not.
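
A quick way to see all three of those defaults on a freshly created bucket (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-new-bucket"

# ACLs disabled: Object Ownership defaults to BucketOwnerEnforced.
print(s3.get_bucket_ownership_controls(Bucket=bucket)["OwnershipControls"]["Rules"])

# Block Public Access: all four settings default to True.
print(s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"])

# Default encryption: SSE-S3 (AES256) unless you've configured SSE-KMS.
print(s3.get_bucket_encryption(Bucket=bucket)["ServerSideEncryptionConfiguration"])
```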

Once upon a time Glacier was its own service that had nothing to do with S3. If you look closely (hi, billing data!) you can see vestiges of how this used to be, before the S3 team absorbed it as a series of storage classes. 

Similarly, there used to be truly horrifying restore fees for Glacier that were also very hard to predict. That got fixed early on, but the scary stories left scars to the point where I still encounter folks who think restores are both fiendishly expensive and confusing. They are not.

Glacier restores are also no longer painfully slow.
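
Kicking off a restore is a single call these days; a sketch with made-up bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Restore an archived object for three days; Standard and Bulk tiers are
# cheap, and Expedited exists if you're genuinely in a hurry.
s3.restore_object(
    Bucket="my-archive-bucket",
    Key="backups/2019-tax-data.tar.gz",
    RestoreRequest={
        "Days": 3,
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```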

Networking

Obviously EC2-Classic is gone, but that was a long time ago. One caveat that does come up a lot is that public IPv4 addresses are no longer free; they cost the same as Elastic IP addresses.

VPC peering used to be annoying; now there are better options like Transit Gateway, VPC sharing between accounts, resource sharing between accounts, and Cloud WAN. 

VPC Lattice exists as a way for things to talk to one another and basically ignore a bunch of AWS networking gotchas. So does Tailscale.

CloudFront isn’t networking, but it has lived in the AWS “networking” section for ages, so it lands here: updates used to take ~45 minutes to propagate, which was terrible. Nowadays it’s closer to 5 minutes, which still feels like 45 when you’re waiting for CloudFormation to finish a deployment.

ELB Classic (“classic” means “deprecated” in AWS land) used to charge for cross-AZ data transfer, on top of the load balancer’s “data has passed through me” fee, when sending to backends in a different Availability Zone.

ALBs, where cross-zone load balancing is always on, don’t charge additional data transfer fees for cross-AZ traffic, just their LCU fees. The same is now true for Classic Load Balancers, but be warned: Network Load Balancers still charge cross-AZ fees when you enable cross-zone load balancing!
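
For NLBs, cross-zone load balancing is the opt-in attribute that triggers those fees; roughly (the ARN is a placeholder):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Cross-zone load balancing is always on for ALBs; for NLBs it's opt-in,
# and turning it on is what incurs the cross-AZ data transfer charge.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-nlb/abc123",
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```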

Network Load Balancers didn’t use to support security groups, but they do now.

Availability Zone names used to be randomized between accounts (my us-east-1a was your us-east-1c); you can now use zone IDs, visible via Resource Access Manager or the EC2 API, to make sure you’re aligned across any given accounts.
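
The zone IDs are also right there in the EC2 API if you’d rather script it:

```python
import boto3

ec2 = boto3.client("ec2")

# ZoneId (e.g. "use1-az4") is consistent across accounts; ZoneName ("us-east-1a") is not.
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], "->", az["ZoneId"])
```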

Lambda

Originally Lambda had a five-minute timeout and didn’t support container images. Now you can run functions for up to 15 minutes, use Docker images, mount shared storage via EFS, give them up to 10GB of RAM (with CPU scaling accordingly and invisibly), and give /tmp up to 10GB of storage instead of just half a gig.
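
Cranking a function up to the current ceilings is one configuration call (the function name is a placeholder):

```python
import boto3

lam = boto3.client("lambda")

lam.update_function_configuration(
    FunctionName="my-function",
    Timeout=900,                        # seconds; 15-minute maximum
    MemorySize=10240,                   # MB; CPU scales along with it
    EphemeralStorage={"Size": 10240},   # MB of /tmp, up from the old 512
)
```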

Invoking a Lambda in a VPC is no longer dog-slow.

Lambda cold-starts are no longer as big of a problem as they were originally.

EFS

You no longer have to put a big pile of useless data on an EFS volume to get your IO allotment to something usable; you can adjust throughput separately from capacity now that they’ve added a second knob (provisioned throughput, and more recently elastic throughput).
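
That second knob, roughly, in boto3 (the filesystem ID and numbers are made up):

```python
import boto3

efs = boto3.client("efs")

# Decouple throughput from stored capacity: provision a fixed rate...
efs.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=256,
)

# ...or let it scale on its own with the newer elastic mode:
# efs.update_file_system(FileSystemId="fs-0123456789abcdef0", ThroughputMode="elastic")
```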

EBS

You get full performance on new EBS volumes that are empty. If you create an EBS volume from a snapshot, you’ll want to read the entire disk with dd or similar, because it lazy-loads snapshot data from S3 and the first read of each block will be very slow. If you’re in a hurry, there are more expensive and complicated options like Fast Snapshot Restore.
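
Fast Snapshot Restore is the main “pay money to skip the lazy-loading penalty” option; a sketch with made-up IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Volumes created from this snapshot in these AZs come fully initialized,
# for an hourly fee per snapshot per AZ.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],
)
```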

EBS volumes can be attached to multiple EC2 instances at the same time (io1 or io2 with Multi-Attach enabled), but you almost certainly don’t want to do this.

DynamoDB

You can now have empty string values in an item’s non-key attributes (the newsletter publication system for “Last Week in AWS” STILL uses a literal value of “empty” because it predates that change).
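
For instance (table and attribute names are made up):

```python
import boto3

table = boto3.resource("dynamodb").Table("newsletter-issues")

# Empty strings in non-key attributes have been legal since 2020.
table.put_item(Item={"issue_id": "123", "sponsor_copy": ""})
```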

Performance has gotten a lot more reliable, to the point where you don’t need to use support-only tools locked behind NDAs to see what your hot key problems look like. 
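
The public-facing tool for that these days is CloudWatch Contributor Insights for DynamoDB, which you can flip on per table; a sketch with a made-up table name:

```python
import boto3

ddb = boto3.client("dynamodb")

# Surfaces the most-accessed and most-throttled keys, no NDA required.
ddb.update_contributor_insights(
    TableName="newsletter-issues",
    ContributorInsightsAction="ENABLE",
)
```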

With the pricing changes over the years, you almost certainly want to run everything in on-demand capacity mode unless you’re in a very particular space.
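
Switching an existing table over is a one-liner (the table name is a placeholder):

```python
import boto3

ddb = boto3.client("dynamodb")

# Move from provisioned capacity to on-demand.
ddb.update_table(TableName="newsletter-issues", BillingMode="PAY_PER_REQUEST")
```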

Cost Savings Vehicles

Reserved Instances are going away for EC2, slowly but surely. Savings Plans are the path forward. The savings rates on the two have diverged, to the point where Savings Plans no longer offer as deep a discount as RIs once did, which is offset by their additional flexibility. Pay attention!

EC2 charges by the second now (with a one-minute minimum), so spinning an instance up for five minutes over and over again no longer costs you an hour each time.

Cost Anomaly Detection has gotten very good at flagging sudden changes in spend patterns. It is free.
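
Setting up a basic per-service monitor looks something like this (the monitor name is a placeholder; pair it with create_anomaly_subscription so it can actually tell someone):

```python
import boto3

ce = boto3.client("ce")

# Watch each AWS service's spend for anomalies.
ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "service-spend-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)
```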

The Compute Optimizer also does EBS volumes and other things. Its recommendations are trustworthy, unlike “Trusted” Advisor’s various suggestions. 
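
Pulling its EBS recommendations programmatically looks roughly like this, if memory serves on the response shape (you need to have opted in to Compute Optimizer first):

```python
import boto3

co = boto3.client("compute-optimizer")

# Right-sizing recommendations for EBS volumes in the account.
response = co.get_ebs_volume_recommendations()
for rec in response.get("volumeRecommendations", []):
    print(rec)
```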

The Trusted Advisor recommendations remain sketchy and self-contradictory at best, though some of their cost checks can now route through Compute Optimizer.

Authentication

IAM roles are where permissions should live. IAM users are strictly for legacy applications rather than humans. The IAM Identity Center is the replacement for “AWS SSO” and it’s how humans should engage with their AWS accounts. This does cause some friction at times.

You can have multiple MFA devices configured for the root account. 

You also do not need to have root credentials configured for organization member accounts.

Miscellaneous

us-east-1 is no longer a merrily burning dumpster fire of sadness and regret. This is further true across the board; things are a lot more durable these days, to the point where outages are noteworthy rather than “it’s another given Tuesday afternoon.”

While deprecations remain rare, they’re definitely on the rise; if an AWS service sounds relatively niche or goofy, consider your exodus plan before building atop it. None of the services mentioned thus far qualify. 

CloudWatch no longer shows the most recent datapoint as artificially low while data is still trickling in, so if your graphs suddenly drop to zero for the last datapoint, your app just shit itself.

You can close AWS accounts in your organization from the management account rather than having to log into each member account as its root user.
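
From the management account, that’s a single Organizations call (the account ID is made up):

```python
import boto3

orgs = boto3.client("organizations")

# No member-account root login required.
orgs.close_account(AccountId="111122223333")
```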

Thanks

My thanks to folks on LinkedIn and Bluesky for helping me come up with some of these. You’ve lived the same pain I have.