There’s a joke that I’ve always been partial to: a software engineering type rubs a lamp and a genie appears. The genie says he’ll grant the engineer $1 billion, but only if they can spend $100 million of it in a single month, subject to three rules. “You can’t gift it away. You can’t gamble with it. And you can’t throw it away.” The software engineer asks, “Well, can I use AWS?” The genie replies, “Okay, there are four rules.”

So began a Reddit thread last week, and it raises a very interesting question, given how abysmal AWS’s free tier is: starting from zero, is it possible to spend $100 million in a month in an AWS account without increasing a service quota or otherwise talking to AWS about what you’re up to?

I’ve added some restrictions as well: it’s cheating to just go out and buy a bunch of 3-year RIs or Savings Plans; that’d be a “one and done” hack that wouldn’t make for a very interesting post. The actual dollar figure you’ll be able to hit varies wildly depending upon a number of factors (and let’s be serious; I’m not fool enough to try this in my own account), but it’s comfortably in excess of the genie’s requirements.

Let’s be clear: I wouldn’t expect this to actually work. AWS is likely to have some alarms set to trigger when a brand new account begins tracking towards being what is almost certainly their largest customer within a matter of days; beyond that, they’ve got a bunch of hard limits that my solution almost certainly smacks directly into. Take this with a grain of salt–or at least, with somebody else’s AWS account instead of your own.

Let’s burn some money.

Note that AWS advertises 81 availability zones across 25 regions. We’ll knock 4 regions and 12 AZs off that list, because you need to talk to AWS to get access to GovCloud or the mainland China regions. That leaves us with 21 regions and 69 availability zones. We’ll also assume 720 hours in a month.
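For the napkin math that follows, those counts boil down to a handful of constants:

```python
# The counts above, boiled down to the constants the rest of the napkin math leans on.
ADVERTISED_REGIONS = 25
ADVERTISED_AZS = 81
LOCKED_REGIONS = 4      # GovCloud + mainland China: requires talking to AWS
LOCKED_AZS = 12         # the AZs belonging to those regions

REGIONS = ADVERTISED_REGIONS - LOCKED_REGIONS   # 21 usable regions
AZS = ADVERTISED_AZS - LOCKED_AZS               # 69 usable AZs
HOURS_PER_MONTH = 720                           # a 30-day month

print(REGIONS, AZS, HOURS_PER_MONTH)            # 21 69 720
```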

We’ll start by misconfiguring some things that max out the AWS service quotas and make everything else we do that much more expensive. We’re allowed up to 5 NAT gateways per availability zone, which are expensive in their own right, but passing everything else we do through them adds a 4.5¢ data processing surcharge per GB, and we’ll be doing a lot of data transfer very shortly. They also cost 4.5¢ per hour in us-east-1 (and significantly more in other regions), which lands us somewhere around $13K a month so far. We’ll also turn on 5 CloudTrail trails per region out of spite, though it’s hard to pin down exactly what those will cost.
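If you want to check my arithmetic on the hourly piece, here’s the back-of-the-envelope version; the data processing charges show up later, once we actually start moving data:

```python
# NAT gateway hourly charges alone, ignoring data processing for now.
# 4.5¢/hour is the us-east-1 rate; most other regions charge more,
# which is how this floor drifts toward the ~$13K figure above.
NAT_GATEWAYS_PER_AZ = 5     # default service quota
AZS = 69
HOURLY_RATE = 0.045         # $/hour per NAT gateway, us-east-1
HOURS_PER_MONTH = 720

gateways = NAT_GATEWAYS_PER_AZ * AZS                        # 345 gateways
monthly = gateways * HOURLY_RATE * HOURS_PER_MONTH
print(f"${monthly:,.0f}/month in hourly charges alone")     # $11,178
```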

Interface endpoints for VPCs start at 1¢ per hour; you’re limited to 5 VPCs per region and 50 endpoints per VPC. That’s 250 endpoints per region at 1¢ per hour each, across 21 regions, which adds roughly another $40K a month for those.
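Same napkin, different line item (interface endpoints technically bill for each AZ they touch, so treat this as a floor):

```python
# Interface endpoint hourly charges at the 1¢/hour starting rate.
VPCS_PER_REGION = 5          # default quota
ENDPOINTS_PER_VPC = 50       # default quota
REGIONS = 21
HOURLY_RATE = 0.01           # $/hour per endpoint (per AZ, in practice)
HOURS_PER_MONTH = 720

endpoints = VPCS_PER_REGION * ENDPOINTS_PER_VPC * REGIONS   # 5,250 endpoints
monthly = endpoints * HOURLY_RATE * HOURS_PER_MONTH
print(f"${monthly:,.0f}/month")                              # $37,800
```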

DynamoDB offers 80K read capacity units and 80K write capacity units of provisioned capacity per account, so no region magic here past setting up two tables with 40K each in different regions and then replicating between them. The write provisioning will cost $56,160 for the month and the read provisioning another $11,232.
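Where do those two figures come from? Working backwards from the totals gives you the implied per-unit-hour rates; the pricier write rate tracks with global table replicated writes costing more than plain provisioned writes:

```python
# Reverse-engineering the per-unit-hour rates implied by the totals above.
WCU = 80_000                 # write capacity units, account-wide
RCU = 80_000                 # read capacity units, account-wide
HOURS_PER_MONTH = 720

WRITE_TOTAL = 56_160         # $ for the month
READ_TOTAL = 11_232          # $ for the month

print(f"${WRITE_TOTAL / (WCU * HOURS_PER_MONTH):.6f} per WCU-hour")  # $0.000975
print(f"${READ_TOTAL / (RCU * HOURS_PER_MONTH):.6f} per RCU-hour")   # $0.000195
```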

We’ll also spin up as many EC2 instances as we can; we’re limited by default to 5 running instances per region. We’ll pick the beefiest instances available with 100Gbps networking: the m5dn.24xlarge in this case. Assume we’ll only be able to run four of these at a time; their cost is inconsequential, as you’re about to discover.
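Just how inconsequential? Assuming an on-demand rate somewhere around $6.50 an hour for an m5dn.24xlarge (that rate is my guess; check current pricing), the instances barely register against a nine-figure target:

```python
# The EC2 instances are a rounding error next to what's coming.
# The ~$6.50/hour on-demand rate for m5dn.24xlarge is an assumption on my part.
ASSUMED_HOURLY_RATE = 6.50   # $/hour, assumed on-demand price
HOURS_PER_MONTH = 720
INSTANCES = 4

per_set = INSTANCES * ASSUMED_HOURLY_RATE * HOURS_PER_MONTH
print(f"${per_set:,.0f}/month for four instances")          # ~$18.7K
print(f"${per_set * 21:,.0f}/month for four per region")    # ~$393K, still noise
```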

Let’s be incredibly conservative and assume that this gets us to maybe $100K. Seems like we’re short of our goal by a thousandfold. It’s not looking good for our software engineer, is it?

Now let’s burn the rest of it.

And here’s where we turn on the magic of S3. We start with 4 buckets in each region, configure those buckets to store everything in the S3 Infrequent Access storage class, and enable versioning. We further set up every bucket to replicate to a set of buckets in the São Paulo region, which in turn replicates outward to another region; that replication traffic out of São Paulo costs $0.138 per GB.

Assuming four instances that are able to speak to S3 at close to line rate (across multiple buckets or endpoints, potentially, to avoid throttling), that’s 12.5GB per second. Every second, then, we’re spending 56¢ in NAT gateway data processing charges, $1.72 in replication charges out of São Paulo, at least another 25¢ (it varies) getting the data replicated into São Paulo in the first place, plus storage charges that we’ll get to in a minute. Over our 720-hour month, that comes to the princely sum of $6,557,760. Seems a bit short, right? That’s per region. Add in the other 20 regions and we’re at $137,712,960 for the month.

But we’re not done yet. We also haven’t acknowledged the uncomfortable truth that you can set up circular replication through all of your buckets, back and forth through São Paulo; it’s very hard to determine exactly how quickly that racks up costs, if only because I’ve never tried to replicate a geometrically growing dataset across S3 regions as fast as possible.
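For those keeping score at home, here’s the data movement math spelled out; the into-São Paulo rate is the floor implied by the 25¢-per-second figure, and the totals land within rounding distance of the numbers above:

```python
# Per-region burn rate from data movement alone, using the per-GB rates above.
THROUGHPUT_GB_PER_SEC = 12.5      # sustained traffic per region
NAT_PROCESSING = 0.045            # $/GB through the NAT gateways
REPL_OUT_OF_SAO_PAULO = 0.138     # $/GB replicating out of São Paulo
REPL_INTO_SAO_PAULO = 0.02        # $/GB into São Paulo (floor implied by 25¢/second)
SECONDS_PER_MONTH = 720 * 3600    # 2,592,000 seconds
REGIONS = 21

per_second = THROUGHPUT_GB_PER_SEC * (
    NAT_PROCESSING + REPL_OUT_OF_SAO_PAULO + REPL_INTO_SAO_PAULO
)
per_region = per_second * SECONDS_PER_MONTH
print(f"${per_second:.2f}/second per region")        # ~$2.54
print(f"${per_region:,.0f}/month per region")        # ~$6.58M
print(f"${per_region * REGIONS:,.0f}/month total")   # ~$138M
```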

Enter the storage. After a month of this behavior, we’re storing 680PB in S3 across the board. That’s a minimum of an additional $15,430,355 when stored in Infrequent Access, and there’s still more that’s about to blow this out.
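The 680PB is just the month’s replication traffic piling up:

```python
# Where 680PB comes from: a month of 12.5GB/second in each of 21 regions.
THROUGHPUT_GB_PER_SEC = 12.5
SECONDS_PER_MONTH = 720 * 3600
REGIONS = 21

total_gb = THROUGHPUT_GB_PER_SEC * SECONDS_PER_MONTH * REGIONS
print(f"{total_gb / 1_000_000:,.1f} PB stored by month's end")   # ~680.4 PB
```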

Enter AWS Lambda. We’ll set up a fleet of Lambdas to constantly transition objects back and forth between S3’s Infrequent Access and One Zone-Infrequent Access storage classes. One of the caveats of these storage classes is that every object is billed for a minimum of 30 days of storage. By “toggling” objects back and forth, we blow the S3 bill further into the stratosphere: by the end of the month, every transition carries a minimum cost of $15.5 million or so. Even doing this once an hour on the 30th day of the month still runs up a bill in excess of $336 million.
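Napkin math on the toggling, using my own rough per-transition figure from above; shave that figure down however you like and it stays comfortably north of nine figures:

```python
# Hourly storage-class toggles on day 30, each one re-triggering the
# 30-day minimum storage charge across the whole ~680PB dataset.
# The $15.5M per-transition figure is the rough end-of-month number above.
PER_TRANSITION_MINIMUM = 15_500_000   # $, order-of-magnitude estimate
TRANSITIONS_ON_DAY_30 = 24            # once an hour

print(f"${PER_TRANSITION_MINIMUM * TRANSITIONS_ON_DAY_30:,.0f}")   # $372,000,000
```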

It almost seems unsporting to enable things like AWS’s Enterprise Support, Macie on 5TB of S3 data per region, a bunch of Config rules, some arbitrary S3 Object Lambdas, and the like, but I’d do it anyway just to prove the point.

Caveats

We may not quite be able to do things like “get data transfers to S3 up to line rate,” there may well be undocumented shutoff limits specifically designed to avoid surprising someone with a half-billion-dollar bill at month-end, and we might in fact saturate some cross-region links, which could slow us down (hence the significant overshooting).

But I’ll point out that all of this is doable, or close to doable, within AWS’s “Free Tier,” without ever speaking to anyone from AWS directly.

Doesn’t that make you feel warm and cozy? Perhaps it’s time to reconsider how the free tier works.