I’ve periodically remarked that AWS has so many EC2 instance families that they’ve declared war on the English alphabet.
Today, they’ve announced a new feature that shows they’re committed to winning that war.
I mean, think about it: With as many instance families as AWS has to offer, how do you know you’re on the right instance size and family for your workloads? You almost certainly don’t, but AWS would like you to buy a three-year RI for it anyway. Since Savings Plans don’t care which EC2 instances you use, switching instance families is a lot more practical for an awful lot of environments.
First, a bit of background. I’ve said previously that rightsizing your instances is no global panacea: many application workloads can’t sustain the disruption, and in those situations an awful lot of manual work is required.
I stand by that assessment. But it’s good to know what instances your workloads should be running on. With roughly 200 options in a single region, it’s a safe bet you got it wrong.
I’ve been bearish on AI/ML for a long time. “These things are the future!” AWS proclaimed.
My counterpoint: If that’s so, how about demonstrating it on the bounded problem space of your own bills?
Today, they released something that shuts me right up on that score.
Introducing Compute Optimizer
Compute Optimizer, which you can access via the AWS Console, analyzes your workloads and spits out the recommended instances your workloads should be running on. Yes, really, it is that simple—and something we should have had years ago.
At the time of this writing, Compute Optimizer supports M, C, R, T, and X families, and their AMD variants where applicable—which means that if this sees widespread adoption, I predict that Intel’s AWS market share is going to take a plunge.
Curiously, it doesn’t currently support GPU-based instances, the so-inexpensive-they-might-be-stolen “I” family, or the custom ARM “A” family, along with a host of miscellaneous, esoteric instance types I could name, but frankly, I might just be making them up given how little I see them in the wild. That said, this new tool covers the lion’s share of worldwide EC2 instance usage by my math.
Once enabled, Compute Optimizer presumably performs an initial analysis. You’ll have to wait 12 hours before it starts making recommendations, which are based on your existing CloudWatch metrics.
But note that it needs at least 30 hours of metric data before it can make any recommendations. You’d do well to ensure that those 30 hours are representative: “Well, this is the baseline load for the past couple of days, let’s resize,” while entirely ignoring that our Super Bowl commercial airs tomorrow, is definitely a bad idea.
Note as well that this integrates with AWS Organizations, so you won’t have to run this separately in every linked account. (My god, could you imagine how much work that would be?!)
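If you’d rather opt in from the command line than click through the console, the AWS CLI exposes this under the `compute-optimizer` namespace. A minimal sketch (flags as of the CLI version available at launch; check `aws compute-optimizer help` for your version):

```shell
# Opt the current account in to Compute Optimizer; the optional flag
# also opts in every member account of your AWS Organization.
aws compute-optimizer update-enrollment-status \
    --status Active \
    --include-member-accounts

# Confirm enrollment status; recommendations take roughly 12 hours
# to start appearing after you opt in.
aws compute-optimizer get-enrollment-status
```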
One other point to note: It works not only on standalone EC2 instances but also on Auto Scaling Groups, provided their membership consists of supported instance families. In short, “however you’re using EC2, you’re probably covered.”
There’s one catch, though: Only hypervisor metrics are taken into consideration by default, which excludes memory utilization.
If you want host-level metrics such as memory, you’ll need to go through the somewhat byzantine process of installing the CloudWatch agent on the nodes you want to analyze. Skip this step, and Compute Optimizer won’t recommend instances with less RAM than what you’re currently using.
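For the curious, the memory metric the agent ships is `mem_used_percent`. A minimal agent configuration fragment that collects it looks something like this (field names per the CloudWatch agent’s configuration schema; a real config would likely collect more than just memory):

```json
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"]
      }
    }
  }
}
```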
After Compute Optimizer completes its analysis, it’ll give you up to three recommended courses of action and show you how each recommendation would have handled your workloads, based on your historical usage data.
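Those recommendations are also retrievable from the CLI, if dashboards aren’t your thing. A sketch (the instance ARN below is a made-up placeholder, not a real resource):

```shell
# Pull recommendations for one specific instance; drop --instance-arns
# to list recommendations for everything in the account.
aws compute-optimizer get-ec2-instance-recommendations \
    --instance-arns "arn:aws:ec2:us-east-1:111122223333:instance/i-0abcd1234example"
```

Each entry in the response includes the recommended instance types along with a finding classification, so you can see at a glance whether a given workload is over-provisioned, under-provisioned, or already sized correctly.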
Is it a global solution that enables instance rightsizing for everyone? Far from it.
But it’s better than what we’ve had to work with so far, it’s free, and it’s an indicator that AWS is finally starting to turn its AI/ML focus toward the most intractable of problems: the AWS bill.