A recurring theme has arisen recently: I find myself wishing for a version of an AWS service that charged me less in return for being actively worse at what it did.

This isn’t me hoping for a Chaos Engineering region that breaks intentionally so I can shore up my application’s durability.

Instead, it’s me wishing I could make tradeoffs in my architecture or application design that worsen my experience of a service in exchange for a smaller bill.

What the hell are you talking about?

Let’s say that I believe Serverless is a hype-driven fad designed to boost dependence upon a cloud vendor. (It isn’t; it’s rather a hype-driven fad designed to let the cloud vendor play “musical chairs” with their equipment without me ever knowing.) I’m going to go all-in on EC2.

When it comes to selecting an instance, I have options upon options. I can pay for a lot of CPUs or a few CPUs. I can pay for lots of RAM or next to no RAM at all. I can pay for lots of disk or little disk—and then decide just how fast or slow I want that disk to be. I can select an instance that offers blazing-fast GPUs, or I can remember that AI/ML is a gold rush and AWS is selling me a pickaxe and thoughtfully decline the generous offer.

Suffice it to say, there are an awful lot of knobs and dials I can turn that affect both how my workload performs and how its economics shape up.
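To put a finer point on it, here’s a minimal sketch (boto3, with an arbitrary region and an equally arbitrary 2 vCPU / 4 GiB target) of browsing that catalog of options programmatically. The point isn’t the specific numbers; it’s that the choice is mine to make.

```python
# A minimal sketch: list current-generation instance types that match a
# deliberately modest CPU/memory target, so I can shop the cheaper end of
# the catalog. The region and the 2 vCPU / 4 GiB target are illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

pages = ec2.get_paginator("describe_instance_types").paginate(
    Filters=[
        {"Name": "current-generation", "Values": ["true"]},
        {"Name": "vcpu-info.default-vcpus", "Values": ["2"]},
        {"Name": "memory-info.size-in-mib", "Values": ["4096"]},
    ]
)

for page in pages:
    for itype in page["InstanceTypes"]:
        print(
            itype["InstanceType"],
            itype["VCpuInfo"]["DefaultVCpus"],
            itype["MemoryInfo"]["SizeInMiB"],
        )
```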

But then we get to data transfer.

Look, I’ve been deep in the weeds of AWS data transfer pricing a few times; it’s byzantine and expensive. One thing I haven’t spoken much about is that it’s also a modern miracle. “Full line rate to all instances without blowing up the top-of-rack switches” is unheard of in data center environments, and an entire category of problems around instance affinity and careful topology planning is suddenly a complete non-issue.

I can get multi-gigabit speeds within an AZ and amazing network performance. This is TERRIFIC for my network-bound applications that are sensitive to latency and jitter.

But what if my requirements relax to “I want this data over here to wind up over there, preferably by next Tuesday”? There’s no way to opt out of the amazing AWS network; I’m stuck accepting that top-tier performance and its Rube Goldberg pricing mechanism.

I wish I weren’t.
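The closest thing on offer today is throttling myself client-side. Here’s a hedged sketch using boto3’s S3 transfer configuration (the bucket, key, and file name are made up): it makes the transfer slower, but the per-gigabyte data transfer rate doesn’t drop a cent.

```python
# Throttling myself client-side: boto3's transfer layer can cap bandwidth,
# but pulling this object out of AWS is billed at the same per-GB data
# transfer rate whether it takes a minute or all week.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Cap the download at roughly 1 MiB/s (the value is in bytes per second).
slow_and_steady = TransferConfig(max_bandwidth=1024 * 1024)

s3.download_file(
    "example-source-bucket",
    "exports/quarterly-archive.tar.gz",
    "quarterly-archive.tar.gz",
    Config=slow_and_steady,
)
```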

There is precedent

Recently, EFS announced a One Zone storage class that trades multi-AZ durability for a lower price. (Note that the SLA remains the same for one-zone data storage services; disasters are clearly not factored into the SLA calculation. This may come as a surprise to some of you…) It’s now less expensive to store data on the service, at the cost of being less durable. For many workloads, that’s just fine.
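Opting into that tradeoff is a single parameter at creation time. A minimal sketch with boto3, where the AZ and creation token are placeholders:

```python
# Pin a new EFS file system to a single Availability Zone via
# AvailabilityZoneName: cheaper storage, single-AZ durability.
import boto3

efs = boto3.client("efs", region_name="us-east-1")

response = efs.create_file_system(
    CreationToken="one-zone-example",
    AvailabilityZoneName="us-east-1a",  # One Zone: the "objectively worse, but less expensive" option
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

print(response["FileSystemId"], response.get("AvailabilityZoneName"))
```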

The same is true (and has been for a while) of S3’s One Zone-Infrequent Access storage class.
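The S3 flavor of the same decision is just a storage class on the write; another minimal sketch, with a made-up bucket and key:

```python
# Store an object in One Zone-IA: pay less, accept single-AZ durability.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-logs-bucket",
    Key="2021/04/batch-report.csv",
    Body=b"timestamp,requests\n2021-04-01,1234\n",
    StorageClass="ONEZONE_IA",
)
```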

This feels like it’s working backwards

Most AWS releases are about enhancements: better durability and new capabilities, which often come with additional cost considerations. It’s a rare service that steps forward, as EFS recently did, and says “hey, we made something objectively worse, but less expensive.”

For an awful lot of use cases, this is exactly what I want as a customer. It’s the kind of thing that no customer outright says during research meetings; they’ll instead complain about the pricing of a service or they’ll ignore the extreme capability of the service because that doesn’t directly benefit their use case.

But it’s the undercurrent behind an awful lot of what previously looked like just a bunch of whining about how expensive the cloud is. I think that you underestimate the value of that feedback when you dismiss it as mere price sensitivity.

I do want to point out that this is true only along certain axes. It’s not universal, and in some cases, you’ll actively alienate customers if you degrade their experience without warning—and then they won’t be your customers anymore.

This is an option that customers should have to actively seek out. Otherwise, you’re simply rebuilding IBM Cloud.