The biggest mistakes we ever find in software were written by the worst developer imaginable: the past version of ourselves.

“If I’d known then what I know now, I’d have built that thing completely differently” is a common refrain, and not just for individual coders.

So it goes for cloud providers, who are forced to live with the consequences of their past development decisions as well. Hindsight makes it easy to draw conclusions from data and usage patterns that simply weren’t visible when those products were built. Customer uptake might not have been what the provider hoped for, or customers might have used things in ways that didn’t align with expectations.

The question for cloud providers then becomes: “What do we do with the products we already built?”

There are two obvious paths, one blazed by Google Cloud and the other by AWS.

1. The Google Cloud approach

As captured in Steve Yegge’s infamous rant, Google has an internal deprecation culture whose pain is hidden from most of its own developers by rigorous automation, automation that doesn’t exist for external customers. The Killed By Google meme wouldn’t get nearly the traction that it does if it weren’t painfully accurate.

Google recently attempted to address this with the announcement of Enterprise APIs that offer a firmer basis for deprecation timelines. There are two problems here that I can see.

The first is that not everything under the Google Cloud umbrella of services is included. That means that customers who care about not having the rug yanked out from under them need to reference a list religiously to make sure that they’re not one “exciting” email away from having to rebuild something important.

The second is that Googlers somehow seem to expect praise from the broader internet for doing something they should have been doing all along. You don’t get points for stopping bad behavior; you simply stop digging the hole deeper. Enterprise APIs are a welcome change, to be sure, but Google should never have let things get to the point where they were necessary.

Despite what Google wishes were true, every single Google Cloud customer I’ve ever spoken with has brought up the Google graveyard when we talk about their choice of cloud providers. Effectively no one who isn’t a current or former Googler disputes the charge of abrupt product retirements when it comes up.

2. The AWS approach

Meanwhile, AWS has the same contractual terms around deprecations as Google Cloud does, except that nobody ever really cares about their specifics. Over 15 years of history have demonstrated that things generally don’t get deprecated.

The narrative that AWS “doesn’t deprecate things” isn’t strictly true, but it’s close enough to accurate that you could be forgiven for assuming it was. For example, AWS recently announced the retirement of EC2-Classic, which demonstrates a couple of things.

First, AWS confuses the heck out of customers when it does deprecate. With EC2-Classic, they’re deprecating a networking mode for EC2 that you’ve had to explicitly request for your account since the end of 2013. However, the narrative in a few places is more akin to what you’d expect if they were deprecating all of EC2 (which would be fantastically foolish of them).
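
If you’re wondering whether the EC2-Classic retirement even applies to you, the account attributes API will tell you. Here’s a minimal sketch using boto3, assuming credentials and a default region are already configured; the printed messages are mine, not AWS’s:

```python
# Check whether this account still supports the EC2-Classic platform.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

ec2 = boto3.client("ec2")

resp = ec2.describe_account_attributes(AttributeNames=["supported-platforms"])
platforms = [
    value["AttributeValue"]
    for attr in resp["AccountAttributes"]
    for value in attr["AttributeValues"]
]

if "EC2" in platforms:
    print("This account still supports EC2-Classic; the retirement applies to you.")
else:
    print("This account is VPC-only; the EC2-Classic retirement is a non-event.")
```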

Second, AWS really, really doesn’t like to retire things. It’s so averse to actual deprecation that the current strategy is more or less to force a reboot of all EC2-Classic instances. When those instances come back up, they’ll be in a specially created “default” VPC that behaves the same way EC2-Classic does, through the same APIs. Note that this isn’t actually happening for some time yet.
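
You can already see what that landing spot looks like: VPC-only accounts come with a default VPC in each region, and a deleted one can be recreated. A minimal boto3 sketch, again assuming configured credentials:

```python
# Look up the default VPC in the current region.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

ec2 = boto3.client("ec2")

resp = ec2.describe_vpcs(Filters=[{"Name": "isDefault", "Values": ["true"]}])
vpcs = resp["Vpcs"]

if vpcs:
    print(f"Default VPC: {vpcs[0]['VpcId']} ({vpcs[0]['CidrBlock']})")
else:
    # A deleted default VPC can be recreated with the CreateDefaultVpc API.
    print("No default VPC in this region.")
```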

The benefit to AWS’ overall approach is that something you were doing in 2008 will still work the same way today, via the same API calls. The drawback to AWS’ approach is that something you were doing in 2008 will still work the same way today, via the same API calls. While this does lead to cruft and is antithetical to “proper engineering,” it’s hard to deny that it aligns rather well with the approach that most companies take to their environments.
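
To make that concrete: DescribeInstances is one of the oldest actions in the EC2 API, and the same call still works unchanged through a modern SDK. A minimal boto3 sketch, assuming credentials and a region are configured:

```python
# DescribeInstances dates back to EC2's earliest public API and still
# works the same way through a modern SDK.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

ec2 = boto3.client("ec2")

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```

The fact that results still come back grouped by “reservations,” a concept most people launching instances today have never had to think about, is exactly the kind of cruft that stability guarantee buys you.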

There’s a better way to approach cloud changes

The Google Cloud approach leads to a constant feeling of being on a treadmill. Your plans are subject to disruption as you suddenly have to shift people around to reimplement something before the platform it’s built on stops working.

The AWS approach clearly builds a solid base of customer trust, but it also leads to vast service sprawl. New customers find themselves lost. Longtime customers don’t feel the need to “keep up,” as everything they build will continue working until the earth crashes into the sun.

At this point, I’m not really sure which path is “better.” The challenge for cloud providers is to find a third path that avoids both Google Cloud’s and AWS’ pitfalls. Both existing options force customers to rigorously check the dates on various blog posts to make sure they’re not following an obsolete best practice. If you’re a business that isn’t serving your customers well, what are you doing?