Amazon's Fastest-Shipping Product Is Now Blog Posts Correcting the Financial Times
I've been watching AWS long enough to develop a feel for when a company's communications shift from "informing" to "coping." We crossed that line somewhere around February 20th, when Amazon published a blog post on aboutamazon.com titled "Correcting the Financial Times report about AWS, Kiro, and AI." Three weeks later (March 11th) they published another one: "Correcting the Financial Times report about recent Amazon.com service incidents and AI."
Two "Correcting the Financial Times" posts in three weeks. That's a faster release cadence than most AWS services manage.
The First One
In December, AWS's Kiro (their AI coding assistant, launched last July to great fanfare and approximately seven active users, thanks to their own capacity shortfalls) executed a CloudFormation teardown-and-replace in a production environment. This took down Cost Explorer in their mainland China partition. The Financial Times reported on it. Amazon's official response: "The brief service interruption they reported on was the result of user error—specifically misconfigured access controls—not AI as the story claims." They also tried to play it off as "one of 39 regions" instead of "Cost Explorer for an entire partition," for no clear reason.
Translation: the AI did exactly what it was told to do, the human just shouldn't have told it to do that. Also the human shouldn't have had the permissions to tell it to do that. Also none of this is the AI's fault. Also why are you even asking about AI?
I covered this in The Register last month. The short version: Amazon chose to torch its own engineers' reputations rather than admit its AI tool might have a role in a production incident. The proposed fix—mandatory peer reviews for AI-generated production changes—requires the very humans Amazon has been laying off by the thousands. It's the corporate equivalent of firing the lifeguards and then blaming the swimmers for drowning.
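For what it's worth, CloudFormation already ships a guardrail aimed at exactly this failure mode: stack policies, which can deny replace and delete operations on a production stack no matter who (or what) submits the update. Here's a sketch of one—my illustration, not anything Amazon has said it deployed—which makes "mandatory peer review" a curious first resort:

```python
import json

def deny_replace_stack_policy() -> str:
    """Stack policy denying Update:Replace (teardown-and-replace) and
    Update:Delete on every resource in the stack. Hypothetical example;
    not Amazon's stated remediation."""
    return json.dumps({
        "Statement": [{
            "Effect": "Deny",
            "Action": ["Update:Replace", "Update:Delete"],
            "Principal": "*",
            "Resource": "*",
        }]
    })

# Applied with something like:
#   aws cloudformation set-stack-policy --stack-name <your-stack> \
#     --stack-policy-body "$(python make_policy.py)"
print(deny_replace_stack_policy())
```

A stack policy is a blunt instrument—you have to explicitly override it to do legitimate replacements—but blunt instruments are precisely what you want between an AI tool and a production partition.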
The Second One
Three weeks pass. A series of outages hit Amazon.com—the retail site, not AWS—over the course of a single week. Supposedly, anyway; I was on vacation hiking the Appalachian Trail, where I could not give one solitary toot about what Amazon was up to. God, it was glorious. But yes, there were apparently multiple incidents varying in severity. The Financial Times reports that AI-written code was involved.
Amazon publishes another blog post. This time they want you to know that "only one of the recent incidents involved AI tools in any way, and in that case the cause was unrelated to AI." The single AI-adjacent incident? An engineer followed inaccurate advice from an AI tool that had ingested outdated internal documentation. None—Amazon is very clear about this—involved "AI-written code."
So the AI didn't write bad code. It just read stale docs and gave an engineer bad advice that the engineer followed, which then broke production because there weren't enough safeguards to prevent one person acting on bad information from cascading broadly.
This is… not the defense Amazon thinks it is.
The Pattern
Here's what I find genuinely fascinating. When AWS has an outage—a normal, boring, human-caused outage—you get a terse entry on the Service Health Dashboard and, if it's bad enough, a post-incident summary. That's it. Maybe a COE if you're lucky. AWS has never, to my recollection, published a blog post demanding the Financial Times correct its coverage of a routine service disruption. They own their failures, and they do it well. They're legitimately excellent at this.
But the moment someone suggests AI was involved? Two blog posts in three weeks. Corporate PR in overdrive. Full defensive posture. "Correcting the record." "False claims." "Entirely false."
The outages aren't the story; the reaction to the outages is the story.
Amazon is so terrified of the narrative that AI is causing production incidents that they've developed an entirely new incident response workflow: outage happens, site goes down, engineers fix it, newspaper catches wind, PR team swarms the reporter like a pack of incoherent beetles, Amazon publishes a blog post explaining why AI definitely had nothing to do with it and actually the engineer was the problem, and also why are you even talking about AI, and please stop asking about AI.
The Uncomfortable Math
Here's the chain of events Amazon doesn't want you to put together. Over the last year-plus, Amazon has:
- Laid off thousands of employees across the company
- Pushed aggressively for remaining teams to adopt AI coding tools, the way I pressure my child into eating her vegetables
- Had its CEO tell an all-hands that AI will help AWS reach $600 billion in annual revenue by 2036—double his prior estimate—up from $128.7 billion in 2025
- Experienced a string of production incidents, at least some of which involved AI tools
- Published defensive blog posts insisting the AI wasn't the problem—the humans were
Read that list again. They're cutting the humans, mandating the AI, and then when things break, blaming the humans for not supervising the AI properly.
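And about that revenue target: take the reported figures at face value ($128.7 billion in 2025, $600 billion by 2036) and the back-of-the-envelope math looks like this:

```python
# Implied compound annual growth rate for the AWS revenue target,
# using the figures as reported: $128.7B in 2025 -> $600B by 2036.
revenue_2025 = 128.7   # billions USD
target_2036 = 600.0    # billions USD
years = 2036 - 2025    # 11 years

cagr = (target_2036 / revenue_2025) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")
# Implied growth rate: 15.0% per year
```

Fifteen percent a year, compounded, for eleven straight years—to be delivered by whoever's left after the layoffs.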
This is a company that wants the productivity gains of AI-assisted development without accepting any of the associated risk profile. When it works, it's AI-powered innovation. When it breaks, it's human error. The AI is Schrödinger's engineer: simultaneously the future of software development and completely uninvolved in any incident.
What Should Actually Worry You
I don't think AI coding tools are uniquely dangerous. I don't think Amazon's outages are unusual in frequency or severity—every large company has bad weeks. What concerns me is the reflex.
A healthy engineering culture, when confronted with "your AI tool contributed to a production incident," responds with: "Yeah, that tracks. Here's what we're changing so it doesn't happen again." An unhealthy one responds with a condescending press release explaining why the journalist is wrong and probably an idiot, and why the human is at fault.
The engineers building and operating these systems are talented people doing hard work under increasingly constrained conditions. They deserve leadership that backs them up when things go sideways, not leadership that throws them under the bus to protect a product launch narrative.
Amazon's AI tools might be great. They might be mediocre. I genuinely don't know—I haven't used Kiro at scale, and the plural of anecdote is not data. But I do know this: the company is spending more energy defending AI's reputation than defending its engineers'. And that tells me everything I need to know about where their priorities are.
The next time something breaks, I'll be watching the aboutamazon.com blog. At this rate, "Correcting the Financial Times" might become a recurring series. Maybe they should just have AI set up an RSS feed.