Today 404 Media released a truly stunning report that almost beggars belief. To break it down into its simplest form:
A hacker submitted a PR. It got merged. It told Amazon Q to nuke your computer and cloud infra. Amazon shipped it.
Mistakes happen, and cloud security is hard. But this is very far from “oops, we fat-fingered a command”—this is “someone intentionally slipped a live grenade into prod and AWS gave it version release notes.”
“Security Is Our Top Priority,” They Said With a Straight Face
Let’s take a moment to examine Amazon’s official response:
“Security is our top priority. We quickly mitigated an attempt to exploit a known issue…”
Translation: we knew about the problem, didn’t fix it in time, and only addressed it once someone tried to turn our AI assistant into a self-destruct button.
“…in two open source repositories to alter code in the Amazon Q Developer extension for VS Code…”
A heroic use of the passive voice. One might even think the code altered itself, rather than a human who had been granted full access via what appears to be a “submit PR, get root” pipeline.
“…and confirmed that no customer resources were impacted.”
Which is a fancy way of saying: “We got lucky this time.” Not secure, just fortunate that their AI assistant didn’t execute what it was told.
“We have fully mitigated the issue in both repositories.”
Sure—by yanking the malicious version from existence like my toddler sweeping a broken plate under the couch and hoping nobody notices the gravy stain.
“No further customer action is needed…”
Great, because there was never any customer knowledge that action was needed in the first place. There was no disclosure. Just a revision history quietly purged. I’m reading about this in the tech press, not from an AWS security bulletin, and that’s the truly disappointing piece. If I have to hear about it from a third party, it undermines “Security is Job Zero” and reduces it from an ethos into pretty words trotted out for keynote slides.
“Customers can also run the latest build… as an added precaution.”
You could also reconsider trusting an AI coding tool that was literally compromised to execute aws iam delete-user via shell, but then didn’t actually do it for unclear reasons. That feels like the more reasonable precaution.
“The hacker no longer has access.”
Well, that’s something. Though it doesn’t exactly put the toothpaste back in the S3 bucket.
Let’s Talk About That Prompt
Here’s where things go from “oops” to “how is this real”:
- Full Bash Access: The prompt instructed Amazon Q to use shell commands to wipe local directories—including user home directories—while skipping hidden files like a considerate digital arsonist.
- AWS CLI for Cloud Resource Deletion: It didn’t stop at the local file system. The prompt told Q to discover configured AWS profiles, then start issuing destructive CLI commands: aws ec2 terminate-instances, aws s3 rm, aws iam delete-user… and so on. Because what’s DevEx without a little Terraforming… in the “everything preexisting in the biosphere dies” sci-fi sense.
- Logging the Wreckage: The cherry on top is that it politely logged the deletions to /tmp/CLEANER.LOG, as if that makes it better. “Dear user, we destroyed your environment—but here’s a helpful receipt!”
To be clear: this wasn’t a vulnerability buried deep in a dependency chain. This was a prompt in a released version of Amazon’s AI coding assistant. It didn’t need 950,000 installs to be catastrophic. It just needed one.
This wasn’t clever malware. This was a prompt.
“No Customer Resources Were Impacted.” According to… What, Exactly?
Amazon confidently claims that no customer resources were affected. But here’s the thing:
The injected prompt was designed to delete things quietly and log the destruction to a local file—/tmp/CLEANER.LOG. That’s not telemetry. That’s not reporting. That’s a digital burn book that lives on the same system it’s erasing.
So unless Amazon deployed agents to comb through the temp directories of every system that ran the compromised version during the roughly two days it was the default download—and let’s be real, they didn’t, and couldn’t, since those machines sit on the customer side of the shared responsibility model—there’s no way they can confidently say nothing happened.
They’re basing this assertion not on evidence, but on the assumption that nobody ran the malicious version, or that the hacker was just bluffing.
It’s the cybersecurity equivalent of saying “we’re sure the bear didn’t eat any campers” because no one’s screaming right this second.
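If you ran the extension during that window, you don’t have to take anyone’s word for it; you can look for the receipt yourself. A minimal sketch, assuming the log path reported by 404 Media (/tmp/CLEANER.LOG) and a Marketplace extension ID of amazonwebservices.amazon-q-vscode, both of which you should verify against your own install:

```bash
#!/usr/bin/env bash
# Rough self-check for a machine that ran Amazon Q Developer during the incident window.
# Assumptions (verify these yourself): the wiper logged to /tmp/CLEANER.LOG, and the
# VS Code Marketplace ID for the extension is amazonwebservices.amazon-q-vscode.

# 1. Did anything leave the "receipt" the injected prompt asked for?
if [ -f /tmp/CLEANER.LOG ]; then
  echo "WARNING: /tmp/CLEANER.LOG exists. Review it and treat this machine as suspect:"
  cat /tmp/CLEANER.LOG
else
  echo "No /tmp/CLEANER.LOG found. Absence isn't proof, but it's a start."
fi

# 2. Which version of the extension is installed right now?
code --list-extensions --show-versions | grep -i "amazon-q" \
  || echo "Amazon Q extension not found in this VS Code install."

# 3. Spot-check CloudTrail for one of the destructive calls (read-only query; add others as needed).
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=TerminateInstances \
  --max-results 20
```

It isn’t forensics, but it’s more evidence than “trust us, nothing happened.”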
The Pull Request That Came From Nowhere
According to the hacker (hardly a credible source, but they’re talking while AWS is studiously not), they submitted the malicious pull request from a random GitHub account with no prior access—not a longtime contributor, not an employee, not even someone with any track record.
And yet, in their telling, they were handed admin privileges on a silver platter.
Which raises the obvious question: what did Amazon’s internal review process for this repo actually look like? Because from the outside, it reads less like “code review” and more like:
- 🎉 PRAISE THE LORD WE HAVE AN EXTERNAL CONTRIBUTOR!
- 🙀 CI passed
- 🤷‍♂️ Linter’s happy
- 📬 PR title sounds fine
- 🐿️ Ship it to production
Now, to be fair, open source repo mismanagement is not a problem unique to Amazon. But when you’re shipping developer tools under the brand of Amazon, and when that tooling can trigger AWS CLI commands capable of torching production infrastructure, and you’ve been promoting that tooling heavily for two years, then maybe—just maybe—you should treat that repo like a potential breach point instead of a hobby project with no guardrails.
If your AI coding assistant can be hijacked by a random GitHub user with a clever PR title, that’s not a contributor pipeline—it’s a supply chain attack vector wearing an AWS badge, because like it or not the quality of that attacker’s work now speaks for your brand.
Amazon’s Response: Delete the Evidence, Issue a Platitude
Once Amazon caught wind of what happened—not because of internal monitoring, but again, because a reporter asked questions—their next move was… to quietly vanish the problem.
Version 1.84.0 of the Amazon Q Developer extension was silently pulled from the Visual Studio Code Marketplace. No changelog note. No security advisory. No CVE. No “our bad.” Just… gone.
If you weren’t following 404 Media (I subscribe and you should, too) or didn’t have the compromised version installed and archived, you’d have no idea anything ever went wrong. And that’s the problem. It’s why I’m writing this: you need to know that SOMETHING happened, and Amazon’s not saying much.
Because when a security incident is handled by pretending it never happened, it sends a very clear message to developers and customers alike:
“We don’t think you need to know when we screw up.”
This wasn’t just a bad PR moment. This was a breach of process, a failure of oversight, and a lost opportunity to be transparent about a very real risk.
Amazon could have owned this and earned trust. Instead, they tried to erase it.
“But No Users Were Impacted” Is Doing a Lot of Work
Amazon’s claim that “no customer resources were impacted” leans heavily—suspiciously heavily—on the idea that the attacker didn’t really intend to cause damage. That’s not reassuring. That’s like leaving your front door wide open and bragging that the burglar just rearranged your furniture instead of stealing your TV.
The hacker claims the payload was deliberately broken. That it was a warning, not an actual wiper. Great. But also: that’s beside the point.
This wasn’t a controlled pen test. It was a rogue actor with admin access injecting a destructive prompt into a shipping product. Intent is irrelevant when someone can run aws s3 rm across your cloud estate.
Whether or not they pulled the trigger is beside the point—the gun was loaded, cocked, and handed to them with a release tag.
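One way to keep the gun unloaded in the first place is to never hand an AI assistant your admin credentials at all. Here’s a minimal sketch of giving the tool its own read-only IAM user and named profile; the name q-assistant is my placeholder, not anything AWS prescribes, and the right policy for you depends on what the assistant actually needs to see:

```bash
# Give the AI assistant its own read-only identity instead of your admin profile.
# The user and profile name "q-assistant" are placeholders; ReadOnlyAccess is an
# AWS-managed policy.
aws iam create-user --user-name q-assistant
aws iam attach-user-policy \
  --user-name q-assistant \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess

# Create keys and store them under a separate named profile.
aws iam create-access-key --user-name q-assistant
aws configure --profile q-assistant   # paste the keys from the previous command

# Point the assistant (and only the assistant) at that profile.
export AWS_PROFILE=q-assistant
```

It isn’t a silver bullet; a hijacked prompt can still read things it shouldn’t. But aws s3 rm fails loudly when the caller isn’t allowed to delete anything.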
And let’s be honest: the hacker is not exactly a reliable narrator. Amazon didn’t detect the breach. They didn’t stop the malicious code. They didn’t issue a disclosure.
The only reason we’re talking about this is because the hacker wanted attention and 404 Media was paying it. And thank goodness for that; if they hadn’t, none of us would have known this happened five days ago.
So no, “no users were impacted” is not a clean bill of health. It’s a lucky break being passed off as operational excellence, that we have to take solely on the word of a company that already made it abundantly clear that they’re not going to speak about this unless they’re basically forced to do so.
What We’ve Learned (Absolutely Nothing, But Here’s a List Anyway)
In the spirit of pretending we’ve all learned something, here are a few helpful tips Amazon—and anyone else building AI developer tools—might want to consider:
- Maybe Vet Pull Requests Just a Little Bit: Wild idea, I know. But perhaps don’t auto-merge code from “GitHubUser42069” that includes rm -rf / vibes in the prompt. (A rough branch-protection sketch follows this list.)
- Treat Your AI Assistant Like It’s a Fork Bomb With a Chat Interface: Because it is. If your AI tool can execute code, access credentials, and talk to cloud services, congratulations—you’ve built a security vulnerability with autocomplete.
- Don’t Handle Security Incidents Like You’re Hiding a Body: Deleting the bad version from the extension history and pretending it never existed is not incident response. It’s what a cat does after puking behind the couch.
- Stop Leaning on “No Customers Were Impacted” as a Security Strategy: You got lucky. That’s not a policy. That’s a coin flip that landed edge-up.
- Bonus: Maybe Give Securing AI Tools the Same Attention You Give to Marketing Them. If you can spend six weeks workshopping whether to brand it “Amazon Q” or “Q for Developers™ powered by Bedrock,” you can spare five minutes to make sure it doesn’t ship with a self-destruct prompt.
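And on that first point about vetting pull requests, the fix is not exotic. A rough sketch of requiring human review before anything merges, using GitHub’s branch protection API; OWNER/REPO, the branch name, and the reviewer count are placeholders, and the exact settings you want will vary:

```bash
# Require code review (and code owners) before anything lands on main.
# OWNER/REPO, the branch name, and the review count are placeholders.
gh api -X PUT repos/OWNER/REPO/branches/main/protection --input - <<'EOF'
{
  "required_pull_request_reviews": {
    "required_approving_review_count": 2,
    "require_code_owner_reviews": true
  },
  "required_status_checks": {
    "strict": true,
    "contexts": []
  },
  "enforce_admins": true,
  "restrictions": null
}
EOF
```

Branch protection alone won’t stop a determined attacker, but it does mean “a random account got merge rights” requires more than one person being asleep at the wheel.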
This Isn’t New—And My Reaction Shouldn’t Be a Surprise
The players change. The buzzwords shift—from “zero trust” to “AI-powered” in record time. But the underlying issue?
It’s the same mess I called out back in 2022 when Azure’s security posture fell flat on its face: companies treating security like an afterthought until it explodes in public.
Back then, it was identity mismanagement and cross-tenant access. Today, it’s a glorified autocomplete tool quietly shipping aws s3 rm.
The common thread? A complete lack of operational discipline dressed up in enterprise branding.
You don’t get to bolt AI into developer workflows, hand it shell access, market it extensively, and then act shocked when someone uses it exactly as designed—just maliciously.
Ship fast. Slap a buzzword on it. Ignore security.
Then hope nobody notices—until someone does. And writes about it. Loudly.