An AI coding bot took down Amazon Web Services
Source: Ars Technica
Incident Overview
Amazon stated that both recent disruptions were caused by user error, not by AI. The company emphasized that it has not seen evidence that mistakes are more common when using AI tools.
December Incident
- Described as an “extremely limited event” affecting only a single service in parts of mainland China.
- The incident was linked to the Kiro coding assistant, which by default “requests authorisation before taking any action.”
- The engineer involved had broader permissions than expected, indicating a user access control issue rather than an AI autonomy problem.
- No second‑person approval was required for the changes, a deviation from the usual process.
Second Incident
- Did not impact any “customer‑facing AWS service.”
- As in the December case, the disruption stemmed from user error rather than an AI malfunction.
AI Tools and Permissions
- AWS treats its AI tools as extensions of operators, granting them the same permissions.
- In both incidents, engineers could make changes without the typical peer‑review safeguards.
- Following the December incident, AWS introduced numerous safeguards, including mandatory peer review and staff training.
Kiro and Amazon Q Developer
- Kiro was launched in July as a coding assistant intended to move beyond “vibe coding” and generate code from specifications.
- The company previously relied on Amazon Q Developer, an AI‑enabled chatbot that helps engineers write code; that tool was involved in an earlier outage.
- Amazon reports strong customer growth for Kiro and aims for 80% of developers to use AI for coding tasks at least once a week, tracking adoption closely.
Employee Sentiment
- Some Amazon employees remain skeptical that AI tools are useful for most of their work, citing the risk of error.
- Despite skepticism, the company continues to promote AI adoption for efficiency gains.
© 2026 The Financial Times Ltd. All rights reserved.