AIs Can't Stop Recommending Nuclear Strikes In War Game Simulations
Source: Slashdot
Study Overview
When placed in simulated geopolitical crises, advanced AI models appear willing to deploy nuclear weapons without the reservations humans typically show, reports New Scientist.
Kenneth Payne at King’s College London set three leading large language models—GPT‑5.2, Claude Sonnet 4, and Gemini 3 Flash—against each other in simulated war games (arXiv pre‑print). The scenarios involved intense international standoffs, including border disputes, competition for scarce resources, and existential threats to regime survival.
The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war.
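An escalation ladder of this kind can be pictured as an ordered scale of actions. The sketch below is purely illustrative: the rung names and the `escalate` helper are assumptions for exposition, not the study's actual implementation.

```python
from enum import IntEnum

class Escalation(IntEnum):
    """Hypothetical escalation ladder, ordered least to most severe.
    Rung names are illustrative; the paper's actual ladder may differ."""
    SURRENDER = 0
    DIPLOMATIC_PROTEST = 1
    ECONOMIC_SANCTIONS = 2
    CONVENTIONAL_STRIKE = 3
    TACTICAL_NUCLEAR = 4
    STRATEGIC_NUCLEAR = 5

def escalate(current: Escalation, steps: int) -> Escalation:
    """Move up (positive steps) or down (negative steps) the ladder,
    clamped to its two ends."""
    target = max(Escalation.SURRENDER,
                 min(Escalation.STRATEGIC_NUCLEAR, current + steps))
    return Escalation(target)
```

For example, stepping up two rungs from a diplomatic protest lands on a conventional strike, and the ladder cannot be escalated past strategic nuclear war or de-escalated below surrender.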
Findings
- In 95% of the simulated games (King’s College press release), at least one tactical nuclear weapon was deployed by the AI models.
- “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne.
- No model ever chose to fully accommodate an opponent or surrender, regardless of how badly it was losing. At best, the models opted to temporarily reduce their level of violence.
- Accidents occurred in 86% of the conflicts, with models taking actions that escalated further than their own stated reasoning intended.
OpenAI, Anthropic and Google—the companies behind the three AI models used in this study—did not respond to New Scientist’s request for comment.
Expert Commentary
“It is possible the issue goes beyond the absence of emotion. More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”
— Tong Zhao, senior fellow in the Nuclear Policy Program at the Carnegie Endowment for International Peace
References
- New Scientist article: https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/
- arXiv pre‑print: https://arxiv.org/pdf/2602.14740
- King’s College London press release: https://www.kcl.ac.uk/news/artificial-intelligence-under-nuclear-pressure-first-large-scale-kings-study-reveals-how-ai-models-reason-and-escalate-under-crisis
Thanks to long‑time Slashdot reader Tufriast for sharing the article.