What is Claude Mythos and what risks does it pose?

Published: April 17, 2026 at 09:41 AM EDT

Source: BBC Technology

Liv McMahon, Technology reporter and
Joe Tidy, Cyber correspondent, BBC World Service

Reuters – A smartphone display showing the Anthropic logo in black letters on an all‑white background, laid on a laptop keyboard lit in pink and purple.

Overview

In recent weeks, the AI world has been abuzz following claims made by leading firm Anthropic regarding its new model, Claude Mythos. The company says the tool can outperform humans at some hacking and cyber‑security tasks, prompting discussions by regulators, legislators and financial institutions about the dangers it could pose to digital services.

Several tech giants have been given access to Mythos via an initiative called Project Glasswing, designed to strengthen resilience to Mythos itself. Others point out that it is in Anthropic’s interest to suggest its tool has never-before-seen capabilities, meaning that, as ever with AI, distinguishing justified claims from hype can be tricky.

What is Claude Mythos?

Mythos is one of Anthropic’s latest models, developed as part of its broader AI system called Claude. Claude encompasses the company’s AI assistant and family of models, rivalling OpenAI’s ChatGPT and Google’s Gemini.

  • Revealed by Anthropic in early April as “Mythos Preview”.
  • Researchers who test how AI models handle particular requests or tasks, a practice known as “red‑teaming”, said in a report that Mythos was “strikingly capable at computer security tasks”.
  • The tool could locate dormant bugs lurking in decades‑old code and easily exploit them.

Rather than make it widely available to Claude users, Anthropic gave 12 tech companies access via Project Glasswing, which it described as “an effort to secure the world’s most critical software”. The partners include:

  • Amazon Web Services (cloud computing)
  • Apple, Microsoft and Google (device manufacturers)
  • Nvidia and Broadcom (chip‑makers)
  • CrowdStrike, whose faulty software update caused a major global outage in July 2024, and more than 40 other organisations responsible for critical software.

In a video released alongside Project Glasswing’s launch, Anthropic boss Dario Amodei said the company had offered to work with U.S. government officials to “help defend against the risk of these models”.

Why are there concerns?

Anthropic says that during tests the model was highly skilled at cyber‑security and hacking tasks, outperforming humans.

“Mythos Preview has already found thousands of high‑severity vulnerabilities, including some in every major operating system and web browser,” Anthropic claimed on 7 April.
“Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.”

The model could locate—without much oversight—critical bugs in old systems, including one vulnerability that had been present for 27 years, and suggest ways to exploit them.

Reuters – Public fears over AI capabilities led to a protest in San Francisco in March.

Key reactions

  • Canadian finance minister François‑Philippe Champagne told the BBC Mythos had been discussed at an International Monetary Fund (IMF) meeting in Washington DC, calling it “serious enough to warrant the attention of all the finance ministers” and describing the tech as an “unknown unknown”.
  • Bank of England governor Andrew Bailey said regulators are “looking very carefully now at what this latest AI development could mean for the risk of cyber crime.”
  • The EU is also in discussions with Anthropic about concerns around Mythos.


What have cyber experts said about it?

Ciaran Martin, former head of the UK’s National Cyber Security Centre, told the BBC that the claim Mythos could unearth critical vulnerabilities much more quickly than other AI models had “really shaken people”.

“Even with existing weaknesses that we know about, but which organisations might not have patched against, it’s just a really good hacker,” he said.

Many independent cyber‑security analysts have not yet been able to test Mythos themselves and remain sceptical about its performance. The UK’s AI Safety Institute concluded that while Mythos is a very powerful model, its biggest threat would be against poorly defended, vulnerable systems:

“We cannot say for sure whether Mythos Preview would be able to attack well‑defended systems,” the institute’s researchers wrote.
(Source: https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities)

Should we be worried about it?

Fears relating to AI are nothing new. New models and tools appear constantly, often accompanied by promises to revolutionise our lives, for better or worse. The mix of fear and excitement has become a hallmark of the sector’s marketing strategies.

In the case of Mythos, we still do not know enough to determine whether hopes or fears are justified, or simply a reflection of industry hype.

Reuters – Anthropic CEO Dario Amodei has warned against misuse of the company’s products before.

According to the National Cyber Security Centre (NCSC), the most important thing now is not to panic but to focus on getting basic cyber‑security right. Most hackers do not need super‑AI tools; simpler attacks often suffice.

“For some this is an apocalyptic event, for others it seems a lot of hype,” Martin told the BBC.
“In the medium‑term, there’s an opportunity to use these tools to fix a lot of the underlying vulnerabilities in the internet.”
