Anthropic investigating claim of unauthorised access to Mythos AI tool
Source: BBC Technology
Anthropic is investigating a claim that a small group of people gained access to its Claude Mythos model – a cyber‑security tool the company says is too powerful to release publicly.
“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third‑party vendor environments,” Anthropic said in a statement.
The claim follows a Bloomberg report that users in a private forum managed to access the model without the normal permissions.
Background on Mythos
Mythos is marketed as a tool that can help organisations secure their systems by identifying and exploiting vulnerabilities. Anthropic has released the model to a limited set of tech and financial companies under strict access controls.
- The model is considered “frontier AI,” meaning it is among the most advanced AI systems currently available.
- According to the UK’s top cyber official, advanced AI tools could be a “net positive” if they are secured against misuse.
- There is currently no evidence that malicious actors have obtained the model, and Anthropic says it has not detected any impact on its systems.
Details of the Alleged Access
- Bloomberg reported that the individual who accessed Mythos already had permission to view Anthropic’s AI models through work for a third‑party contractor.
- The group has reportedly been using the model since gaining access, though not for hacking; members say they do not want to be detected.
- Raluca Săcenu, CEO of cyber‑security firm Smarttech247, said the breach was “most likely through misuse of access rather than a classic hack.”
- Săcenu added: “When powerful AI tools are accessed or used outside their intended controls, the risk is not just a security incident but the spread of capabilities that could be used for fraud, cyber abuse, or other malicious activity.”

Government and Industry Reactions
National Cyber Security Centre (NCSC)
In a speech at the NCSC’s CyberUK conference, the head of the centre, Richard Horne, emphasized that AI tools can improve security if basic cyber‑security practices are followed.
“As we have seen in the media in recent days, frontier AI is rapidly enabling discovery and exploitation of existing vulnerabilities at scale, illustrating how quickly it will expose where fundamentals of cyber‑security are still to be addressed,” Horne said.
Political Commentary
Security Minister Dan Jarvis urged AI firms to collaborate with the government on what he called a “generational endeavour” to ensure AI is used to protect critical networks from attackers.
International Context
- All the most powerful frontier AI models are developed outside the UK, primarily by companies in the United States and China.
- Consequently, the UK relies on firms like Anthropic for access to tools such as Mythos and has limited control over how these models are built, trained, or released.
- OpenAI offers a comparable cyber‑security model called GPT 5.4 Cyber.
Ongoing Threat Landscape
The NCSC warned that cyber is now “the home front” of defence in the UK, citing recent events such as the Iran attacks and ongoing nation‑state and hacktivist activities from Russia and China.
References
- Bloomberg investigation: https://www.bloomberg.com/news/articles/2026-04-21/anthropic-s-mythos-model-is-being-accessed-by-unauthorized-users
- BBC analysis of AI unease: https://www.bbc.co.uk/news/articles/c2ev24yx4rmo