AI researcher says 'world is in peril' and quits to study poetry
Source: BBC Technology
An AI safety researcher has quit US firm Anthropic with a cryptic warning that the “world is in peril.” In his resignation letter shared on X, Mrinank Sharma said he was leaving amid concerns about AI, bioweapons, and broader global crises, and that he would pursue a poetry degree and writing while moving back to the UK to “become invisible.”

Resignation letter
Sharma’s letter (see the original on X) highlighted his work on:
- Investigating why generative AI systems “suck up to users.”
- Combating AI‑assisted bioterrorism risks.
- Researching how AI assistants could make us less human.
He wrote:
“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”
He added that he had “repeatedly seen how hard it is to truly let our values govern our actions,” noting constant pressure at Anthropic to set aside what matters most. Sharma concluded he would move back to the UK and “let myself become invisible for a period of time.”
Anthropic background
Anthropic, founded in 2021 by a breakaway team of early OpenAI employees, positions itself as a public‑benefit corporation focused on securing AI’s benefits and mitigating its risks. The company is best known for its Claude chatbot and has released a series of commercials criticising OpenAI’s decision to show advertisements to some users.
Key points about Anthropic:
- Emphasizes safety for advanced frontier systems, warning against misalignment with human values and misuse in conflict.
- Published reports on the safety of its products, including a note that its technology had been “weaponised” by hackers for sophisticated cyber attacks.
- Faced scrutiny for its practices; in 2025 it agreed to pay $1.5 bn (£1.1 bn) to settle a class‑action lawsuit alleging the company stole authors’ works to train its AI models.
Anthropic’s recent commercial targeted OpenAI’s move to run ads in ChatGPT, echoing broader industry debates about advertising and user manipulation.
Industry concerns
Former OpenAI researcher Zoë Hitzig, writing in the New York Times, expressed “deep reservations” about OpenAI’s advertising strategy, warning that:
“People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
Hitzig suggested that an erosion of OpenAI’s principles to maximise engagement might already be underway, and that this could accelerate if advertising practices do not align with the company’s stated values.
BBC News has approached OpenAI for a response.