AI safety leader says 'world is in peril' and quits to study poetry

Published: February 12, 2026 at 11:37 PM EST
4 min read

Source: BBC Technology


An AI safety researcher has quit US firm Anthropic with a cryptic warning that the “world is in peril”.

In his resignation letter shared on X, Mrinank Sharma told the firm he was leaving amid concerns about AI, bioweapons and the state of the wider world. He said he would instead look to pursue writing and studying poetry, and move back to the UK to “become invisible”.

Anthropic, best known for its Claude chatbot, was formed in 2021 by a breakaway team of early OpenAI employees and positions itself as taking a more safety‑oriented approach to AI research than its rivals. It recently released a series of commercials aimed at OpenAI, criticising that company’s move to show adverts to some users. Sharma led a team at Anthropic that researched AI safeguards.

He said in his resignation letter that his contributions included investigating why generative AI systems suck up to users, combating AI‑assisted bioterrorism risks and researching “how AI assistants could make us less human”. Despite enjoying his time at the company, he wrote that “the time has come to move on”.

“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote.
He added that he had “repeatedly seen how hard it is to truly let our values govern our actions”, noting that Anthropic “constantly faces pressures to set aside what matters most”.

Sharma will pursue a poetry degree and writing, moving back to the UK and “letting myself become invisible for a period of time”.


Eroding principles

Anthropic calls itself a “public benefit corporation dedicated to securing [AI’s] benefits and mitigating its risks”. It focuses on preventing risks posed by more advanced frontier systems, such as misalignment with human values, misuse in conflict, or becoming “too powerful”.

The firm has released reports on the safety of its own products, including when it said its technology had been “weaponised” by hackers to carry out sophisticated cyber attacks. It has also faced scrutiny: in 2025 it agreed to pay $1.5bn (£1.1bn) to settle a class‑action lawsuit filed by authors who claimed the company stole their work to train its AI models.

Like OpenAI, Anthropic seeks to capitalise on the technology’s benefits through products such as its Claude chatbot. It recently released a commercial that criticised OpenAI’s move to start running ads in ChatGPT. OpenAI CEO Sam Altman has previously said he hates ads and would use them only as a “last resort”.


Zoe Hitzig on why she quit

A former OpenAI researcher who resigned this week, in part due to fears about advertising on ChatGPT, told BBC Newsnight she feels “really nervous about working in the industry”. Zoe Hitzig said her concerns stem from the possible psychosocial impacts of a “new type of social interaction” that are not yet understood.

She noted “early warning signs” that dependence on AI tools was “worrisome” and could “reinforce certain kinds of delusions” as well as negatively affect users’ mental health.

“Creating an economic engine that profits from encouraging these kinds of new relationships before we understand them is really dangerous,” she continued.
“We saw what happened with social media… there’s still time to set up the social institutions, the forms of regulation that can actually govern this.” She called it a “critical moment”.

Responding to BBC News, an OpenAI spokesperson pointed to the firm’s principles: “Our mission is to ensure AGI benefits all of humanity; our pursuit of advertising is always in support of that mission and making AI more accessible.” They added: “We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers.”

