A roadmap for AI, if anyone will listen

Published: March 8, 2026 at 01:05 AM EST
4 min read
Source: TechCrunch

While Washington’s breakup with Anthropic exposed the complete lack of any coherent rules governing artificial intelligence, a bipartisan coalition of thinkers has assembled something the government has so far declined to produce: a framework for what responsible AI development should actually look like.

The Pro‑Human Declaration was finalized before last week’s Pentagon‑Anthropic standoff, but the collision of the two events wasn’t lost on anyone involved.

“There’s something quite remarkable that has happened in America just in the last four months,” said MIT physicist and AI researcher Max Tegmark, who helped organize the effort, in a conversation with this editor. “Polling suddenly [is showing] that 95% of all Americans oppose an unregulated race to superintelligence.”

The Pro‑Human Declaration

Background and Motivation

The newly published document, signed by hundreds of experts, former officials, and public figures, opens with the observation that humanity is at a fork in the road:

  • The “race to replace” – humans are supplanted first as workers, then as decision‑makers, as power accrues to unaccountable institutions and their machines.
  • The alternative – AI that massively expands human potential.

Key Pillars

The declaration outlines five pillars that must be upheld for the latter scenario to succeed:

  1. Keeping humans in charge
  2. Avoiding the concentration of power
  3. Protecting the human experience
  4. Preserving individual liberty
  5. Holding AI companies legally accountable

Among its more muscular provisions are:

  • An outright prohibition on superintelligence development until there is both scientific consensus that it can be done safely and genuine democratic buy‑in.
  • Mandatory off‑switches on powerful systems.
  • A ban on architectures capable of self‑replication, autonomous self‑improvement, or resistance to shutdown.

Recent Government Actions

Pentagon‑Anthropic Dispute

On the last Friday in February, Defense Secretary Pete Hegseth designated Anthropic—a company whose AI already runs on classified military platforms—as a “supply chain risk” after it refused to grant the Pentagon unlimited use of its technology. The label is ordinarily reserved for firms with ties to China.

OpenAI Contract

Hours later, OpenAI cut its own deal with the Defense Department, a move that legal experts say will be difficult to enforce in any meaningful way. The episode highlights how costly Congressional inaction on AI has become.

“This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems,” Dean Ball, senior fellow at the Foundation for American Innovation, told The New York Times.

Child Safety and Pre‑deployment Testing

Analogy and Rationale

Tegmark likens AI regulation to drug safety oversight:

“You never have to worry that some drug company is going to release a drug that causes massive harm before people have figured out how to make it safe, because the FDA won’t allow them to release anything until it’s safe enough.”

Proposed Testing Requirements

The declaration calls for mandatory pre‑deployment testing of AI products—particularly chatbots and companion apps aimed at younger users—covering risks such as:

  • Increased suicidal ideation
  • Exacerbation of mental health conditions
  • Emotional manipulation

“If some creepy old man is texting an 11‑year‑old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that. We already have laws. It’s illegal. So why is it different if a machine does it?” — Tegmark

Tegmark argues that once pre‑release testing becomes standard for children’s products, the scope will naturally expand to other high‑risk areas (e.g., preventing AI‑assisted bioweapon creation or safeguarding against attempts to overthrow governments).

Broad Support

The declaration has attracted an unusually diverse set of signatories, including:

  • Former Trump advisor Steve Bannon
  • Former President Obama’s National Security Advisor Susan Rice
  • Former Joint Chiefs Chairman Mike Mullen
  • Progressive faith leaders
  • Hundreds of AI researchers, ethicists, and industry executives

“What they agree on, of course, is that they’re all human. If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side.” — Tegmark

