AI CEOs Worry the Government Will Nationalize AI

Published: March 8, 2026 at 07:34 AM EDT
3 min read
Source: Slashdot

Palantir CEO’s warning

Palantir CEO Alex Karp was blunt on X:

“If Silicon Valley believes we are going to take away everyone’s white‑collar job… and you’re going to screw the military — if you don’t think that’s going to lead to the nationalization of our technology, you’re retarded…”

Sam Altman’s thoughts on government‑run AI

OpenAI’s Sam Altman has publicly mused about the possibility of a government‑led AI effort. In a recent X post he said:

“It has seemed to me for a long time it might be better if building AGI were a government project.”

Altman added that while a full nationalization “doesn’t seem super likely on the current trajectory,” a close partnership between governments and AI companies is “super important.” He acknowledged feeling “the threat of attempted nationalization” when answering questions on X.

Reference: The New Stack – OpenAI‑Defense Department debate

Government involvement and the Defense Production Act

Fortune notes that many strategic breakthroughs—from the Manhattan Project to the space race—were government‑funded and directed. The magazine reports that the Department of Defense recently threatened Anthropic with the Defense Production Act (DPA), which allows the president to designate “critical and strategic” goods and compel businesses to accept government contracts. Fortune characterizes this as a “soft nationalization of Anthropic’s production pipeline.”

How AI companies might work with the government

During an AMA on X, OpenAI’s Head of National Security Partnerships, Katherine Mulligan, was asked whether a future AGI that passed its own Turing test would be compelled to grant the Defense Department access under existing contracts. Mulligan responded:

“No. At our current moment in time, we control which models we deploy.”

Industry pushback

A joint open letter titled “We Will Not Be Divided”—signed by 100 OpenAI employees and 856 Google employees—urged their companies to refuse the use of their models for domestic mass surveillance and autonomous lethal systems. The letter can be viewed here: https://notdivided.org/

Historical analogies

Adafruit’s Managing Director Phillip Torrone draws parallels between today’s AI landscape and the Manhattan Project, noting how scientists who built the atomic bomb tried to set conditions on its use, only to be pressured by the government to back down. He compares this to the Pentagon’s designation of Anthropic as a “supply chain risk” before offering OpenAI a contract with similar red lines.

Cultural reference

Anthropic CEO Dario Amodei frequently recommends the Pulitzer‑Prize‑winning 1986 book The Making of the Atomic Bomb as essential reading: https://amzn.to/4rl3eOl
