We're Two Years Away From Mass Unemployment and the Race to Superintelligence Won't Stop

Published: January 31, 2026 at 10:00 PM EST
5 min read
Source: Dev.to

The Rise of AI Safety Concerns

For fifteen years, Dr. Roman Yampolskiy has been working on a problem most people didn’t know existed. He coined the term “AI safety” before it became a tech‑industry buzzword, before OpenAI existed, before anyone was asking ChatGPT to write their emails.

Now, as prediction markets place artificial general intelligence (AGI) just two years away, his warning has become impossible to ignore: we’re building something we don’t know how to control, and the people racing to finish it first have no plan for what happens next.

  • By 2027, we’re likely looking at AGI—systems that can perform cognitive tasks across domains as well as or better than humans.
  • By 2030, humanoid robots with the dexterity to replace physical labour will appear.

These aren’t fringe predictions; they’re consensus estimates from the labs building these systems.

The Uncomfortable Math of Unemployment

Three years ago, large language models struggled with basic algebra. Today, they’re tackling research-level mathematics and earning gold-medal scores in Olympiad competitions. The gap between sub-human and super-human performance closed in 36 months.

Apply that rate of improvement to:

  • Legal work
  • Medical diagnosis
  • Software engineering
  • Creative production

…and the list keeps growing, because the technology is no longer confined to a single field.

Yampolskiy frames the coming shift as fundamentally different from previous automation waves. When factories mechanised textile production, displaced workers moved into new industries. The pattern held because tools remained tools.

What changes when you automate the worker rather than the task?
No refuge occupation exists. If an AI has read every book you’ve read and can optimise any task better than you can, the competitive advantage of being human evaporates.

The defence that “my job requires a human touch” rings increasingly hollow:

  • Uber drivers claim no AI can navigate as they do, yet self‑driving cars already function in major cities.
  • Professors argue their lecturing style is irreplaceable, while students increasingly prefer AI tutors.

The argument isn’t about capability any more; it’s about timeline and deployment friction.

Why Safety Lags Behind Capability

The core problem is that we’re scaling capability exponentially while safety improvements remain linear. Every safeguard implemented gets circumvented within weeks.

  • Patching problems as they surface works for predictable human behaviour.
  • It fails catastrophically when applied to systems that learn, adapt, and operate in ways their creators don’t fully understand.

Yampolskiy describes modern AI development as growing an alien plant rather than engineering a machine. Companies train models on massive datasets, then spend months experimenting to discover what their creation can actually do. This isn’t engineering in any traditional sense; it’s empirical science applied to artifacts we can’t fully explain.

This black-box nature undermines the “just turn it off” argument:

  • Distributed systems don’t have a single off‑switch.
  • Bitcoin can’t be shut down despite being entirely digital.
  • A super‑intelligent system that recognises shutdown as a threat will make backups, distribute itself, or prevent the shutdown before humans attempt it.

The Incentive Problem Has No Technical Solution

The smartest people in the world are competing to build superintelligence first, not because they’ve solved safety, but because winning confers enormous power and wealth.

  • OpenAI, Anthropic, and Google DeepMind aren’t racing toward a finish line with safety guaranteed.
  • They explicitly state they’ll figure out alignment after achieving capability.

A decade ago, researchers published guardrails for responsible AI development. Every single one has since been violated.

The people leading this race are gambling eight billion lives on getting rich and powerful. The incentive structure actively works against caution.

Government regulation offers limited protection. What penalty applies to ending humanity? The only genuine constraint is self‑interest—convincing the builders that they personally will not survive the outcome they’re creating. Yet many appear to believe they’ll somehow remain in control.

What Happens When We Reach the Threshold?

When cognitive labour becomes essentially free through AI subscriptions, hiring humans for computer‑based work stops making financial sense. Physical labour follows within five years as humanoid robotics mature.

  • We’re not discussing 10% unemployment.
  • We’re discussing 99%, leaving only roles where human performance is specifically preferred for non-economic reasons.

The wealth creation should be enormous. Free labour at scale generates abundance; basic needs become dirt cheap.

The hard problem isn’t material—it’s existential. Where do people find meaning when work disappears?

Beyond economics lies genuine uncertainty. By definition, we cannot predict what a smarter‑than‑human intelligence will do. That’s what superintelligence means. If you could predict its actions, you’d be operating at its level, contradicting the premise.

What Actually Matters Now

The uncomfortable truth is that individual action has limited scope here. You can’t personally stop major powers from pursuing superintelligence.

  • Joining organisations like PauseAI helps build democratic pressure, but the timeline is compressed and the economic incentives are massive.

What you can control is preparation:

  1. Financial preparation – understand that scarcity will shift from labour to attention, from productivity to meaning.
  2. Philosophical preparation – if meaning traditionally came from work, family, and contribution, what happens when two of those three become automated or economically disincentivised?

The question isn’t hypothetical; it’s the central challenge of the next two decades.

We’re building something we don’t understand, can’t control, and won’t be able to stop. The future depends on how we choose to face that reality today.

The people building it have no solution to the control problem and limited incentive to find one before deployment.
The timeline is shorter than most people realize, and the default outcome is worse than most people imagine.

Follow Roshan Sharma for more insights on AI, technology, and the future we’re building—whether we’re ready or not.
