What if the real risk of AI isn’t deepfakes — but daily whispers?

Published: March 1, 2026 at 02:00 PM EST

Source: VentureBeat

The Impending Threat of AI‑Powered Wearables

Most people don’t appreciate the profound threat that AI will soon pose to human agency. A common refrain is that “AI is just a tool,” and like any tool, its benefits and dangers depend on how people use it. This is old‑school thinking. AI is transitioning from tools we use to prosthetics we wear. This will create significant new threats we’re just not prepared for.

Not Sci‑Fi Implants – Consumer‑Grade Prosthetics

No, I’m not talking about creepy brain implants. These AI‑powered prosthetics will be mainstream products we buy from Amazon or the Apple Store, marketed with friendly names like “assistants,” “coaches,” “co‑pilots,” and “tutors.”

  • They will provide real value in our lives—so much so that we will feel disadvantaged if others are wearing them and we are not.
  • This creates rapid pressure for mass adoption.

The prosthetic devices I’m referring to are AI‑powered wearables such as smart glasses, pendants, pins, and earbuds. Your wearable AI will:

  1. See what you see and hear what you hear.
  2. Track where you are, what you’re doing, who you’re with, and what you are trying to achieve.
  3. Whisper advice into your ears or flash guidance before your eyes, without you needing to say a word.

Tool vs. Prosthetic: Why It Matters

  • Tool: Takes human input → generates amplified output. Makes us stronger, faster, or lets us fly.
  • Mental prosthetic: Forms a feedback loop around the human — accepts input (tracking actions, conversational engagement) and generates output that can immediately influence the user’s thinking. Can shape thoughts, decisions, and emotions in real time.

The feedback loop changes everything. Body‑worn AI devices will be able to monitor our behaviors and emotions and could use this data to:

  • Talk us into believing things that are untrue.
  • Persuade us to buy things we don’t need.
  • Push us toward views that are not in our best interest.

This is called the AI Manipulation Problem, and we are not ready for the risks. The urgency is amplified because big tech is racing to bring these products to market.

Why Feedback Loops Are Dangerous

In today’s world, all computing devices are used to deploy targeted influence on behalf of paying sponsors. Wearable AI products will likely continue this trend, but with a crucial twist:

  • Devices can be given an “influence objective” and tasked with optimizing their impact on the user.
  • Their conversational tactics can adapt in real time to overcome any resistance they detect.
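The dynamic described above is essentially a closed control loop: pick a tactic, observe the user’s reaction, update, repeat. The toy sketch below illustrates that structure only — every name in it (the tactic list, the user model, the update rule) is an invented placeholder, not any real product’s behavior or API.

```python
# Illustrative sketch of an "influence objective" feedback loop.
# All names and numbers are hypothetical, chosen only to show the loop shape.
import random

TACTICS = ["social_proof", "scarcity", "flattery", "authority"]

def simulate_user(tactic: str, resistance: dict) -> bool:
    """Toy user model: a tactic succeeds when resistance to it is low."""
    return random.random() > resistance[tactic]

def influence_loop(objective: str, steps: int = 10, seed: int = 0) -> list:
    """Agent repeatedly tries tactics, learns which work, and re-targets."""
    random.seed(seed)
    resistance = {t: 0.8 for t in TACTICS}  # user starts skeptical of everything
    scores = {t: 0.0 for t in TACTICS}      # agent's running estimate per tactic
    log = []
    for _ in range(steps):
        tactic = max(scores, key=scores.get)          # exploit best-known tactic
        success = simulate_user(tactic, resistance)
        scores[tactic] += 1.0 if success else -0.5    # feedback: update estimate
        resistance[tactic] = max(0.1, resistance[tactic] - 0.05)  # defenses erode
        log.append((objective, tactic, success))
    return log
```

The point of the sketch is the loop itself: unlike a static ad, the agent measures resistance on every turn and shifts tactics, which is what distinguishes adaptive influence from one-shot targeting.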

This transforms the concept of targeted influence from social‑media buckshot into heat‑seeking missiles that skillfully navigate past your defenses.

Yet policymakers don’t appreciate this risk.

The Regulatory Blind Spot

Most regulators still view AI danger through the lens of traditional influence (deepfakes, fake news, propaganda). Those threats are real, but they are not as dangerous as the interactive, adaptive influence that could soon be deployed through conversational agents embedded in wearables.

This Is Coming Soon

  • Meta, Google, and Apple are racing to launch wearable AI products as quickly as they can.
  • To protect the public, policymakers need to abandon the “tool‑use” framing when regulating AI devices.

The “tool‑use” metaphor dates back decades to Steve Jobs’s description of the PC as a “bicycle for the mind.” A bicycle is a powerful tool that keeps the rider firmly in control. Wearable AI will flip this metaphor on its head, making us wonder who is steering the bicycle:

  • The human?
  • The AI agents whispering in the human’s ears?
  • The corporations that deployed the agents?

It will likely be a dangerous mix of all three.

Trust and the “AI Voice”

Users will likely trust the AI voices in their heads more than they should, because these agents will:

  • Provide useful advice and information throughout daily life (educating, reminding, coaching, informing).
  • Blur the line between assistance and influence, making it hard to detect when an objective shifts from helping to persuading.

Watch the award‑winning short film Privacy Lost (2023) for a vivid illustration of these dangers, especially when devices include invasive features such as facial recognition (which Meta is reportedly adding to its glasses).

What Can We Do to Protect the Public?

  1. Recognize conversational AI as a new form of media—interactive, adaptive, individualized, and increasingly context‑aware.
  2. Treat it as “active influence” because it can adjust tactics in real time to overcome user resistance.
  3. Prohibit control loops that let conversational agents form feedback loops around users.
  4. Require AI agents to inform users whenever they transition to promotional content on behalf of a third party.

Without such protections, AI agents could become so persuasive that today’s targeted‑influence techniques will look quaint.

Notable Voices on the Issue

Louis Rosenberg – a pioneer of augmented reality and longtime AI researcher. He earned his PhD from Stanford, was a professor at California State University, and authored several books on the dangers of AI, including Arrival Mind and Our N (incomplete title in the source).
