I’d buy Google’s AI glasses over Apple’s AI pin any day

Published: February 8, 2026 at 05:30 AM EST
7 min read

Source: Android Authority

Overview


Paul Jones / Android Authority

When you take a zoomed‑out view of the AI landscape, you see tools, services, and products mushrooming from every corner. It feels like AI is now a part of our lives in every way that matters.

But zoom in a little further, and you’ll notice something else: it’s the AI companies fighting for your attention and racing to shove AI into every possible place, whether it makes sense or not. AI pins that clip onto your shirt collar as an omnipresent AI tool fall squarely into this category. They feel forced, especially when there are clearly better form factors that could do the job far more naturally.

Even if the rumored Apple AI pin becomes a reality, I’d still pick Google’s AI glasses over it any day.


AI Pins Are Doomed From Day One


Image credit: Humane

Donald Norman’s The Design of Everyday Things underlines a simple reality: products succeed only when they align with users’ mental models. They fail when people are asked to adopt entirely new behaviors instead of extending existing ones that already feel intuitive.

  • Touchscreens became popular quickly because we were already using our thumbs to type on BlackBerry keyboards. Touch wasn’t an alien interaction; it simply replaced physical keys with something more convenient—a light tap instead of a firm press.
  • Device makers should focus on reducing friction, not on expecting people to adapt to new form factors overnight.

We don’t even need to look back that far to see the problem. The Humane AI Pin already fell flat despite its novelty and high‑tech hardware that should have attracted early adopters. See the coverage: Humane AI Pin shut down.

Wearables Have High Abandonment Rates

Abandonment rates for wearables are already high (see this study on wearable abandonment). I’ve been part of that statistic myself: I returned to smartwatches after two years away.

Given this context, manufacturers should:

  1. Reduce friction rather than introduce brand‑new form factors.
  2. Make new hardware feel “normal.” Smart glasses were rejected when they looked absurd and robotic; acceptance grew only after they began to look ordinary.
  3. Avoid adding another cyborg‑esque piece of hardware to clothing.

Novelty Isn’t Enough

Novelty cannot exist for its own sake. Hardware needs a clear reason to exist—something it can do that existing devices cannot.

  • Earbuds can’t read brain signals; for that you’d need dedicated Neuralink‑like hardware.
  • Why would anyone choose a pin hanging off a T‑shirt (making the wearer look like a cyborg) when existing devices already match or surpass its capabilities with ease?

In short, the AI pin fails because it offers no unique functional advantage and forces users to adopt an unfamiliar, friction‑inducing form factor. The path forward for hardware innovation is to extend, not replace, existing mental models and interaction patterns.

You Are Already Wearing the Future

Reducing AI’s potential to “just another chatbot” is short‑sighted. A dedicated AI pin has no eyes, relies entirely on ambient audio, responds only in voice, and expects voice input in return. That form factor is incredibly limiting—especially when most of us already wear devices that have been far smarter for well over a decade.

Why Visual AI Is the Next Frontier

  • Data richness – Companies are pouring investment into visual AI because what you see (and hear) is the next data frontier.
  • Contextual assistance – Users want an assistant that can stay with them all day without being intrusive, understands context, and helps discreetly.
  • Multimodal interaction – Combining vision, audio, touch, and haptics creates a far richer experience than a single‑mode device.

When you get better functionality and richer multimodal interaction, it’s hard to justify a device that relies on just one mode of interaction.

Smart Glasses: The Emerging AI Platform

Smart glasses—particularly those with augmented‑reality (AR) displays—are shaping up as the next big AI project for nearly every major tech company. They offer far more consumer‑friendly use cases than virtual‑reality headsets:

  • Hands‑free navigation – Directions overlaid directly in your field of view, eliminating the need to glance at a phone.
  • Candid photography – Capture moments without pointing a camera at someone, reducing self‑consciousness.
  • Seamless pairing – Glasses can pair with the smartwatch on your wrist, enabling input through hand gestures, taps, or the watch’s touchscreen.

If voice commands don’t work in a packed subway, a discreet finger tap or a quick interaction on your watch can.

Having multiple output surfaces—visuals in front of your eyes plus haptics or a touchscreen on your wrist—makes a single‑mode AI pin feel inadequate.

Bottom Line

I would never settle for an AI pin—whether from Humane, Apple, or any other vendor—that supports only a single interaction mechanism. The future belongs to devices that blend vision, audio, touch, and haptics into a cohesive, context‑aware experience.


Google’s Real‑World Head Start

Google Maps UI in Android XR Glasses – C. Scott Brown / Android Authority

Google’s massive collection of personal data is often portrayed as a privacy concern—and rightly so. But for AI products, that same data translates into a deep, real‑world understanding.

  • Street View covers cities and towns worldwide, not just a handful of global hubs. Its constantly refreshed imagery shows how streets actually look.
  • Google Lens adds personalized visual comprehension, letting users capture and analyze everyday objects.
  • Google Assistant (and now Gemini) has spent years fielding everyday voice queries.

When you combine Street View, Lens, and Gemini, Google arguably has the most comprehensive grasp of our physical surroundings of any company. It wouldn’t be surprising if Apple indirectly taps into that knowledge—as it has done before with Visual Intelligence and, more recently, Siri [2].

We’re already intimately familiar with how Android works on our devices. Extending existing gestures and behaviors—whether for navigation, notifications, reminders, or translation—strengthens Google’s case. Historically, building on established habits attracts more users than trying to rewrite them from scratch.

References

  1. How good is Google Gemini 3? – Android Authority
  2. Siri‑Gemini integration could be awful for Android – Android Authority

Inevitable vs. Statement Pieces

Halliday AI glasses – Hadlee Simons / Android Authority

There’s a clear difference between the future some companies envision and the one that’s actually likely to arrive. Smartphones aren’t disappearing in a couple of years, replaced entirely by AI pins. Habits don’t form in weeks, nor do they change as easily.

Trying to force AI pins into daily routines might work for statement pieces—the kind you see clipped onto jackets on intellectual‑sounding podcasts—but making them mainstream is a far tougher ask, even with Apple’s resources and pop‑culture influence.

Just as phones and watches gradually became smarter over time, eyewear is heading in the same direction, and AI is only accelerating that shift.

What is inevitable is making existing hardware smarter and more AI‑ready. One of AI’s biggest strengths is how hardware‑agnostic it can be. Google running Gemini on years‑old Home speaker hardware proves that point.

The items we’ve worn on our wrists and noses for centuries are far more likely to drive the next phase of AI hardware than an awkward accessory that doesn’t belong with everyday outfits. If Apple truly wants to lean into its fashion‑forward image, focusing on AR glasses would make far more sense. Or it could simply borrow from Google’s playbook—something it’s never shied away from.
