The limits of bubble thinking: How AI breaks every historical analogy

Published: March 9, 2026 at 09:00 AM EDT

Source: VentureBeat

Why markets keep overshooting

Every major technological shift produces the same outward symptoms:

  • Inflated expectations
  • High‑visibility failures

Dot‑com, mobile, and crypto all went through a phase where the world lost its sense of proportion.

Why does this keep happening?
Because markets don’t have a framework for discontinuous change. Discounted cash‑flow models assume steady, stable growth, and comparable‑company analyses assume the category already exists. So people assume the near future looks like the recent past—which doesn’t work when the underlying category itself is changing.

Most valuation tools are designed for incremental progress, so analysts look at quarterly forecasts and incremental improvements. They don’t know what to do with step changes, and they can’t model nonlinear adoption.

When you see capital overshooting or extreme dispersion of outcomes, that’s the market trying to value decade‑long bets using quarterly logic. That’s what a bubble actually is: a sign that no one yet knows how to price what’s coming. The uncertainty looks like invalidation, but it simply exposes the limits of existing frameworks.
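The mismatch between quarterly logic and nonlinear adoption can be made concrete with a toy model. This is an illustrative sketch only: the `logistic_adoption` function, its parameters, and every number below are invented, not drawn from any real forecast.

```python
# Toy illustration: extrapolating recent quarters (the instinct behind most
# DCF-style forecasts) badly misprices S-curve (logistic) adoption.
# All parameters and numbers are hypothetical.
import math

def logistic_adoption(t, ceiling=100.0, midpoint=12, steepness=0.5):
    """Adoption (%) following an S-curve over quarter t."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# Observe the first 4 quarters, then extrapolate linearly ("quarterly logic").
observed = [logistic_adoption(t) for t in range(4)]
quarterly_growth = observed[-1] - observed[-2]

for t in (8, 12, 16):
    linear_forecast = observed[-1] + quarterly_growth * (t - 3)
    actual = logistic_adoption(t)
    print(f"Q{t}: linear forecast {linear_forecast:5.1f}%  vs  S-curve {actual:5.1f}%")
```

With these made-up parameters, the linear forecast still sits in single digits at the curve's midpoint while actual adoption has reached 50% — the step change the straight line cannot see.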

The category error we keep making

When something new arrives, we reach for comparisons:

  • AI is like electricity.
  • AI is like computers.
  • AI is like the internet.
  • AI is like mobile.

These comparisons are comforting because each of those technologies produced massive, economy‑wide change and attracted enormous capital. They changed how work got done.

They also share something deeper: every one of those technologies extended human capability without replacing human cognition.

| Technology | What it powered | Human role |
| --- | --- | --- |
| Electricity | Machines | Deciding what to build |
| Computers | Data processing | Interpreting results |
| Internet | Information flow | Deciding what matters |
| Mobile | Computing on the go | Allocating attention |

In every case, human intelligence anchored everything—it was the bottleneck.

AI is different because it performs cognitive work. If AI can actually think, then the expertise and hard‑won skills we’ve built our careers on may not be as defensible as we thought. A junior engineer who spent years developing intuition now works alongside a tool that has it instantly. A financial analyst known for variance analysis now shares that work with an algorithm. People aren’t sure where value actually lives anymore, and that’s terrifying.

From abstract to concrete questions

I talk to CFOs every week. Six months ago they asked abstract questions like:

“What is AI?”
“Should we have an AI strategy?”

Now the questions are concrete:

“Which parts of my team’s work no longer need to be done this way?”

That shift happened so quickly it’s already changing how resources get allocated.

Example: A founder I know started using Claude to write SQL queries that used to take her analyst a couple of days. Did she replace the analyst? Of course not. But she removed the bottleneck and no longer depends on him for quick answers. The analyst’s role changed completely:

  • Before: 60 % of time writing queries, supporting 3 stakeholders.
  • After: 10 % checking queries, 90 % delivering strategic recommendations, supporting 15 stakeholders.

The company didn’t reduce headcount or costs; the analyst’s impact multiplied.
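The before/after figures above imply a simple capacity calculation. A minimal sketch, assuming (as the text suggests, though it doesn’t state it) that time not spent on queries went to strategic work:

```python
# Illustrative arithmetic for the analyst example, using the figures from
# the text. The 40% "strategic" share before is an assumption: it treats
# all non-query time as strategic work.
before = {"strategic_share": 0.40, "stakeholders": 3}
after = {"strategic_share": 0.90, "stakeholders": 15}

strategic_multiplier = after["strategic_share"] / before["strategic_share"]
reach_multiplier = after["stakeholders"] / before["stakeholders"]

print(f"Strategic time: {strategic_multiplier:.2f}x")
print(f"Stakeholders reached: {reach_multiplier:.1f}x")
```

Same headcount, same cost — roughly 2x the strategic time spread across 5x the stakeholders, which is what "impact multiplied" means in concrete terms.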

Why historical comparisons fail

Tools like GitHub Copilot are compressing expertise. A junior engineer can now operate at a level that once required years of experience. And every time the tool is used, it learns. A hammer doesn’t improve just because you built a house with it, but AI tools do. When tools get better through use, the rate of improvement compounds. That dynamic doesn’t fit cleanly into any prior technological analogy, which is why the instinct to call this a “bubble” misses the actual point.
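The hammer-versus-learning-tool contrast above can be sketched numerically. This is a hedged toy model: the 2%-per-use improvement rate is invented purely to show the shape of compounding, not to describe any real system.

```python
# Toy model: a conventional tool vs. a tool that improves through use.
# The learning rate is a made-up illustrative constant.

def static_tool(uses, capability=1.0):
    """A hammer: capability is unchanged no matter how often it's used."""
    return capability

def learning_tool(uses, capability=1.0, learning_rate=0.02):
    """A tool that improves a small fraction per use: gains compound."""
    for _ in range(uses):
        capability *= 1 + learning_rate
    return capability

for uses in (10, 100, 500):
    print(f"{uses:4d} uses: static {static_tool(uses):.2f}x, "
          f"learning {learning_tool(uses):.2f}x")
```

The static tool stays at 1x forever; the learning tool is a few times better after 100 uses and orders of magnitude better after 500 — the compounding dynamic no prior analogy has to account for.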

Previous technologies assumed a fixed ceiling on human cognition. They made us faster and stronger, but the limiting factor was always the same: how many smart people could we put on a problem? AI stretches that ceiling far beyond what we’re used to.

Before, understanding your business better usually meant one of three things:

  1. More data.
  2. More analysts.
  3. More experienced leaders.

The constraint was how much human attention and judgment you could afford. With AI, that constraint shifts. When analysis that once took days appears in seconds, the new constraint is knowing what to look for. What questions matter? The limiting factor stops being talent and starts being judgment.

The skeptics are right about the hype, and wrong about what it means

Let’s take the strongest version of the bubble argument at face value:

  • Maybe AI actually is overhyped, and most of these companies will fail.
  • Maybe we’re early, and real impact takes another five or ten years.

All of that could be completely true, and it still wouldn’t change the core point, which is this:

Even if the majority of AI startups fail, and even if adoption is way slower than expected, the underlying shift in how we create, price, and allocate value is already underway.

The “bubble” label captures our current inability to price discontinuous change, not the inevitability of collapse. The real story is that our valuation frameworks are catching up while AI continues to reshape the economics of cognition.

AI is still the first technology that can perform knowledge work

That doesn’t disappear because markets overshoot or expectations reset. The skeptics are right that the hype is inflated, but they’re wrong that inflated hype makes the technology irrelevant.

We’ve seen this before: the dot‑com bubble was real, and Pets.com crashed and burned, but the internet still changed everything. Both things were true at the same time.

What finance leaders are focusing on

The finance leaders I’m working with are beyond arguing about whether AI matters.
Now they’re trying to understand which workflows change first and how fast they need to adapt. That conversation is happening quietly, underneath all the noise.

The workflows collapsing first share three properties

  1. They require expertise, but they’re repetitive.
  2. They’re bottlenecks to strategic work.
  3. They’re easy to verify but hard to generate.

These workflows are:

  • Important enough to pay for,
  • Not so strategic that automating them threatens competitive advantage.

They require skill, but that skill doesn’t compound dramatically with repetition, which makes them economically fragile—and explains why they’re already being automated away.

Where humans still matter (for now)

  • AI is great at recognizing trends, terrible at knowing which ones actually matter.
  • It can generate variance analysis, but it can’t tell you whether a 12 % swing in spend signals healthy growth or a deeper problem.
  • It can draft strategies, but it can’t tell you which strategy fits this market and this team in this exact moment.

Judgment under uncertainty and high‑stakes trade‑offs where the downside is catastrophic remain human responsibilities—for now.

When the constraint is no longer “do we have enough smart people,” the problem becomes one of priority:

  • What deserves attention?
  • What’s worth building next?

That’s where many founders get stuck. They ask if this is a bubble or if they’re too early, but those aren’t the most useful questions. The right one is:

“What can I build in the next year that creates real value, regardless of what valuations do?”

The companies that last

The companies that quietly iterate and embed AI into actual workflows that solve real problems will endure.

Example: CFOs are

  • buying AI because their board wants faster variance analysis, and
  • tired of hiring analysts who quit after six months.

Those are real‑world problems that companies need to solve.

The same holds for investors: the long‑term winners will be those who tolerate uncertainty long enough to see what actually works.

This time is actually different

  • Short‑term: AI will disappoint. Many use cases won’t deliver what they promise, and a lot of companies formed in this wave won’t survive.
  • Long‑term: The technology will reshape every field that depends on knowledge work. Not all at once, and not evenly, but a decade from now it will be difficult to find a knowledge‑based industry that looks the same as it does today.

AI is different because intelligence itself—historically the core constraint of human innovation—has now become scalable. That’s an observable fact with measurable consequences.

The conversation about bubbles will fade, as it always does. What will remain are the systems that quietly adapted while everyone else argued about valuations.

The skeptics will have been right about the excess, but wrong about what actually mattered. Five years from now, we’ll probably look back at today’s panic the same way we look back at people who dismissed the internet because a handful of companies failed. The winners will be the ones who kept building through the noise.

In time, those are the only stories anyone remembers.

Siqi Chen is co‑founder and CEO of Runway.
