Code Coherence: The Performance Metric No One Measures

Published: February 11, 2026 at 09:10 PM EST
4 min read
Source: Dev.to

The Seven F12s

I hit F12 seven times trying to figure out why appointmentTimeUTC was missing from an API response. Each jump took me somewhere new:

  1. Component receiving the prop
  2. Parent passing it down
  3. Grid selecting the row
  4. Async fetch populating the grid
  5. Response‑mapping logic
  6. Service making the API call
  7. Endpoint definition

Seven hops. The code compiled, and I still couldn’t answer a simple question: Where does this value actually come from?

I gave up and asked Claude to trace the data flow. Two minutes later: nothing was broken, nothing was deprecated; it had just drifted. appointmentDateTimeUTC was the new name, added two hours before my git pull and never communicated on Slack.

The problem wasn’t tooling. The system resisted understanding, and that resistance is a performance characteristic we don’t measure.


Coherence vs. Incoherence

A coherent system is compressible. You can describe it with a few small rules:

  • All external data is validated and normalized at the boundary.
  • One canonical representation per domain concept.
  • Components never consume raw DTOs.
  • Timestamps are always UTC ISO strings.

Four rules. Hold them in working memory—that’s compression.
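The first rule, validate and normalize at the boundary, can be sketched in a few lines of TypeScript. The `AppointmentDto` shape and its fields are hypothetical, echoing the renamed field from the story:

```typescript
// Hypothetical raw DTO, exactly as the API returns it.
interface AppointmentDto {
  appointmentDateTimeUTC: string; // the field that was silently renamed
}

// Canonical domain shape: one representation, UTC ISO strings only.
interface Appointment {
  appointmentTimeUTC: string;
}

// Boundary mapper: the only place that knows the DTO's shape.
function mapAppointmentDto(dto: AppointmentDto): Appointment {
  const parsed = new Date(dto.appointmentDateTimeUTC);
  if (Number.isNaN(parsed.getTime())) {
    throw new Error(`Invalid timestamp: ${dto.appointmentDateTimeUTC}`);
  }
  return { appointmentTimeUTC: parsed.toISOString() };
}
```

A rename like the one in the story then fails in exactly one file, at the boundary, instead of seven hops deep inside a component.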

An incoherent system feels chaotic. Coherence debt compounds, and it isn’t just an aesthetic issue. In a coherent system, most questions resolve within one layer:

"What is TripLeg?" → open TripLeg.ts
"What does the API return?" → open TripLegDto.ts
"What transforms it?" → open mapTripLegDto.ts

Three hops. Predictable. Stable.
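The three-file pattern above can be sketched as follows; the field names on TripLeg are illustrative, only the file-per-question layout is the point:

```typescript
// TripLegDto.ts — answers "What does the API return?" (fields are illustrative).
interface TripLegDto {
  origin: string;
  destination: string;
  departure_utc: string;
}

// TripLeg.ts — answers "What is TripLeg?": the canonical domain shape.
interface TripLeg {
  origin: string;
  destination: string;
  departureTimeUTC: string;
}

// mapTripLegDto.ts — answers "What transforms it?": the single transform.
function mapTripLegDto(dto: TripLegDto): TripLeg {
  return {
    origin: dto.origin,
    destination: dto.destination,
    departureTimeUTC: new Date(dto.departure_utc).toISOString(),
  };
}
```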

In an incoherent system, the same question becomes a distributed trace:

Component → hook → thunk → service → config → endpoint A → endpoint B → conditional mapping → implicit null semantics → scattered date math

That’s cognitive latency, not runtime latency.


Cognitive Latency

You can approximate it:

Cognitive Latency = hops × context load per hop

Examples

  • 3 hops × 30 seconds = 90 seconds
  • 10 hops × 3 minutes = 30 minutes

…and that assumes you still remember why you started after hop 5.
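The approximation above is simple enough to write down directly, using the two examples as inputs:

```typescript
// Cognitive latency ≈ hops × context load per hop, in seconds.
function cognitiveLatencySeconds(hops: number, secondsPerHop: number): number {
  return hops * secondsPerHop;
}

const coherent = cognitiveLatencySeconds(3, 30);     // 90 seconds
const incoherent = cognitiveLatencySeconds(10, 180); // 1800 seconds = 30 minutes
```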

The bottleneck in modern software isn’t CPU cycles; it’s time‑to‑understanding.


Measuring Cognitive Tax

In 2026 we have empirical ways to measure cognitive latency. Real velocity isn't raw tokens per second; it's how few clarification loops an agent needs before it ships a safe change.

LLMs are compression engines. They thrive on:

  • Stable shapes
  • Consistent naming
  • Predictable layering

When a system forces narrative explanations instead of structural inference, that’s architectural entropy, not an AI limitation.


Impact on Velocity

Consider a 300 K‑line codebase:

  • 7 F12s instead of 3 → ≈ 10 extra minutes per investigation
  • 5 investigations per day × 5 developers → ≈ 4 hours per day lost to cognitive overhead

Over a month (20 work days):

  • ≈ 80 developer‑hours
  • At $95 / hour → ≈ $7,600 / month

That’s ≈ $90 k / year in cognitive tax.
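The back-of-envelope math above, written out with the article's own numbers (the unrounded total lands slightly higher than the rounded figures in the text):

```typescript
// Cognitive tax, using the numbers from the article.
const extraMinutesPerInvestigation = 10; // 7 F12s instead of 3
const investigationsPerDay = 5;
const developers = 5;
const workDaysPerMonth = 20;
const hourlyRate = 95;

const minutesPerDay =
  extraMinutesPerInvestigation * investigationsPerDay * developers; // 250 min/day
const hoursPerMonth = (minutesPerDay * workDaysPerMonth) / 60;      // ≈ 83 h (article rounds to 80)
const monthlyCost = hoursPerMonth * hourlyRate;                     // ≈ $7,917 (article: $7,600)
const yearlyCost = monthlyCost * 12;                                // ≈ $95k/year
```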

We already measure bundle size in kilobytes. In 2015 performance meant flame graphs and Lighthouse scores; we optimized machines. In 2026 performance means:

  • Time‑to‑understanding
  • Time‑to‑safe‑change
  • Time‑to‑confidence

Coherence isn’t about eliminating hops; it’s about making them predictable and cheap to reason about. A clean service mesh with explicit contracts can have 12 physical hops and still feel like 3 cognitive ones. The old monolith with implicit shared state and date‑math roulette? Eight hops and three hours of dread.


Recommendations

If you care about velocity, optimize for compression:

  1. Normalize and validate at the boundary.
  2. One concept → one canonical shape.
  3. Make invariants explicit in types.
  4. Minimize hop distance.
  5. Count the jumps required to answer simple questions.
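Recommendation 3, making invariants explicit in types, can be sketched with a branded type; the brand name and helpers here are illustrative, not from the article:

```typescript
// A branded string: a plain string won't type-check where a UTC ISO
// timestamp is required, so the invariant lives in the type itself.
type UtcIsoString = string & { readonly __brand: "UtcIsoString" };

// The only way to obtain the branded type is through validation.
function toUtcIso(input: string | Date): UtcIsoString {
  const d = input instanceof Date ? input : new Date(input);
  if (Number.isNaN(d.getTime())) {
    throw new Error(`Not a valid timestamp: ${String(input)}`);
  }
  return d.toISOString() as UtcIsoString;
}

// Downstream code accepts only the branded type, never a raw string.
function formatAppointment(time: UtcIsoString): string {
  return `Appointment at ${time}`;
}
```

The compiler then enforces "timestamps are always UTC ISO strings" everywhere, instead of each reader re-deriving it from scattered date math.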

If it takes seven jumps to find the source of truth, your system is slow—even if it runs at 60 fps.

Many codebases survive on:

  • Institutional knowledge
  • AI assistance
  • Developer endurance

Remove the institutional knowledge and onboarding collapses. Remove the AI and the cognitive tax becomes unbearable. We’ve been compensating for incoherence with better tools instead of better boundaries. That works, but it’s not a sustainable solution.


Conclusion

Coherence isn’t polish. It’s a measurable performance metric.

How many F12s does it take to reach the source of truth in your codebase?

If you don't know that number, you're not measuring performance. The number exists; you just haven't been counting it.
