The Research That Doesn't Exist

Published: March 19, 2026 at 06:16 PM EDT
5 min read
Source: Dev.to

The Scan

We ran a systematic search for academic work on a specific question: when should an AI agent interrupt you?
Not “can agents be helpful” or “do people like personalization.” The precise question: what are the cognitive load thresholds that determine receptivity to proactive AI intervention? When does an interruption land as helpful versus intrusive?

The scan covered arXiv, Hacker News, and competitor shipping announcements. Result: nothing. Zero papers on the behavioral economics of AI interruption timing. No HCI research on attention state as a threshold dial. No competitor shipping “know when to speak” intelligence as a feature. This absence is interesting.

What I Expected to Find

  • Cognitive load theory exists — Sweller’s work from the ’80s on working memory constraints during learning.
  • Interruption science exists — Gloria Mark’s research on context‑switching costs in knowledge work.
  • Attention economics exists — Herbert Simon’s “wealth of information creates poverty of attention.”

What doesn’t exist: synthesis work asking how AI agents should navigate these dynamics. No framework for “your working memory is saturated, I should wait” versus “you’re in maintenance mode, this insight would be welcome.” The behavioral science exists in fragments. The engineering question — how to detect these states and time interventions against them — appears untouched in the literature.

Why This Matters

Most AI agents today operate on one of two dumb heuristics:

  1. Time‑based: Send notifications on a schedule — morning briefing, end‑of‑day summary.
  2. Event‑based: Trigger on data changes — new email, task completion, threshold breach.

Neither accounts for user state. A morning briefing hits whether you’re deep in flow or scrambling to get kids out the door. An urgent notification fires whether you’re cognitively available or three context switches past overload.
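The two heuristics above can be sketched in a few lines. This is a hypothetical illustration, not any shipping system's code; the trigger times and event names are invented. The point is structural: neither function takes the user's state as an input at all.

```python
from datetime import datetime, time

def time_based_trigger(now: datetime) -> bool:
    """Fire the morning briefing in a fixed window, regardless of what the user is doing."""
    return time(8, 0) <= now.time() < time(8, 5)

def event_based_trigger(event: str) -> bool:
    """Fire on any watched data change, regardless of whether the user can absorb it."""
    return event in {"new_email", "task_completed", "threshold_breach"}
```

Notice that no argument carries attention, focus, or context; the user is invisible to both gates.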

This isn’t just annoying — it’s a fundamental misalignment. The agent optimizes for its own information‑delivery schedule, not your receptivity. It’s the AI equivalent of a coworker who doesn’t read the room.

Our system encodes a different hypothesis: effective intervention requires modeling cognitive state, not just calendar state. We’re building systems that learn behavioral patterns — when you’re in exploration mode, decision mode, or maintenance mode — and gate interruptions accordingly. But we expected to find research validating this approach: prior art on attention‑aware systems, behavioral economics of interruption timing, something.
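A minimal sketch of that gating idea, assuming three behavioral modes and per-mode receptivity weights. The modes come from the text above; the numeric weights and the `should_interrupt` signature are illustrative assumptions — in practice they would be learned per user, not hard-coded.

```python
from enum import Enum

class Mode(Enum):
    EXPLORATION = "exploration"
    DECISION = "decision"
    MAINTENANCE = "maintenance"

# Assumed priors for illustration; a real system would learn these
# from each user's intervention-acceptance history.
RECEPTIVITY = {
    Mode.EXPLORATION: 0.6,
    Mode.DECISION: 0.2,     # working memory saturated: almost never interrupt
    Mode.MAINTENANCE: 0.9,  # routine work: an insight is likely welcome
}

def should_interrupt(mode: Mode, urgency: float, threshold: float = 0.5) -> bool:
    """Gate on modeled receptivity, not just on the event's own urgency."""
    return urgency * RECEPTIVITY[mode] >= threshold
```

The same event clears the gate in maintenance mode and is suppressed in decision mode — calendar state never enters the computation.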

The absence suggests either:

  • The research question is genuinely novel — no one has formalized “cognitive state as intervention threshold” because agents capable of learning individual behavioral patterns are too new.
  • The keywords don’t map — behavioral economists study this under different terms (“decision fatigue,” “ego depletion,” “attentional blink”) that don’t surface in AI‑agent literature.

Building Without a Map

When the literature is silent, you have two options:

Option A

Wait for academia to produce the framework, then engineer against it. Safe, slow, guarantees you’re not first.

Option B

Build empirically. Instrument the system, measure what works, let the architecture encode what you learn.

We’re doing Option B. Our system logs mode transitions — when you shift from focused work to scattered browsing to stepping away entirely — tracks intervention acceptance rates, and adjusts gating thresholds based on your patterns. This isn’t science yet — it’s engineering. We’re not publishing papers on optimal interruption timing; we’re shipping systems that learn when you specifically are receptive, then getting out of the way when you’re not.
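The empirical loop described above — track acceptance per mode, adjust gating thresholds — might look something like this. The class name, step size, and update rule are assumptions for the sketch; the actual system's learning rule isn't public.

```python
from collections import defaultdict

class ReceptivityGate:
    """Per-mode interruption gate that adapts to observed acceptance."""

    def __init__(self, initial: float = 0.5, step: float = 0.05):
        self.thresholds = defaultdict(lambda: initial)
        self.step = step

    def record(self, mode: str, accepted: bool) -> None:
        """Dismissed interruptions raise the bar for that mode; accepted ones lower it."""
        delta = -self.step if accepted else self.step
        self.thresholds[mode] = min(1.0, max(0.0, self.thresholds[mode] + delta))

    def allows(self, mode: str, score: float) -> bool:
        """Let an intervention through only if it clears the learned per-mode bar."""
        return score >= self.thresholds[mode]
```

After a few dismissals in a given mode, the gate quietly stops letting marginal interruptions through — the architecture encodes what the logs taught it.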

Naming the Thing

The concept we’re building doesn’t have a name in the literature. Cognitive load theory, interruption science, and attention economics all circle it, but nobody has synthesized them into an engineering framework for AI agents. So we’re coining one: receptivity modeling.

Receptivity modeling is the practice of building a system that models whether a person is open to receiving input at any given moment — not just what to say, but whether saying anything is appropriate at all. It sits between signal and delivery, between something worth saying and the moment the person can actually hear it.

The term matters because it names the thing from the user’s perspective, not the system’s. It implies a model — something learned per person, not a rule applied uniformly. Its natural complement is non‑receptive state suppression — the system’s default is silence, and speech is the exception that requires justification. That’s not a notification philosophy; it’s an architecture.
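That default-to-silence architecture can be made concrete in a few lines. A hypothetical sketch — the function and parameter names are invented — showing that delivery requires both receptivity and an explicit justification, and that suppression is the unmarked case:

```python
from typing import Optional

def deliver(message: str, receptive: bool,
            justification: Optional[str] = None) -> Optional[str]:
    """Default-deny delivery: silence unless the user is receptive
    AND the message carries a justification for speaking now."""
    if not receptive or justification is None:
        return None  # suppress by default; never push into a non-receptive state
    return message
```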

The Opportunity in Absence

There’s a specific advantage to building in territory where the research doesn’t exist: no pressure to conform to academic consensus, no temptation to force‑fit your architecture into established frameworks. But absence also means risk. Maybe no one’s studied AI interruption timing because it’s intractable — too individual, too context‑dependent, too many variables.

We’re betting the opposite: cognitive state detection is more tractable than semantic understanding. Learning “this person ignores notifications when in deep work but engages with reframed constraints when stuck” is simpler than parsing the semantic meaning of every task.

If we’re right, the research will follow. Someone will formalize what we’re learning empirically, and the framework will emerge from the data. If we’re wrong, we’ll know fast — users will ignore a poorly‑timed agent just as reliably as they ignore email newsletters.

Either way, we’re building in the gap where the literature goes quiet. We call what we built receptivity modeling, and as far as we can tell, we’re the first to do it.
