How I gave Claude Code access to real user behavior
Source: Dev.to
When I’m working with Claude Code, it’s great at reasoning about code but blind to everything that happens after deployment. It doesn’t know which flows users actually follow, where they hesitate, or what they never discover. I soon realized I was spending more time explaining user behavior to Claude than actually solving the problem, so I let Claude read the behavior directly.
Step 1: Capture only high‑signal user behavior
The first requirement was to capture real user behavior without slowing down the app or collecting noise. Most session‑replay tools capture the full DOM and every mutation, which adds noticeable overhead and a lot of irrelevant data.
For this setup the tracking script is intentionally lightweight and opinionated:
- captures only essential interaction signals
- does not record full DOM snapshots or mutations
- captures no PII
The goal isn’t replay fidelity; it’s to provide just enough signal for an LLM to understand how users interact with the app.
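To make the constraints concrete, here is a sketch of what one such event record could look like (Python for illustration; the real tracker runs as JavaScript in the browser, and the field names here are my assumptions, not the tool's actual schema):

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class InteractionEvent:
    """One high-signal interaction: no DOM snapshot, no mutations, no PII."""
    session_id: str    # random session ID, not tied to a user identity
    page_path: str     # path only ("/checkout"), never full URLs with query params
    event_type: str    # e.g. "click", "navigate", "rage_click"
    element: str       # stable selector or accessible label, never inner text
    timestamp_ms: int

event = InteractionEvent(
    session_id="s_9f2c",
    page_path="/checkout",
    event_type="click",
    element="button#pay-now",
    timestamp_ms=1_700_000_000_000,
)
print(asdict(event)["page_path"])  # -> /checkout
```

The point of the shape is what it leaves out: no text content, no form values, no full URLs — just enough structure for a model to reason about behavior.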
Step 2: Auto‑capture and structure everything
There is no manual event tagging. All interactions are auto‑captured and organized into a structured model:
- page paths
- elements users interact with
- navigation patterns
Over time this forms an inventory of the app, describing:
- which pages exist
- which elements matter
- how users move between them
Claude Code needs to know which entities real users interact with, and how those entities map back to the codebase. This makes it possible to correlate statements like:
“users keep clicking this button”
with:
“this component in the code behaves like this”
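The inventory step above can be sketched as a simple aggregation over the captured events (a minimal illustration, assuming events are dicts shaped like the tracker's output; not the tool's actual pipeline):

```python
from collections import Counter

def build_inventory(events):
    """Aggregate raw events into an app inventory:
    which pages exist, which elements matter, how users move between pages."""
    pages = set()
    element_counts = Counter()   # (page, element) -> interaction count
    transitions = Counter()      # (from_page, to_page) -> count
    last_page = {}               # session_id -> last page seen

    for e in events:
        pages.add(e["page_path"])
        if e.get("element"):
            element_counts[(e["page_path"], e["element"])] += 1
        prev = last_page.get(e["session_id"])
        if prev and prev != e["page_path"]:
            transitions[(prev, e["page_path"])] += 1
        last_page[e["session_id"]] = e["page_path"]

    return {"pages": sorted(pages),
            "top_elements": element_counts.most_common(10),
            "transitions": transitions}

events = [
    {"session_id": "s1", "page_path": "/", "element": "a#pricing"},
    {"session_id": "s1", "page_path": "/pricing", "element": "button#buy"},
    {"session_id": "s2", "page_path": "/", "element": "a#pricing"},
]
inv = build_inventory(events)
print(inv["transitions"][("/", "/pricing")])  # -> 1
```

No manual tagging appears anywhere here — the structure falls out of the raw event stream.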
Step 3: Select and pre‑process high‑signal sessions
Raw session data is still too noisy to feed directly to Claude Code. The system cherry‑picks high‑signal sessions, such as:
- frustrated sessions
- unusual navigation patterns
- sessions around specific pages or elements
These sessions are processed with an LLM to:
- summarize what happened
- extract common flows
- highlight friction points
- build visitor‑level profiles
The output is ready‑to‑use context rather than raw logs, keeping the information small, relevant, and useful.
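The selection step can be approximated with a simple heuristic score (thresholds and signals here are illustrative, not the tool's actual scoring logic):

```python
def frustration_score(session_events):
    """Heuristic friction score: rage clicks and A -> B -> A
    navigation loops (the user bouncing back) count as friction."""
    score = sum(3 for e in session_events if e["event_type"] == "rage_click")
    pages = [e["page_path"] for e in session_events
             if e["event_type"] == "navigate"]
    for a, b, c in zip(pages, pages[1:], pages[2:]):
        if a == c and a != b:
            score += 2
    return score

def select_sessions(sessions, top_k=20):
    """Cherry-pick the top-k highest-signal sessions for LLM summarization."""
    ranked = sorted(sessions.items(),
                    key=lambda kv: frustration_score(kv[1]), reverse=True)
    return [sid for sid, ev in ranked[:top_k] if frustration_score(ev) > 0]

sessions = {
    "s1": [{"event_type": "rage_click", "page_path": "/checkout"}],
    "s2": [{"event_type": "navigate", "page_path": p}
           for p in ["/", "/help", "/"]],
    "s3": [{"event_type": "click", "page_path": "/"}],
}
print(select_sessions(sessions))  # -> ['s1', 's2']
```

Only the sessions that survive this filter are worth spending LLM tokens on; everything else is noise.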
Step 4: Expose the processed context via MCP
Claude Code supports MCP (Model Context Protocol), which lets external systems expose tools that Claude can call. The MCP server provides several tools at different levels of granularity:
- app‑level overviews
- page‑level behavior summaries
- specific visitor profiles
- individual sessions for deep dives
This enables a top‑down workflow:
- start from a high‑level usage overview
- zoom into a problematic page
- drill down into specific sessions or visitors
From Claude’s point of view, this is just structured context it can request when needed.
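The tool surface can be sketched as follows (a stdlib stand-in for illustration — a real server would use an MCP SDK, and the tool names and summaries here are my assumptions, not Lcontext's actual API):

```python
# Pre-processed context, keyed by granularity level (sample data).
PROCESSED = {
    "overview": "Most traffic flows / -> /pricing -> /checkout; drop-off at /checkout.",
    "pages":    {"/checkout": "Users hesitate on the payment form; 3 rage-click hotspots."},
    "visitors": {"v_42": "Power user; visits daily, relies on keyboard shortcuts."},
    "sessions": {"s_9f2c": "Abandoned checkout after two failed coupon attempts."},
}

# One tool per granularity level, mirroring the list above.
TOOLS = {
    "app_overview":    lambda args: PROCESSED["overview"],
    "page_summary":    lambda args: PROCESSED["pages"].get(args["path"], "no data"),
    "visitor_profile": lambda args: PROCESSED["visitors"].get(args["visitor_id"], "no data"),
    "get_session":     lambda args: PROCESSED["sessions"].get(args["session_id"], "no data"),
}

def call_tool(name, args):
    """Dispatch a named tool call the way an MCP client would."""
    return TOOLS[name](args)

print(call_tool("page_summary", {"path": "/checkout"}))
# -> Users hesitate on the payment form; 3 rage-click hotspots.
```

Each tool returns a small, pre-digested string rather than raw logs, which is what makes the top-down workflow cheap for the model.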
Step 5: Use it directly inside the terminal
Now everything happens inside Claude Code. Instead of vague prompts like:
“Users seem confused on onboarding”
I can ask:
- “Which pages have the highest frustration signals?”
- “How do users typically reach this feature?”
- “What happens in sessions where users abandon checkout?”
Claude answers based on pre‑processed real usage data, not guesses or manually described context.
Demo
Below is a short video showing this end‑to‑end workflow, entirely inside the terminal. (video omitted in text version)
What changed for me
The biggest difference wasn’t better answers, but less explanation:
- No dashboards.
- No screenshots.
- No manual summarizing before prompting.
I stayed in a single loop: code → usage → reasoning → code.
The tool behind this
I wrapped the approach into a tool called Lcontext. It combines:
- a lightweight, opinionated tracking script
- automatic structuring of app entities
- LLM‑based preprocessing of high‑signal sessions
- an MCP server exposing this context to Claude Code
It’s still early and evolving, but it’s been useful enough in my workflow to share publicly.
Links
- Project site:
- MCP server (open source):