I Just Want to Look Up What I Asked Claude Last Tuesday

Published: March 9, 2026 at 08:30 PM EDT
3 min read
Source: Dev.to

The Problem

Every developer using AI coding assistants has this moment:

“Wait, how did I fix that auth bug last week? I spent 45 minutes with Claude on it…”

You open your terminal history: useless. You check your git log: just commit messages. The actual conversation, the prompts you tried, the reasoning, the code states at each step, is gone.

We generate dozens of AI coding sessions per week, each containing valuable context:

  • The exact prompts that worked (and the ones that didn’t)
  • Why you chose approach B over approach A
  • The intermediate code states before the final solution
  • Terminal outputs that led to breakthroughs

But there’s no good way to go back and look at any of it.

Your browser history shows “claude.ai” 47 times. Super helpful, but not enough.

I didn’t want a fancy workflow tool or to “transform my coding process.” I just wanted to:

  • Find the session where I debugged the WebSocket reconnection issue
  • See the exact prompts I used
  • Look at the code at each step of the conversation

That’s it: a history viewer for AI coding sessions—like browser history, but actually useful.

Findings from Tracking Sessions

I tracked this for two weeks and discovered:

  • 3–4 times per week I wanted to reference a previous AI session
  • ~20 minutes each time spent trying to recreate the context
  • 1–1.5 hours per week wasted on re‑prompting things I’d already solved

Multiply that across a team of five developers and you lose a full workday every week.
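The back-of-envelope math above can be sanity-checked in a few lines. The inputs are the rough figures from my two weeks of tracking, not precise measurements:

```python
# Rough estimate of time lost re-creating AI session context.
# All inputs are the approximate figures from two weeks of tracking.
lookups_per_week = 3.5     # 3-4 attempts per week to reference a past session
minutes_per_lookup = 20    # time spent trying to recreate the context
team_size = 5

hours_per_dev = lookups_per_week * minutes_per_lookup / 60
hours_per_team = hours_per_dev * team_size

print(f"per developer: ~{hours_per_dev:.1f} h/week")          # ~1.2 h/week
print(f"team of {team_size}: ~{hours_per_team:.1f} h/week")   # ~5.8 h/week
```

At roughly 5.8 hours per week for a five-person team, that is in the ballpark of one full workday lost.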

What Really Matters

After experimenting with different approaches, the essential features are:

  • Searchable conversations – not just full‑text search, but the ability to find sessions by problem, touched files, or tools used.
  • Code state at each step – when replaying a session, you need to see what the code looked like at message #5, not just the final result.
  • Terminal context – half the debugging happens in the terminal; missing terminal I/O means missing the plot.
  • Cross‑tool compatibility – sessions should work across Claude Code, Cursor, Codex, Gemini CLI, etc., without being locked to a single tool.
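To make the first two requirements concrete, here is a minimal sketch of what a searchable session record could look like: each message keeps its own code snapshot and terminal output, and search filters by problem keyword, touched file, or tool. Every name here (`Session`, `Message`, `find_sessions`, the field layout) is a hypothetical illustration, not Mantra's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    prompt: str
    response: str
    code_snapshot: dict[str, str]   # file path -> file contents at this step
    terminal_output: str = ""       # terminal I/O captured around this message

@dataclass
class Session:
    tool: str                       # e.g. "claude-code", "cursor"
    problem: str                    # short human description of the session
    messages: list[Message] = field(default_factory=list)

    @property
    def touched_files(self) -> set[str]:
        # Union of every file that appears in any snapshot.
        return {path for m in self.messages for path in m.code_snapshot}

def find_sessions(sessions: list[Session], *, keyword: str = "",
                  file: str = "", tool: str = "") -> list[Session]:
    """Filter sessions by problem keyword, touched file, and/or tool used."""
    return [
        s for s in sessions
        if (not keyword or keyword.lower() in s.problem.lower())
        and (not file or file in s.touched_files)
        and (not tool or s.tool == tool)
    ]
```

With per-message snapshots, "show me the code at message #5" is just `session.messages[4].code_snapshot`, rather than diffing backwards from the final state.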

Introducing Mantra

I’ve been working on Mantra to solve exactly this. It records your AI coding sessions—the full conversation, terminal I/O, and code changes—and lets you replay them later.

Key insight: It’s not about changing how you code; it’s about being able to look back at how you coded.

Core Benefits

  • Invisible recording – no friction; if it slows you down, you’ll turn it off.
  • Fast replay – you’re looking for one specific moment, not watching a movie.
  • Built‑in security – sessions contain API keys, credentials, internal URLs. Sensitive content detection and redaction are mandatory.

The setup takes about 2 minutes, then runs in the background with no workflow changes required.

Key Lessons Learned

  1. Recording must be invisible – any added friction kills adoption.
  2. Replay must be fast – developers need pinpoint access, not a full playback.
  3. Security is non‑negotiable – automatic detection and redaction of sensitive data are essential.

Call to Action

If you’ve ever wished you could just look up what you asked your AI assistant last Tuesday, give Mantra a try. It’s free and works with Claude Code, Cursor, Codex, and Gemini CLI.

What’s your current approach for keeping track of AI coding sessions? I’m curious whether others have found workarounds or if everyone’s just re‑prompting from scratch.
