How I Can Still Consult AI About Decisions Made Two Months Ago
Source: Dev.to
Introduction
A few months into using AI for real development work, I noticed something unusual: I could still consult the AI about decisions we made two months ago and get concrete advice, not guesses. This article explains why that is possible and why it has nothing to do with AI memory.
The Symptom
If you’ve used AI in a real project for more than a few weeks, you’ve probably seen:
- The AI forgets earlier decisions.
- Old ideas resurface unexpectedly.
- Rollbacks break context.
- You no longer remember why something was done.
These issues are often blamed on:
- Context‑window limits
- Lack of persistent memory
- Model limitations
But that diagnosis is wrong. The problem isn’t memory; it’s context.
How I Query the AI
I don’t ask the AI to remember past conversations. I ask something much simpler:
“Please look at past decision logs and advise on XXX.”
If I remember roughly when the decision was made, I might add that as a hint; otherwise, I don’t. No strict prompt, no long system prompt dictating behavior. The AI is free to explore, speculate, and make mistakes. The control mechanism is the structure of the information it can access, not the prompt.
Decision History as the Single Source of Truth
Only decisions become history. I do not preserve:
- Session logs
- Daily notes
- Conversational transcripts
- Unfinished thoughts
Instead, I keep a single source of truth: decision diffs. Each entry records:
- What was decided
- Why it was decided
- What changed
- What remained unresolved
If something did not result in a decision, it does not enter history. Exploration is encouraged, but it is stored elsewhere (experiments, probes, partial designs, failed attempts) and never treated as canonical.
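The four fields of a decision diff can be modeled as a small record type. A minimal sketch in Python; the class and field names (`DecisionRecord`, `decided`, `rationale`, and so on) are my own illustration, not a format the article prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One entry in the decision history (a "decision diff")."""
    decided: str        # what was decided
    rationale: str      # why it was decided
    changes: list[str]  # what changed as a result
    open_questions: list[str] = field(default_factory=list)  # what remained unresolved

# Hypothetical example entry
record = DecisionRecord(
    decided="Store decision history as markdown files in the repo",
    rationale="Git already versions them; no extra tooling needed",
    changes=["added docs/decisions/ directory"],
    open_questions=["how to mark a decision as superseded"],
)
```

Note that `open_questions` defaults to an empty list rather than being dropped: unresolved points are first-class data, which is exactly what the records are meant to preserve.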
AI‑Generated Decision Records
The decision history is generated by AI, not written by humans. This is intentional because:
- Humans tend to rewrite history, smooth out uncertainty, and remove unresolved points.
- AI tends to capture decisions as they happened, preserve uncertainty, and explicitly list open questions.
The human role is simply to verify factual accuracy and correct mistakes. We do not polish the narrative; these records are snapshots, not stories.
Reconstructing Context
When I ask the AI about something from two months ago, neither of us relies on memory. We reconstruct context from:
- The decision history that still exists
- The code that followed
- The contracts that were shaped
Everything else has already been filtered out.
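Reconstruction can be as simple as gathering the relevant decision records and handing them to the AI as context. A sketch under the assumption that records live as markdown files in a `docs/decisions/` directory (my own layout choice, not the article's):

```python
from pathlib import Path

def load_decision_context(repo_root: str, keyword: str) -> str:
    """Collect decision records mentioning a keyword, oldest first,
    to paste into the AI's context instead of relying on its memory."""
    decisions_dir = Path(repo_root) / "docs" / "decisions"
    matches = []
    for path in sorted(decisions_dir.glob("*.md")):
        text = path.read_text(encoding="utf-8")
        if keyword.lower() in text.lower():
            # Prefix with the filename so the AI can cite which record it used
            matches.append(f"## {path.name}\n{text}")
    return "\n\n".join(matches)
```

The point is not the code but the direction of control: the AI's answer is shaped by which files exist, not by what it remembers.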
Git as Infrastructure
In this setup, Git does more than version code. It also versions:
- Decisions
- Reasoning
- Collaboration rules
A rollback is not just a code reset; it is a context reset. The AI can only reason about what Git contains. Anything outside Git does not exist for the AI.
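Because the records are versioned alongside the code, "the history at commit X" is a well-defined query. A sketch of what a context reset looks like in practice, assuming the same hypothetical `docs/decisions/` layout:

```python
import subprocess

def decisions_at(commit: str, path: str = "docs/decisions") -> list[str]:
    """List the decision records that existed at a given commit.
    After a rollback to that commit, this is the only history
    the AI should be shown."""
    out = subprocess.run(
        ["git", "ls-tree", "-r", "--name-only", commit, path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()
```

Rolling back and then calling `decisions_at("HEAD")` yields a context that matches the code exactly, which is what makes the reset coherent rather than destructive.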
This is not a logging technique, a prompt‑engineering trick, or an AI memory feature. It is infrastructure. Long‑term AI collaboration requires infrastructure, and that infrastructure lives in the repository.
Why This Matters
Most discussions about AI‑assisted development focus on prompts, agents, tools, and models. Very few talk about what must exist for long‑term reasoning to work at all. The problem only becomes visible after weeks or months of real use, which is why I’m writing this now.
If you can still consult AI about decisions made months ago, it’s because you gave it a past worth consulting.
This article is part of the Context as Infrastructure series—exploring how long‑term AI collaboration depends on structure, not memory.