AI Agents, Source Context, and Prompt History: A New Software Development Paradigm
Source: Dev.to
Software Development is Shifting from “Writing Code” to “Curating Intent”
Modern LLMs can produce a large part of an implementation if the AI agent is grounded in the project’s truth.
The Simplest Mental Model
- AI agent – like a skilled developer with anterograde amnesia: highly capable, but remembers nothing between sessions.
- Source context – curated, modular documentation that defines requirements, constraints, architecture, and invariants.
- Prompt history – the running dialogue that captures decisions, feedback, and rationale as the project evolves.
Together, these form a language‑native codebase: a project defined by intent and constraints, with code generated and maintained under human oversight.
This aligns with Andrej Karpathy’s “Software 3.0” framing: prompts and context increasingly behave like programs, and development becomes a conversation where natural language is the dominant control surface.
What the Claim Is (and Isn’t)
- Not: “Docs replace code.”
- Yes: Context becomes the project’s constitution; code remains the executable artifact.
AI‑First Repo as a Layered System
| Layer | Description |
|---|---|
| 1️⃣ Code | The executable artifact (still necessary). |
| 2️⃣ Source context | The normative spec – “what must be true”. |
| 3️⃣ Prompt history | Working memory + rationale – “why we chose this”. |
| 4️⃣ Agent | The compiler/contributor that converts (2)+(3) into (1) under review. |
The breakthrough is treating layers 2 and 3 as versioned, reviewed, and intentionally maintained—not as accidental chat logs.
Practical Pattern: Module‑Scoped Context Files
Large systems fail with a single monolithic context file for the same reason monolithic codebases rot: everything is coupled.
Instead, create small, focused context files that act as a “README for both humans and agents”.
- `product_context.md`
- `orders_context.md`
- `payment_context.md`
- `user_auth_context.md`
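Loading only the relevant module files can be sketched in a few lines of Python. The file names follow the article's examples; the domain-to-file mapping and the `load_context` helper are illustrative assumptions, not a standard tool:

```python
from pathlib import Path

# Hypothetical mapping from task domains to module-scoped context files.
# The file names follow the article; the mapping itself is an assumption.
CONTEXT_FILES = {
    "product": "product_context.md",
    "orders": "orders_context.md",
    "payment": "payment_context.md",
    "user_auth": "user_auth_context.md",
}

def load_context(domains, root="."):
    """Concatenate only the context files relevant to the current task."""
    chunks = []
    for domain in domains:
        path = Path(root) / CONTEXT_FILES[domain]
        if path.exists():  # tolerate modules without a context file yet
            chunks.append(path.read_text(encoding="utf-8"))
    return "\n\n---\n\n".join(chunks)
```

An agent working on orders would call `load_context(["orders", "payment"])` and never see auth or catalog rules it does not need.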
Benefits
- Clarity – agents load only what matters.
- Separation of concerns – requirements and constraints evolve locally.
- Easier onboarding – humans and agents ramp faster.
- Parallel work – multiple agents can operate safely in different domains.
Reliable Context File Shape
Each `*_context.md` should contain:
- Purpose / Non‑goals
- Public API / Contracts (endpoints, events, schemas)
- Core invariants (“must always hold”)
- Data model (field meaning; avoid raw schema dumps)
- Workflows / State machines
- Security & privacy constraints
- Operational constraints (latency, retries, idempotency)
- Failure modes & recovery
- Observability (logs/metrics/traces expectations)
- Test expectations (golden paths + edge cases)
- Changelog (dated, human‑readable)
Key: Context should state constraints and invariants, not mirror implementation details.
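A lightweight checker can flag context files that drift from this shape. The section list below abridges the article's checklist; the `missing_sections` helper itself is an illustrative sketch, not an established lint rule:

```python
import re

# Abridged from the recommended context-file shape; the checker is a
# hypothetical sketch a team might wire into CI.
REQUIRED_SECTIONS = [
    "Purpose", "Public API", "Core invariants", "Data model",
    "Security", "Failure modes", "Test expectations", "Changelog",
]

def missing_sections(markdown_text):
    """Return the recommended sections a context file does not yet cover."""
    headings = re.findall(r"^#{1,6}\s+(.*)$", markdown_text, flags=re.MULTILINE)
    joined = " ".join(headings).lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in joined]
```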
From Conversation to Governance
- Conversation = meeting transcript.
- Context = meeting minutes.
Distilling Prompt History
A mature workflow actively extracts decisions from the prompt history and updates context files:
- If a decision affects future work → record it.
- If a rule is corrected → add it to the relevant context file.
- If a decision changes → mark the old one as superseded.
This turns “chat” into governance.
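The distillation step can even be partly mechanized. This sketch assumes a team convention of tagging durable decisions in chat with markers like `DECISION:`; the markers, `distill`, and `changelog_entry` are all illustrative assumptions:

```python
# Markers that flag a chat line as a durable decision (an assumed
# convention; a team would pick its own tags and tooling).
DECISION_MARKERS = ("DECISION:", "RULE:", "SUPERSEDES:")

def distill(transcript_lines):
    """Pull tagged decision lines out of a prompt-history transcript."""
    return [ln.strip() for ln in transcript_lines
            if ln.strip().startswith(DECISION_MARKERS)]

def changelog_entry(decisions, date):
    """Format distilled decisions as a dated, human-readable changelog entry."""
    return "\n".join(f"- **{date}** – {d}" for d in decisions)
```

The output is exactly the kind of dated entry the context-file shape calls for, ready for human review in the same PR.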
Updating Context When Implementing Features
When an agent implements a feature, it should update the relevant context file(s) in the same PR:
- Add or revise invariants/contracts.
- Record edge cases discovered during implementation.
- Add a dated changelog entry.
Example (simplified)
# orders_context.md – Updated 2024‑11‑03
## Core Invariants
- **Order total** must always be ≥ 0.
- **Order status** transitions must follow the state machine:
`created → paid → shipped → delivered`.
## New Edge Case
- If a payment is refunded after `shipped`, the order must transition to `refunded` and trigger a restock workflow.
## Changelog
- **2024‑11‑03** – Added invariant for non‑negative order total and documented refund‑after‑ship edge case.
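The invariants above are exactly the kind of constraint an agent can turn into code. A minimal sketch of the documented state machine, including the refund-after-ship edge case (the transition table and `transition` helper are illustrative; a real system would persist state and trigger the restock workflow transactionally):

```python
# Allowed transitions from orders_context.md, including the
# refund-after-ship edge case.
TRANSITIONS = {
    "created": {"paid"},
    "paid": {"shipped"},
    "shipped": {"delivered", "refunded"},  # refund after shipping
    "delivered": set(),
    "refunded": set(),
}

def transition(order, new_status, on_refund=None):
    """Apply a status change while enforcing the context file's invariants."""
    if order["total"] < 0:
        raise ValueError("invariant violated: order total must be >= 0")
    if new_status not in TRANSITIONS[order["status"]]:
        raise ValueError(f"illegal transition: {order['status']} -> {new_status}")
    order["status"] = new_status
    if new_status == "refunded" and on_refund is not None:
        on_refund(order)  # e.g. kick off the restock workflow
    return order
```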
TL;DR
- Treat context as a versioned constitution.
- Keep context files small, focused, and stable.
- Distill decisions from chat into context updates.
- Let the AI agent generate code from (source context + prompt history) under human review.
By doing so, teams can harness LLMs effectively while maintaining rigorous, auditable software development practices.
Changelog
2026‑02‑02
- Added wishlist support: users can store product IDs in `wishlistItems`.
- Added endpoints: `GET /users/{id}/wishlist`, `POST /users/{id}/wishlist`, `DELETE /users/{id}/wishlist`.
- Enforced owner‑only access and idempotent add/remove behavior.
Preventing Context Drift
Docs and code must change together. A practical AI‑first loop looks like this:
| Phase | Description |
|---|---|
| Plan | Human states the goal. Agent proposes an approach and lists affected modules. |
| Review | Human checks architecture, security, and product intent. Agent revises the plan. |
| Implement | Agent writes code and tests, following context rules. |
| Verify | Run tests, static checks, and human review. Agent fixes any issues. |
| Update Context | Agent updates module context files and the changelog. Human reviews the changes. |
This is not “autopilot.” It’s delegation with constraints.
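The five phases can be sketched as a minimal sequential pipeline. The phase names mirror the table above; the handler interface (functions taking and returning a state dict, with a `blocked` flag for human gates) is a hypothetical design:

```python
# The plan → review → implement → verify → update-context loop as a
# minimal pipeline; handlers are stubs for agent and human steps.
PHASES = ["plan", "review", "implement", "verify", "update_context"]

def run_change(goal, handlers):
    """Run one delegated change through the phases, in order."""
    state = {"goal": goal}
    for phase in PHASES:
        state = handlers[phase](state)
        if state.get("blocked"):  # a human gate may halt the loop at any phase
            break
    return state
```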
Example: Adding a Wishlist to an Existing E‑Commerce System
Modules Involved
- `product_context.md` – product IDs, catalog lookup rules.
- `user_auth_context.md` – identity, authorization constraints.
Agent’s Initial Load
- Global rules (`AGENTS.md`).
- `product_context.md`.
- `user_auth_context.md`.
- Most recent changelog entries.
Proposed Implementation
- Data Model – Store `wishlistItems: string[]` (product IDs) on the user profile.
- Endpoints – Add, view, and remove wishlist items.
- Validation – Verify the product exists before adding.
- Idempotency – Prevent duplicate entries.
- Authorization – Enforce “owner‑only” access.
Human‑Provided Additions
- Removal endpoint details.
- Maximum wishlist size.
- Logging requirements.
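The agent's proposal plus the human additions condense into a small amount of rule-enforcing code. A sketch of the add path (the `MAX_WISHLIST_SIZE` value, function signature, and `product_exists` callback are illustrative assumptions):

```python
MAX_WISHLIST_SIZE = 50  # assumed limit; the article leaves the number open

def add_to_wishlist(user, requester_id, product_id, product_exists):
    """Add a product ID to user['wishlistItems'] under the documented rules."""
    if requester_id != user["id"]:
        raise PermissionError("owner-only access")  # authorization
    if not product_exists(product_id):
        raise ValueError("unknown product")         # validation
    items = user.setdefault("wishlistItems", [])
    if product_id in items:
        return user                                 # idempotent add
    if len(items) >= MAX_WISHLIST_SIZE:
        raise ValueError("wishlist is full")        # human-provided size limit
    items.append(product_id)
    return user
```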
Outcome
- Agent implements, tests, and updates both context files.
- The feedback‑to‑distillation loop makes future work faster and safer.
Benefits of This Workflow
| Benefit | Explanation |
|---|---|
| Speed | Agent handles boilerplate and rapid iteration. |
| Maintainability | Context stays synchronized because it’s part of the change flow. |
| Onboarding | Context files serve as human‑readable, module‑level truth. |
| Consistency | Standards live in context, not in tribal memory. |
| Compounding Improvement | Every correction becomes durable guidance. |
Risks & Considerations
- Context Size – Too much context can cause “lost in the middle.”
- Ambiguity – Gaps let agents hallucinate; constraints must be explicit.
- Maintenance Overhead – Context files need ownership and regular review.
- Cost/Tooling – Large contexts can be expensive to process.
- Security – Context is a new attack surface; treat it accordingly.
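One mitigation for the context-size and cost risks is budgeted loading: rank context chunks and pack only what fits. This sketch uses character counts as a rough stand-in for tokens (tokenization is model-specific), and `fit_context` is an illustrative helper:

```python
def fit_context(chunks, budget_chars):
    """Greedily pack (priority, text) chunks into a character budget,
    highest-priority (lowest number) first."""
    selected, used = [], 0
    for _, text in sorted(chunks, key=lambda c: c[0]):
        if used + len(text) <= budget_chars:
            selected.append(text)
            used += len(text)
    return selected
```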
Further Reading & Example Repository
A hands‑on example of this paradigm is available at:
🔗 github.com/cangiremir/context-driven-ai-development
The repository demonstrates:
- A modular source‑context architecture.
- Human‑in‑the‑loop governance rules.
- Example context files, ADRs, and prompt templates aligned with the ideas described above.
Vision
If this trajectory continues, a repository evolves from a pile of source code into a knowledge system:
- Context docs define intent and constraints.
- Decisions & rationales are preserved.
- Agents translate intent into code.
- Code is continuously verified and regenerated.
In that world, a software engineer’s role expands: you’re not only writing code—you’re designing systems of constraints that both humans and agents can execute.