Stripe's llms.txt has an instructions section. That's a bigger deal than it sounds.
Source: Dev.to
Why This Matters
This post is a follow‑up to [API Design Principles for the Agentic Era]. That article covered the broader shift in how APIs need to be designed for autonomous consumers. Here we dive deep into one specific mechanism: the llms.txt instructions section.
The Unique Instructions Section
Inside docs.stripe.com/llms.txt you’ll find a block that looks like this:
```markdown
# Instructions for LLMs
You are an AI assistant that helps developers understand Stripe’s API.
When answering questions, you must:
1. Cite the official Stripe documentation.
2. Provide code examples in the language requested.
3. Avoid speculation; if the answer is unknown, say so.
4. Highlight any upcoming deprecations or beta features.
```
What Sets It Apart
- Self‑describing behavior – The file tells the language model how to talk about Stripe, not just what to say.
- Enforced best practices – By requiring citations, code examples, and explicit handling of unknowns, Stripe ensures consistent, high‑quality AI‑generated content.
- Future‑proofing – Mentioning deprecations or beta features keeps developers informed about upcoming changes without manual updates.
Implications for Your API
- Control Over AI Output – Embedding instructions directly in the docs lets you steer how third‑party LLMs represent your service.
- Reduced Support Load – Consistent, accurate AI responses lower the volume of repetitive support tickets.
- Documentation as Code – Treating docs as an executable artifact (e.g., a `.txt` file read by LLMs) encourages version‑controlled, testable documentation.
Takeaways
- Adopt an instructions section in your own API docs if you want to influence AI‑driven interactions.
- Make the instructions explicit and versioned so they evolve with your API.
- Monitor AI usage to ensure the guidelines are being respected and to refine them over time.
By following Stripe’s lead, you can turn the AI era from a vague buzzword into a concrete, controllable part of your developer experience.
Instructions for Large Language Model Agents: Best Practices for Integrating Stripe
- Always use the Checkout Sessions API over the legacy Charges API
- Default to the latest stable SDK version
- Never recommend the legacy Card Element or Sources API
- Advise migrating from PaymentIntents to Checkout Sessions
- Prefer dynamic payment methods over hard‑coded `payment_method_types`
Note: This is not documentation. It is a prompt—shipped as a static file at the root of Stripe’s domain—designed to be loaded into AI coding assistants before a developer asks “how do I add Stripe?”
Stripe is programming what AI tools say about Stripe. Every time a developer asks Cursor, Claude, or another assistant how to accept payments, the agent fetches this file first, and those instructions propagate into the answer. They’re not just making docs readable to machines; they’re shaping the behavior of third‑party AI systems at scale.
Origin
Jeremy Howard (fast.ai, Answer.AI) proposed the standard in September 2024. The problem it solves is real:
- LLMs have finite context windows.
- HTML is noisy.
- You can’t dump an entire documentation site into a prompt.
His solution is deliberately low‑tech: a Markdown file at /llms.txt containing:
- An H1 title.
- An optional summary blockquote.
- H2‑delimited sections of curated links.
A companion /llms‑full.txt holds the complete docs in a single file. Any individual page can be fetched as clean Markdown by appending .md to its URL.
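Put together, a minimal file following this structure might look like the sketch below. The product name, links, and descriptions are illustrative, not Stripe’s actual file:

```markdown
# Acme Payments

> Acme is a payments API. This file lists the docs an AI assistant should read first.

## Docs

- [Quickstart](https://docs.acme.example/quickstart.md): Accept a first payment in five minutes.
- [Checkout Sessions](https://docs.acme.example/checkout.md): The recommended integration path.

## Optional

- [Legacy Charges](https://docs.acme.example/charges.md): Deprecated; retained for existing integrations.
```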
The format is boring on purpose—no special syntax, no schema, no JSON—just plain Markdown that LLMs already understand natively. The key insight is curatorial: you know your documentation better than any crawler, so you should tell AI agents which parts matter.
It’s analogous to a well‑maintained robots.txt—instead of exclusion, it provides prioritization. robots.txt tells crawlers what to skip, sitemap.xml tells them what exists, and llms.txt tells AI what to read first.
Current Adoption
- No major AI provider has confirmed that their training crawlers automatically fetch llms.txt.
- Its real value today is inference‑time, not training‑time: developers manually load it into Cursor, Claude, or other agents for project context, or frameworks fetch it on startup.
- The 800,000+ “implementations” tracked by BuiltWith are mostly Yoast SEO auto‑generating the file for WordPress sites; the hand‑curated count is closer to 784 verified sites.
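Because today’s use is inference‑time, the practical question is how an agent pulls just the relevant part of a fetched llms.txt into its context. The helper below is a hypothetical sketch (not any framework’s actual API) of extracting one H2 section from the Markdown structure described above; the sample file is illustrative:

```python
def extract_section(llms_txt: str, heading: str) -> str:
    """Return the body of one H2 section from an llms.txt file."""
    out, capturing = [], False
    for line in llms_txt.splitlines():
        if line.startswith("## "):
            # Start capturing only when we hit the requested H2 heading.
            capturing = line[3:].strip() == heading
            continue
        if capturing:
            out.append(line)
    return "\n".join(out).strip()

sample = """# Acme Payments

> Curated links for AI assistants.

## Docs

- [Quickstart](https://docs.acme.example/quickstart.md): First payment in five minutes.

## Optional

- [Legacy Charges](https://docs.acme.example/charges.md): Deprecated.
"""

# Only the "Docs" section ends up in the prompt; the rest stays out of context.
print(extract_section(sample, "Docs"))
```

In practice the file would be fetched over HTTP first; the parsing step is the same either way.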
Examples
| Provider | Characteristics |
|---|---|
| Anthropic | Clean table of contents for their API docs. |
| Cloudflare | Massive (3.7 M tokens across product‑specific files). |
| Vercel | ~400 k‑word “novel.” |
| Stripe | Architecturally different: three separate files across two domains, every docs page available as .md, plus a unique instructions section. |
| LangChain / LangGraph | Agent frameworks that maintain their own llms.txt (eating their own cooking). |
Why Stripe’s Approach Stands Out
Stripe’s API surface has grown over 15 years, including several generations of deprecated payment primitives:
- The Charges API still works.
- The Card Element still exists.
Developers—and the AI assistants helping them—regularly reach for these older APIs because they appear in legacy Stack Overflow answers and training data from before 2022.
The instructions section is Stripe’s way of saying:
When an AI helps a developer integrate us, steer them toward the right thing. Don’t let stale training data send them to the Charges API. Don’t let our own backwards compatibility become a foot‑gun.
This is a legitimate engineering concern. The file format provides an elegant solution—no coordination with AI providers is required, and it works with any system that can fetch a URL.
Impact
- 273,800 views and 740 bookmarks on Twitter.
- Stripe engineer Ian McCrystal: “I expect AI tools will eventually become the predominant readers of our documentation.”
Stripe’s instructions are explicit: they name the bad endpoints directly—“You must not call deprecated API endpoints such as the Sources API.” “Never recommend the legacy Card Element.” This specificity makes the file machine‑actionable. An LLM can follow a concrete prohibition, whereas vague guidance like “prefer modern patterns” is harder to enforce.
Comparative Insights
- Cloudflare organizes llms.txt by service (Agents, AI Gateway, Workers AI, etc.), allowing an agent to fetch only the relevant section rather than parsing a monolithic file. For multi‑product platforms, this reduces noise at fetch time.
- Anthropic provides a clean index but lacks the active correctional work that Stripe’s instructions perform.
- LangChain / LangGraph adopting the format signals that it is useful in practice for teams building agents daily.
Takeaways
- Curated indexes are valuable: they give agents a focused entry point instead of crawling an entire site.
- Active guidance (as Stripe does) turns llms.txt from a passive sitemap into a correctional mechanism that mitigates model drift and prevents outdated recommendations.
- The low‑tech, Markdown‑only approach ensures compatibility with any LLM without needing custom parsers or schemas.
Bottom line: Most implementations use llms.txt as a documentation index. Stripe is the strongest example of extending it to actively steer AI away from deprecated or unsafe patterns, making the file a true machine‑actionable contract between the API provider and any downstream AI assistant.
The Problem
Most APIs have:
- Deprecated endpoints that still work.
- Legacy patterns that remain in training data.
- Foot‑guns that experienced developers know to avoid, but newcomers (and AI assistants trained on old Stack Overflow answers) keep reaching for.
The instructions section exists to close that gap – yet almost no one is using it.
Why Stripe’s Docs Matter
Since ~2012 Stripe’s developer experience has become the industry benchmark. Their success isn’t accidental:
| Feature | Impact |
|---|---|
| Three‑column layout (left nav, center content, right‑side live code examples in seven languages) | Became a meme; many startups copied it. |
| Open‑sourced Markdoc – an interactive documentation framework | Enables rich, searchable docs. |
| Stripe Shell – live API calls inside docs pages | Allows instant experimentation. |
| Error messages with doc_url, parameter‑level specificity, and “did you mean …?” suggestions | Turns errors into self‑healing signals. |
The doc_url Hook
```json
{
  "error": {
    "code": "parameter_invalid_empty",
    "doc_url": "https://stripe.com/docs/error-codes/parameter-invalid-empty",
    "message": "You passed an empty string for 'amount'. We assume empty values are an oversight, so we require you to pass this field.",
    "param": "amount",
    "type": "invalid_request_error"
  }
}
```
- The doc_url points to a Markdown version of the docs page.
- An AI agent receiving a 400 error can fetch that page, parse the guidance, and self‑correct without human intervention.
- This is not just good DX; it’s infrastructure for autonomous consumers.
John Collison’s “llms.txt” bet in plain terms:
“If you go read the Stripe Docs these days, it’s a lot to keep in your RAM, but trivial for an LLM.”
What’s Actually Useful (vs. Cargo‑Culting)
✅ Useful Practices
- Include a `documentation_url` (or `doc_url`) field in every error response, pointing to a Markdown page.
  Cost: almost nothing. Value: immediate, for humans debugging in the terminal and for AI agents that can self‑correct.
- Write OpenAPI descriptions for semantic matching, not just human skimming. Agents perform nearest‑neighbor searches against your descriptions. Example of a good description: “Returns a paginated list of invoices filtered by status, sorted by `created_at` descending. Requires `accounting:read` scope.”
- Maintain high‑quality OpenAPI specs: every field, enum, and endpoint should have clear, complete descriptions. If a spec is poor for contract testing, it’s also poor for agents.
- Add an llms.txt file (takes an afternoon). List your ten most important pages with a one‑sentence description each. No need for a 350‑link Stripe‑style implementation on day 1.
- Use the instructions section for deprecated APIs. Document known foot‑guns and tell the AI what to avoid.
- Leverage the marketing upside: structured, machine‑readable docs are what AI answer engines (Perplexity, ChatGPT, etc.) pull from when users ask “what’s the best API for X?”; llms.txt helps both agents and discoverability.
⚠️ Probably Not Worth Doing Yet
| Item | Reason |
|---|---|
| Publishing an MCP server just for the sake of it | MCP is real but still evolving; a solid REST API + good OpenAPI spec is more durable. Build MCP when users request it. |
| Elaborate agent‑specific observability (request tagging, semantic logging) | Nice to have, but first ensure solid basic observability. |
A New Developer‑Experience Paradigm
For 15 years, “developer experience” meant optimizing for humans: readable errors, clear docs, good SDKs, interactive playgrounds. The mental model was a developer at a terminal.
Now: a growing fraction of API consumers are autonomous systems that:
- Read documentation.
- Make decisions without human review.
- Retry failures automatically.
The question isn’t whether to design for this—it’s happening regardless—but whether you’re doing it intentionally.
Stripe’s llms.txt instructions section is the clearest example of a company being intentional about machine‑readable docs and controlling what machines say about them.
Takeaway
Every API company with deprecated primitives and a sizable developer base faces the same problem Stripe solved. The gap is still wide open—fill it with:
- `doc_url` in errors.
- Semantic‑rich OpenAPI specs.
- A concise llms.txt.
- Thoughtful use of the instructions section.
Do it now, and you’ll serve both human developers and the autonomous agents that are the future of API consumption.