Treating Prompts Like Code: A Content Engineer's AI Workflow
Source: Pulumi Blog
The Real Problem AI Solves
Everyone talks about AI making you faster. That’s not wrong, but it’s not the most interesting part — at least not for me.
The most interesting part is what it does to the starting problem. I have an ADHD brain (not formally diagnosed, but with enough self‑recognition to know what’s going on). I know what that means for my relationship with most tasks: I can see the problem, I understand it, I want to fix it, and then the sheer weight of starting crushes me flat.
When I’m stuck on a task, the issue is almost never that I don’t know what to do. It’s that my brain is trying to hold the entire finished product in working memory while simultaneously producing the first step. That’s an enormous cognitive tax, and for an ADHD brain it’s often insurmountable.
Talking through a problem conversationally is a completely different cognitive load. I can tell Claude:
“Here’s the issue, here’s what I’m trying to accomplish, here’s what’s weird about it.”
and suddenly I’m not staring at a blank page anymore. I’m in a conversation. The scaffold exists. I can build on it.
That dynamic isn’t new for me. In a previous role writing training modules at Microsoft, I did some of my best work not because the work was easy, but because I had a collaborator—a friend to think out loud with, someone to say “okay, so what are we actually trying to say here?” That conversational scaffolding was the difference between spinning and shipping.
In my current role as a team of one, AI turned out to be that collaborator.
This isn’t really a productivity story.
It’s closer to a cognitive accommodation story. And I’d bet a lot of people—diagnosed or not—will recognize what I’m describing.
Treating Prompts Like Code
If conversational scaffolding could lower my own activation energy, the next question was obvious: could I build that for anyone who needed it?
I knew I wanted to use AI to solve this problem, but I didn’t want to just write a bunch of one‑off prompts. That would be a maintenance nightmare, and it wouldn’t scale beyond me. I needed a system.
Claude Code calls these reusable prompts skills—other platforms have the same idea under names like plugins or extensions.
First Experiment: /docs-review
- A reusable prompt that runs my writing through a consistent set of criteria before I commit it.
- Nothing fancy; I just wanted a reliable bar that didn’t depend on my mood or how much coffee I’d had.
Then it occurred to me: every PR to our docs repo should get this automatically. So I wired it into our CI/CD pipeline.
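The post doesn’t show the pipeline wiring itself. Here’s a minimal sketch of what it might look like as a GitHub Actions workflow that runs the Claude Code CLI in non-interactive mode; the job layout, trigger paths, and prompt text are my assumptions, not Pulumi’s actual config:

```yaml
# Hypothetical workflow sketch, not Pulumi's real pipeline.
name: docs-review
on:
  pull_request:
    paths: ["content/**"]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the /docs-review skill headlessly
        # `claude -p` runs one prompt non-interactively; the skill and
        # its criteria live in the repo, so the review bar is versioned
        # alongside the docs it reviews.
        run: claude -p "/docs-review Review the files changed in this PR"
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```

The point isn’t the exact YAML; it’s that once the prompt lives in the repo, CI can apply the same bar to every PR without anyone remembering to ask.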
Meagan, my manager, loved it — and after a few weeks she noticed that PR quality had improved dramatically. On almost every PR, contributors were now spontaneously pushing an “Addressing feedback” commit right after the automated review posted — catching and fixing issues before I ever saw the PR.
That’s when something clicked: I wasn’t writing prompts anymore. I was writing modules — reusable, composable pieces of my own expertise.
Centralising Context
The insight was straightforward, but it changed how I thought about the whole system:
- If multiple skills need the same context — our style guide, review criteria, content standards — that context should live in one place and get consumed by everything that needs it.
- Think of it as a shared library in a software project.
I created a REVIEW-CRITERIA.md file as the single source of truth for what a “good” docs PR review looks like at Pulumi. Every skill that does any kind of review pulls from it. Change it once, and everything gets smarter at once.
Likewise with our style guide, Hugo conventions, navigation structure—all live in central reference files that any skill can pull from.
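In Claude Code, a skill is just a markdown file with a bit of front matter, which is what makes this shared-library pattern work. A sketch of how a review skill might consume the central criteria file (the layout follows Claude Code’s SKILL.md format; the second reference file name is illustrative):

```markdown
---
name: docs-review
description: Review docs changes against Pulumi's shared review criteria
---

Before reviewing, read the shared reference files. They are the
single source of truth, so never restate their rules inline:

1. REVIEW-CRITERIA.md (what a good docs PR looks like)
2. STYLE-GUIDE.md (voice, tone, formatting; illustrative name)

Then review the changed files and report findings as a checklist,
grouped by file.
```

Because the skill points at the reference files rather than embedding them, editing REVIEW-CRITERIA.md upgrades every skill that reads it.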
Why This Matters
- Token efficiency: Duplicating context across skills bloats token usage fast. Modularising keeps it lean.
- Cost: Your CI/CD pipeline doesn’t care about elegance, but it definitely cares about cost.
The mental model I kept coming back to: Don’t Repeat Yourself. It’s the same principle that makes good software maintainable. It turns out it makes good AI workflows maintainable too.
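The token argument behind DRY is easy to see with a little arithmetic. A toy Python sketch, where every number is invented for illustration:

```python
# Toy illustration of why shared context beats duplicated context.
# All numbers are made up; real counts depend on your actual files.

CRITERIA_TOKENS = 2_000   # size of the shared REVIEW-CRITERIA.md
SKILL_TOKENS = 500        # size of one skill's own instructions
NUM_SKILLS = 10

# Duplicated: every skill embeds its own copy of the criteria.
duplicated = NUM_SKILLS * (SKILL_TOKENS + CRITERIA_TOKENS)

# Centralised: skills stay lean; the criteria exist once.
centralised = NUM_SKILLS * SKILL_TOKENS + CRITERIA_TOKENS

print(duplicated)   # tokens carried across all prompts when duplicated
print(centralised)  # tokens when the criteria live in one shared file
```

Beyond the token count, the duplicated version has a worse failure mode: ten slightly divergent copies of your standards, each drifting independently.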
The Skill Catalog
From there, the system grew organically. Whenever I found myself doing something more than once, I asked: “Can I turn this into a skill?”
Below is a sampling of what that produced:
| Skill | Description |
|---|---|
| /fix-issue | Takes a GitHub issue and recommends a concrete plan of attack, turning “here’s a ticket” into “here’s what I’m doing” without the spin‑up tax. |
| /shipit | Runs pre‑commit checks, writes a focused commit message, and drafts a PR description. |
| /pr-review | Full doc review on a PR branch: style guide, code examples, screenshots, optional test deployment, then an Approve / Merge / Request Changes dialog with a drafted comment. |
| /slack-to-issue | Converts #docs Slack conversations into properly formed GitHub issues. Slack is where decisions happen; issues are where work gets tracked. |
| /glow-up | Runs an older doc through the modern style guide and flags outdated screenshots, for … |
(The list continues as new needs arise.)
Takeaways
- Package knowledge into reusable, version‑controlled assets (skills, markdown reference files).
- Automate wherever possible—hook skills into CI/CD to enforce consistency at scale.
- Centralise shared context to keep token usage low and maintenance simple.
- Treat AI prompts as code: modular, testable, and reusable.
By turning ad‑hoc prompts into a library of skills, I turned a single‑person bottleneck into a sustainable, scalable docs workflow that anyone at Pulumi can tap into.
Digging Out of Accumulated Technical Debt
- /new-doc and /new-blog-post – guide anyone through adding a new document or blog post with the right location, metadata, and navigation wiring. Engineers, marketers, whoever. The barrier to contributing just dropped significantly.
- /docs-tools – helps other repo users discover that any of this exists. Discoverability is a real problem with internal tooling.
Slack Integration
Slack’s built‑in Claude integration isn’t the same Claude running your Claude Code workflows — they don’t share context or custom instructions. If you want consistent criteria across both surfaces, you need to bring your own backend. That’s exactly what /slack‑to‑issue handles.
Community Contributions
Other people started contributing skills to the repo — not because I asked, but because the pattern was legible enough to extend.
- Someone built a skill for SEO analysis.
- Marketing added their own review criteria.
- Engineers contributed workflows I never would have thought to build.
The thing I’d built as a personal survival tool had become a shared platform. That happened because I treated the prompts like code: modular, reusable, documented, open for contribution.
Honest Limitations
- Not a replacement for human judgment. These are probabilistic tools — they’re right most of the time, not all of the time.
- /pr-review doesn’t approve PRs autonomously. It highlights things and then asks me, the human, to read them and make the call. The AI does the first pass; I do the last one.
- The system isn’t finished, either. It’s probably never finished. I’m still tweaking review criteria, still finding edge cases where a skill produces something weird, still adding new tools as new pain points emerge. Treating prompts like code means treating them like software: you ship, you iterate, you maintain. There’s no version 1.0 and done.
- ADHD is real, but it’s not magic. There are still days where the paralysis wins. AI lowers the activation energy for starting; it doesn’t eliminate it. I’m still the one who has to show up. I could automate that too, but then we’d be in a whole different kind of dystopia.
Lessons to Share
- Know your models and their costs.
  - At Pulumi we primarily use Claude, and I work in Claude Code; for most tasks I reach for Sonnet rather than Opus.
  - Opus is excellent, but it’s significantly more expensive, and well‑crafted instructions to Sonnet handle the vast majority of my work just as effectively.
- Treat it like a coworker.
  - Don’t just issue commands and wait for output. Ask what it thinks. Push back when it’s wrong. Explain your reasoning.
  - The more you engage conversationally, the better the results tend to be.
  - Alignment matters: before diving into a complex task, talk through the approach first. A few minutes of alignment up front beats iterating on a misunderstood spec.
- Add personality where helpful.
  - I’ve added personal instructions to my config — things like playing along when I’m pretending to be Captain Picard, or using colorful language when the context calls for it. (Yes, those are literal config settings.)
  - It sounds frivolous, but a tool you actually enjoy using is a tool you’ll reach for instead of avoid.
- Modularize your workflow.
  - Don’t write one giant monolithic prompt that tries to do everything.
  - Break it into focused skills that do one thing well and share common context through a central reference file. Easier to maintain, easier to debug, cheaper to run.
- Version‑control your prompts.
  - Your skills are code. Treat them like code. Commit them, review them, iterate on them.
  - If a skill starts producing weird output after a tweak, you’ll want to know what changed.
- Think about token burn rate.
  - This matters most when running automation in CI/CD.
  - Keep your skills focused — a skill that checks style doesn’t need to load your Hugo navigation conventions. The model only reads what you give it, so give it only what it needs.
- Not everything needs to be a prompt.
  - Skills can include scripts, and that’s often the right call.
  - Example: when my team moves a doc in the repo, it needs to happen via git mv to preserve history, and we need to add a redirect alias to the front matter to prevent 404s and protect SEO. That’s a solved problem, so it’s a script. The skill just knows the script exists and what it does. Claude orchestrates; the script executes.
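The post doesn’t include the script, but the front-matter half is easy to sketch. A minimal Python version, assuming Hugo-style YAML front matter and Hugo’s real aliases redirect key; the function name, structure, and error handling are my own:

```python
import re

def add_redirect_alias(page: str, old_url: str) -> str:
    """Insert a Hugo `aliases` entry into YAML front matter so the old
    URL keeps redirecting after a `git mv`. Sketch only: assumes simple
    ----delimited front matter with no existing aliases block."""
    match = re.match(r"^---\n(.*?)\n---\n", page, re.DOTALL)
    if not match:
        raise ValueError("no YAML front matter found")
    front = match.group(1) + f"\naliases:\n  - {old_url}"
    return f"---\n{front}\n---\n" + page[match.end():]

doc = "---\ntitle: Moving Docs\n---\nBody text.\n"
print(add_redirect_alias(doc, "/docs/old-path/"))
```

The git mv itself would be a one-line subprocess call next to this; the value is that neither step depends on the model getting it right probabilistically.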
- Not everything needs to be generative.
  - If you need deterministic output, don’t use probabilistic tools.
  - We have a skill that generates the meta image for blog posts — procedurally, not generatively. No AI‑generated imagery. The skill follows our visual standards programmatically and produces something consistent every time.
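“Procedural, not generative” boils down to: same input, same output. A toy Python sketch of the idea, where the palette, dimensions, and layout rules are invented rather than Pulumi’s actual spec:

```python
import hashlib

# Sketch of deterministic meta-image generation: every visual choice
# is derived from the post title, so the same post always produces the
# same image spec. Palette and sizes are invented for illustration.
PALETTE = ["#4D5BD1", "#F26E7E", "#F7BF2A", "#00B4D8"]

def meta_image_spec(title: str) -> dict:
    digest = hashlib.sha256(title.encode("utf-8")).digest()
    return {
        "background": PALETTE[digest[0] % len(PALETTE)],  # stable per title
        "title": title,
        "width": 1200,   # fixed social-card dimensions
        "height": 630,
    }

# Deterministic: rerunning the generator never changes published images.
assert meta_image_spec("My Post") == meta_image_spec("My Post")
```

A generative model could produce a prettier one-off, but it couldn’t promise the hundredth post looks like it belongs next to the first.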
What’s Next
The next frontier is bringing some of this tooling to the less‑technical members of the team — marketing, in particular. The skills I’ve built assume a certain comfort level with terminals and repos. That’s fine for engineers; it’s a barrier for everyone else. A friendly interface would lower that bar significantly — that’s the direction I’m currently exploring.
If you’re a technical writer, a developer advocate, or a solo practitioner figuring out how AI fits into your workflow, the approach described here is a solid starting point.
- The tools matter, but the mental model matters more: treat your prompts like code.
- Make them reusable.
- Document them.
- Share them.
Our docs repo is public, so the skills are there for anyone who wants them. If you’re building something similar, steal freely — or contribute back.
The blank page is still there. It’s just a lot less intimidating when you’ve got a good collaborator.