I built 90+ AI prompts because raw transcripts are useless
The problem I kept running into
I do interviews for work, mostly marketing. The goal is usually to turn a 45‑minute conversation into something publishable—a blog post, social clips, whatever.
I’d paste the transcript into Claude or ChatGPT and say something like “turn this into a blog post.”
The output was… fine? Generic. It would summarize instead of pulling actual quotes, losing the interviewee’s voice. I’d spend an hour fixing it and think, “I could’ve just written this myself.”
The same thing happened with meeting notes. “Summarize this meeting” gave me a summary, but what I actually needed was:
- What did we decide?
- Who’s doing what?
- What’s the follow‑up?
A different problem entirely.
So I started building prompts
Not because I planned to—just because I kept tweaking the same prompts until they actually worked.
The blog‑post prompt took the longest. I needed it to:
- Keep the interviewee’s actual voice (not sanitize everything into corporate speak)
- Pull real quotes, not paraphrase everything
- Structure it like a real article, not a book report
- Lead with something interesting, not “In this interview, we discussed…”
That prompt went through probably 15 versions before it stopped annoying me.
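Roughly, here’s the shape it ended up with. This is a simplified Python sketch, not the full prompt; the wording and the `INTERVIEW_TO_POST_PROMPT` name are just illustrative, but it covers the four requirements above:

```python
# Illustrative sketch of the interview -> blog post prompt, not the exact wording.
# It encodes the four requirements: voice, real quotes, article structure, strong lead.
INTERVIEW_TO_POST_PROMPT = """\
You are turning an interview transcript into a blog post.

Rules:
- Preserve the interviewee's voice. Do not rewrite their phrasing into corporate language.
- Quote them directly. Use verbatim quotes wherever the original wording works;
  never paraphrase a line that could be quoted.
- Structure it as an article: a hook, a few sections with descriptive subheads,
  and a short closing. Not a book report.
- Open with the single most interesting thing they said. Never open with
  "In this interview, we discussed...".

Transcript:
{transcript}
"""

def build_interview_prompt(transcript: str) -> str:
    """Fill the transcript into the template before sending it to a model."""
    return INTERVIEW_TO_POST_PROMPT.format(transcript=transcript)
```

Spelling out what not to do (no corporate speak, no “in this interview” openers) did as much work as saying what to do.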
Then I built one for meeting summaries that extracts decisions and action items separately, one for turning podcasts into social posts, and one for cleaning up speaker labels in raw transcripts. At some point I looked up and had 90+ of them.
What I learned about prompting
Early prompts were too vague. “Summarize this” doesn’t tell the model what you actually care about.
The prompts that work best are almost annoyingly specific:
- What’s the exact output format?
- What should it include vs. ignore?
- What tone? What length?
- What questions should it ask me before it starts?
That last one was a breakthrough. The best prompts don’t just run—they clarify first. Example: “Before I process this, tell me: how many speakers, what are their names, what’s the context?”
You get way better output when the model understands what it’s working with.
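If you’re scripting this instead of pasting into a chat window, the clarify-first pattern is just a two-turn exchange. Here’s a minimal sketch using the OpenAI Python SDK; the model name and the `clarify_then_summarize` helper are placeholders, so swap in whatever you actually use:

```python
# Minimal sketch of the "ask clarifying questions first" pattern.
# Assumes the openai package (v1+) with OPENAI_API_KEY set; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You process meeting transcripts. Before doing any work, ask the user for: "
    "the number of speakers, their names, and the meeting's context. "
    "Only after they answer, produce the summary in the requested format."
)

def clarify_then_summarize(transcript: str, answers: str) -> str:
    """Two-turn flow: turn 1 asks clarifying questions, turn 2 does the actual work."""
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"Here is the transcript:\n\n{transcript}"},
    ]
    # Turn 1: the model responds with its clarifying questions.
    questions = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": questions.choices[0].message.content})
    # Turn 2: supply the answers, then it processes the transcript.
    messages.append({"role": "user", "content": answers})
    result = client.chat.completions.create(model="gpt-4o", messages=messages)
    return result.choices[0].message.content
```

In an interactive chat you just answer the questions yourself; the wrapper only matters if you’re batching transcripts.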
Some that ended up being useful
A few I keep coming back to:
- Transcript cleaner – Takes raw output with “Speaker 0” and “Speaker 1” labels and turns it into something readable with real names and proper formatting. Sounds trivial, but it’s the one I use most (there’s a rough sketch of it right after this list).
- Interview → blog post – Extracts the interesting parts of a conversation and structures them into an actual article. Keeps quotes intact and writes transitions that don’t sound like AI wrote them.
- Meeting action items – Pulls out decisions, tasks, and owners from a meeting transcript, ignoring the 40 minutes of small talk to find the 5 things that actually matter.
- Podcast social package – Generates a batch of social posts from an episode transcript: quote cards, discussion questions, etc.
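To make the first one concrete, here’s a simplified sketch of the transcript cleaner. It’s the shape of the idea rather than the full prompt, and the speaker-map format is just one way to pass the names in:

```python
# Illustrative sketch of a transcript-cleaner prompt, not the exact one in the repo.
CLEANER_PROMPT = """\
Clean up this raw transcript.

Speaker map:
{speaker_map}

Rules:
- Replace "Speaker 0", "Speaker 1", etc. with the real names above.
- Merge consecutive lines from the same speaker into one paragraph.
- Remove filler ("um", "you know") but do not reword anything else.
- Keep every substantive sentence; this is cleanup, not summarization.

Transcript:
{transcript}
"""

def build_cleaner_prompt(transcript: str, names: dict[str, str]) -> str:
    """names maps raw labels to real names, e.g. {"Speaker 0": "Ana"}."""
    speaker_map = "\n".join(f"{label} = {name}" for label, name in names.items())
    return CLEANER_PROMPT.format(speaker_map=speaker_map, transcript=transcript)
```

The rule doing the heavy lifting is the last one: cleanup, not summarization, so nothing substantive gets dropped.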
I also built some weird, specific ones for legal transcripts (deposition analysis, contradiction detection) that I’m not sure anyone else needs, but they exist.
Where they live now
I put them on GitHub and linked them from the transcription site. They’re organized by use case. Some have full write‑ups explaining how to use them; others are just the prompt. They work with Claude, ChatGPT, Gemini—whatever. The transcript format matters more than which model you use.
Still iterating
Some of these are solid; others I’m still not happy with. The social‑media prompts especially—getting an LLM to write something that doesn’t sound like an LLM wrote it is its own challenge.
If you’ve built prompts for processing transcripts (or any structured text), I’m curious what approaches have worked for you. The “ask clarifying questions first” pattern has been the biggest improvement for me, but I’m sure there are techniques I haven’t tried.