AI‑Assisted Writing as Search (Not Draft Generation)

Published: December 14, 2025 at 01:41 AM EST
5 min read
Source: Dev.to

Introduction

I have a steady stream of ideas I genuinely want to explore, but most of them die in the same place: a few bullet points in a notebook, or an outline in a repo. In my case, that literally looks like files with names like:

  • projects/personal/ideas/2025-12-10-blog-post-second-brain-experience.md
  • projects/personal/ideas/2025-12-13-ai-writing-process.md

They weren’t nothing. They were real attempts, but they rarely turned into something I could publish. Not because I had nothing to say, but because writing (for me) had become a fragile, synchronous activity:

  • I need long, uninterrupted time to polish.
  • I need peers to push back in real time.
  • I need enough confidence in English (and honestly, even in French) to feel the result was “worth reading.”

Those conditions don’t reliably exist in my life. I’m a founder with a family, and time comes in fragments. I also don’t live in a dense tech ecosystem where you get daily, high‑signal pushback by osmosis. So ideas would keep looping in my head… and I’d keep not shipping.

This post describes the workflow I built to fix that. It’s opinionated, but not performatively confident. The simple thesis is:

One useful way to think about writing is as a search problem.
AI is useful when it helps you explore the search space (multiple framings, objections, structures) before you commit.

Promise (under my constraints)

When time is fragmented and I don’t have editorial peers on tap, this workflow reliably turns “looping idea noise” into a draft I’d actually be willing to share. It does this by expanding options first, then forcing precision before polishing.

Who this is for

  • People with ideas, limited uninterrupted time, and limited high‑signal pushback.

Who this is not for

  • If you already have strong editorial peers and deep uninterrupted time.
  • If your main goal is “prettier prose” or a polished AI voice.
  • If your goal is SEO/marketing outcomes.

A quick epistemic note

I try to label important statements as experience, opinion, assumption, or verified. In this post, most claims are experience or opinion. When I say “this works,” I mean “this works for me under my constraints,” not “this is a universal method.” (Full taxonomy in Appendix A.)

The real failure mode: committing too early

If you’re busy and you have ideas, the default “one‑draft” AI workflow is tempting:

  1. Prompt a model.
  2. Get a passable draft.
  3. Edit a bit.
  4. Publish (or don’t).

My opinion: this fails in a subtle way. It collapses the space too early. If you accept the first coherent framing you see, you miss alternative theses you didn’t think to ask for, and you miss the objections you would have discovered in a real debate. The result may be fluent, but it’s often shallow or generic.

When I say “writing as search,” I mean:

  • There are many plausible ways to frame an idea.
  • Your first framing is rarely the best one.
  • The work is not producing text; it’s choosing what you actually believe.

Why “search,” specifically?

Other metaphors exist (sculpting, iteration, dialogue). I still use “search” because it emphasizes a trade‑off that matters under my constraints: backtracking is cheap before commitment.

If I haven’t spent three hours polishing Draft A, it’s easier to abandon it when Draft B reveals a better thesis.

Caveat: this isn’t “free.” It shifts the cost from writing to reading (and attention). You pay a reading tax to avoid a polishing‑the‑wrong‑draft tax.

The workflow therefore separates two modes:

  • Exploration: expand the space of possible essays.
  • Commitment: pick a framing and make it honest.

The 5‑step loop (plus an optional 4.5)

  1. Raw dump: Fuel – a messy interview with yourself.
  2. Perspective expansion: Parallel drafts – generate full essays, not bullets.
  3. Synthesis: Selection + compression (curation, not averaging).
  4. Human clarification: Where truth enters – Q&A with yourself.
  5. Integration: Write the final draft.
  4.5 (optional): “What would X say?” critique – objection generation, treated cautiously.

Pipeline in one line:

Raw dump → Parallel drafts → Synthesis → Human Q&A → Draft

My experience is that Step 4 (human clarification) is the highest‑leverage part. Steps 2 and 3 make Step 4 possible.
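
To make the shape of the loop concrete, here is a minimal sketch in Python. Everything in it is illustrative: ask_model is a stand-in for whatever model call you actually use, and the prompts are placeholders, not the exact ones from my workflow.

```python
def ask_model(model: str, prompt: str) -> str:
    """Placeholder: swap in whatever LLM call you actually use (API client, local model, CLI)."""
    raise NotImplementedError


def writing_as_search(raw_dump: str, models: list[str]) -> str:
    # Step 2: perspective expansion - one complete draft per model, not outlines.
    drafts = [
        ask_model(
            m,
            "Write a complete essay (thesis, structure, conclusion) "
            "from this raw dump:\n\n" + raw_dump,
        )
        for m in models
    ]

    # Step 3: synthesis - selection + compression, not averaging.
    synthesis = ask_model(
        models[0],
        "Compare these drafts. Extract the strongest claims, surface "
        "disagreements, identify what needs evidence, and propose one "
        "narrative shape:\n\n" + "\n\n---\n\n".join(drafts),
    )

    # Steps 4 and 5 stay manual on purpose: answer the open questions yourself,
    # then write the final draft from the synthesis.
    return synthesis
```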

Step 1: Raw dump (give the system real fuel)

A raw dump is not an outline or a prompt; it’s closer to a messy interview with yourself.

Experience: I often start with a voice note (typing feels like “writing,” which triggers perfectionism).

What to include

  • What happened (the event) and why it matters to you.
  • What you currently believe.
  • What you’re unsure about.
  • What you’re optimizing for (clarity, novelty, persuasion, etc.).
  • Constraints (time, audience, sensitivity).

Example from this post (grounded): my raw dump began as a process spec:

- “A technical blog post about a systematic approach to writing better technical blog posts…”
- “Step 1: Raw Information Dump”
- “Step 2: Multi‑Model Perspective Expansion”

If the raw dump is thin, everything downstream becomes generic.
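
If a scaffold helps, here is a minimal template sketch for that dump. The headings simply mirror the checklist above; they are mine for illustration, not a required format, and a pasted voice-note transcript works just as well.

```python
# A minimal raw-dump scaffold mirroring the checklist above.
# The headings are illustrative, not a required format.
RAW_DUMP_TEMPLATE = """\
Raw dump: {title}

What happened (the event) and why it matters to me:
...

What I currently believe:
...

What I'm unsure about:
...

What I'm optimizing for (clarity, novelty, persuasion, ...):
...

Constraints (time, audience, sensitivity):
...
"""

print(RAW_DUMP_TEMPLATE.format(title="2025-12-13-ai-writing-process"))
```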

Step 2: Perspective expansion (generate full drafts, not bullets)

Instead of asking a model for an outline and then writing, I now generate parallel drafts: multiple full essays from different models (or different prompts/lenses).

Why full essays?

A complete draft forces a thesis, definitions, transitions, a conclusion, and an implicit set of assumptions—things a bullet list can hide.

Experience: I typically run 3–4 models. With fewer, the drafts tend to converge too quickly; with more, the extra reading time rarely buys genuinely new structure.

Practical checklist for comparing drafts

  • Which draft makes the strongest claim (and what does it assume)?
  • Which draft surfaces the best objections?
  • Which draft has the cleanest structure (even if the content is wrong)?
  • Which draft feels most “alive” (specific constraints, real stakes)?

I treat these drafts as alternative framings, objections, and potentially reusable phrasing, not as “the answer.”

If you only have one model

Force three lenses across three passes (a sketch follows the list):

  1. Skeptic: strongest objections + missing caveats.
  2. Teacher: simplest explanation + concrete examples.
  3. Editor: structure + cuts + “what should be removed?”
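
A minimal sketch of those three passes, with the same kind of placeholder model call as in the earlier sketch; the lens wording is mine and deliberately blunt.

```python
def ask_model(model: str, prompt: str) -> str:
    """Placeholder: swap in whatever LLM call you actually use."""
    raise NotImplementedError


# Three lenses over a single model: same draft, three forced framings.
LENSES = {
    "skeptic": "List the strongest objections to this draft and the caveats it is missing.",
    "teacher": "Explain the core argument as simply as possible, with one concrete example.",
    "editor": "Critique the structure: what should be cut, and what should be removed entirely?",
}


def three_passes(draft: str, model: str = "your-model") -> dict[str, str]:
    return {
        lens: ask_model(model, instruction + "\n\n---\n\n" + draft)
        for lens, instruction in LENSES.items()
    }
```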

Step 3: Synthesis (curation, not averaging)

After the parallel drafts, I perform a collapse step. Most people imagine “merge paragraphs,” but the temptation is to average everything into one polite post, which usually produces blandness.

My opinion: synthesis should be opinionated. It’s selection + compression (a prompt sketch follows the list):

  • Extract the strongest claims.
  • Surface disagreements.
  • Identify what needs evidence.
  • Propose a narrative shape that could actually carry the post.
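
In prompt form, that might look something like the sketch below; the wording is mine and only a starting point, with the parallel drafts pasted in after it.

```python
# A synthesis prompt that asks for curation rather than averaging.
# The wording is illustrative; adjust it to the post at hand.
SYNTHESIS_PROMPT = """\
You are synthesizing several parallel drafts of the same essay.
Do NOT merge them into one polite average. Instead:

1. Extract the strongest claims (and what each one assumes).
2. Surface where the drafts disagree.
3. Identify which claims need evidence or a concrete example.
4. Propose ONE narrative shape that could actually carry the post.

The drafts follow, separated by '---'.
"""


def build_synthesis_prompt(drafts: list[str]) -> str:
    return SYNTHESIS_PROMPT + "\n\n---\n\n".join(drafts)
```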

Example from this post (grounded): my synthesis flagged the core risk:

“The missing ingredient: a concrete event… Without a real event/case study, it reads as generic advice.”

That forced a decision: the post couldn’t just be “here I …” without grounding it in a specific experience.
