I Shipped Broken Code and Wrote an Article About It

Published: March 4, 2026, 06:13 AM EST
6 min read
Source: Dev.to

What AI Actually Changes in Software Development

(Part of a series – see previous pieces: “The Gatekeeping Panic”, “The Meter Was Always Running”, “Who Said What to Whom”)


The Original Idea

On February 9 I published an article about a system I built to solve knowledge collapse in developer communities.
The project – The Foundation – was a side‑project built in public, iterated in public, and (unsurprisingly) included a few mistakes.

The “Clipboard Scraping” Solution

  1. Workflow – Select everything (Ctrl+A), copy (Ctrl+C).
  2. Extension – Intercepts the copy event, parses the text, and stores it.
// Listen for copy events fired by Ctrl+C
document.addEventListener('copy', () => {
  // Only the currently selected (and rendered) text is available here
  const copiedText = window.getSelection().toString();
  // Parse timestamps to detect Claude vs. user…
});

Claims: clean, intuitive, logical.

Reality: it only captured user messages. Every AI response, every artifact, and anything outside the visible viewport was missed.


Why It Failed

  • Invisible at my review depth – I reviewed the extension at the same speed I built it, so the problem never surfaced.
  • Claude.ai is a React SPA with virtual scrolling – the conversation data lives in JavaScript state, not in the DOM. Ctrl+A never selects what isn’t rendered.
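To make the failure concrete, here is a minimal sketch (hypothetical numbers, plain arrays standing in for React state and DOM nodes) of why select-all misses virtualized content:

```javascript
// A virtual-scrolling list keeps the full conversation in JavaScript
// state but only mounts the rows inside the viewport into the DOM.
const conversation = Array.from({ length: 200 }, (_, i) => `message ${i}`);

// Stand-in for the renderer: only the visible window becomes DOM nodes.
function renderVisible(state, firstVisible, visibleCount) {
  return state.slice(firstVisible, firstVisible + visibleCount);
}

const domNodes = renderVisible(conversation, 0, 20);

// Ctrl+A / Ctrl+C can only select what is actually in the DOM:
const copied = domNodes.join('\n');

console.log(conversation.length); // 200 messages in state
console.log(domNodes.length);     // 20 messages actually copyable
```

Selecting everything on the page still selects only a tenth of the conversation, and nothing in the UI hints at what was skipped.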

“I didn’t know any of that when I shipped it. I reviewed it at the same speed I built it.”

The approach felt like a shortcut, but it was merely a reasonable‑looking solution.


The First Public Release

  1. HTML import – Save a Claude conversation as HTML, run a CLI command, and you get a searchable file in under two seconds.
  2. Documented the workflow, shipped it to GitHub, and wrote the article.

That worked, so I tried to make it easier: a browser extension that required no manual save and no CLI – just normal conversation usage.


The Breakthrough

Five days after the “I built the solution” article I wrote:

“I launched The Foundation with big plans. But I underestimated the scope.”

Publicly on DEV.to (under The Foundation org), not in a private doc or GitHub issue.

Nine days after publishing – Feb 18 – I discovered the real fix:

  • Real API capture (the same API Claude.ai itself uses).
  • Federation and passage‑level search.

The new system shared almost no capture logic with the original clipboard approach.
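The article doesn't show the new system's code, so everything below is a hedged illustration only (the function names, the endpoint path, and the data are all hypothetical): a wrapper that records the same responses the page already receives, plus a naive passage-level search over the captured messages.

```javascript
// --- API-level capture (simplified, synchronous stand-in) ---
const captured = [];

// Stand-in for the call the page itself makes; the real endpoint and
// payload shape are not shown in the article.
function pageApiCall(url) {
  return {
    messages: ['How does virtual scrolling work?',
               'It only renders the rows in the viewport.'],
  };
}

// Wrap the call: forward it, record the payload, return it unchanged.
function capturingApiCall(url) {
  const payload = pageApiCall(url);
  if (url.includes('/conversation')) captured.push(...payload.messages);
  return payload;
}

capturingApiCall('https://example.test/api/conversation/123');

// --- Passage-level search (naive substring match) ---
function searchPassages(passages, query) {
  const q = query.toLowerCase();
  return passages.filter((p) => p.toLowerCase().includes(q));
}

const hits = searchPassages(captured, 'viewport');
console.log(hits.length); // 1 matching passage
```

The key property is the one the clipboard approach lacked: capture happens at the data layer, so nothing depends on what is currently rendered.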


The Review‑Speed Problem

Two weeks after shipping the clipboard version, Olamide Olanrewaju commented on my Gatekeeping Panic piece:

“You wrote this with AI.”

No engagement with the argument, just a surface check and a conclusion.

Last week Bilgin Ibryam shared Unmesh Joshi’s piece on the learning loop and LLMs on X. Hannes Lehmann replied:

“I guess the post was written also by AI? See the dashes in every second sentence.”

Punctuation as proof. Again, a surface check leading to a conclusion.

Pattern

Actor   | Review Method                                   | Outcome
Me      | Ran the extension, watched it capture something | Missed what it wasn't capturing
Olamide | Scanned for AI signals                          | Missed the argument entirely
Hannes  | Counted dashes                                  | Missed the substance

Generation got cheap; review didn’t keep up.

“We’re reviewing at generation speed and calling it due diligence.”

The panic is misdirected.


The Real Bottleneck

  • Generation has always been faster than review – that’s why we have code review, testing, pair programming.
  • AI didn’t create the gap; it widened it dramatically.

Under pressure, the natural response is to make review cheaper too:

  • Scan for AI signals instead of reading the argument.
  • Run the code and watch it capture something instead of asking what it might miss.
  • Ship the article before fully understanding the system.

The Antidote

The Foundation’s pivot after the reckoning named the antidote directly: verification case studies.

  • “AI code I rejected and why.”
  • “Times AI was confidently wrong.”

Not slower generation, but documented rejection.

Cultural Shift

  • Not detecting AI, not banning it.
  • Building the habit of owning what you ship at the depth required to actually own it.

My clipboard approach seemed solid because I had no ritual forcing me to ask: what is this not capturing? Adding that question before merge, before publish, before ship is the difference between generation speed and review depth.

“It’s not complicated. It’s just slower than we’ve decided we want to be.”


A Process That Works

Someone built a process around exactly this. Kiro calls it property‑aware code evolution:

  1. Before the agent writes a single line, define:

    • The bug condition.
    • What “fixed” actually means.
    • What was already working that must stay working.
  2. The boundary is explicit before execution starts – the scalpel exists.

Most teams aren’t using it yet.
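The article doesn't show what Kiro's format looks like, so as an illustration only – every name here is hypothetical – the three properties above could be written down as a plain object and checked before and after a change:

```javascript
// Hypothetical spec: state the boundary before any code is generated.
const spec = {
  // The bug condition: input that currently produces wrong output.
  bugCondition: (parse) => parse('') !== null,
  // What "fixed" actually means.
  fixed: (parse) => parse('') === null,
  // What was already working that must stay working.
  invariants: [(parse) => parse('42') === 42],
};

// Buggy version: empty input falls through to NaN instead of null.
const before = (s) => (s === '' ? NaN : Number(s));
// Candidate fix.
const after = (s) => (s === '' ? null : Number(s));

// Evaluate an implementation against the explicit boundary.
function evaluate(spec, impl) {
  return {
    bugPresent: spec.bugCondition(impl),
    fixed: spec.fixed(impl),
    invariantsHold: spec.invariants.every((check) => check(impl)),
  };
}

console.log(evaluate(spec, before)); // { bugPresent: true, fixed: false, invariantsHold: true }
console.log(evaluate(spec, after));  // { bugPresent: false, fixed: true, invariantsHold: true }
```

The point isn't the toy parser; it's that "fixed" and "must stay working" exist as executable checks before generation starts, instead of being reconstructed from memory during review.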


Closing Thoughts

  • I shipped broken code confidently.
  • Wrote an article about it.
  • Got my senses back five days later.
  • Fixed it nine days after publishing.

All receipts are public – three articles, code on GitHub, and the exact moment of reckoning documented in real time.

That transparency is the point. Not because I’m proud of the mistake, but because the mistake is a pattern we need to recognize and break.


Useful. More useful than a polished “I built a thing and it worked” story that skips the nine days in between.

Olamide checked for AI signals and missed the argument. Hannes counted dashes and missed the substance. I ran the extension and missed everything it wasn’t capturing. Different surfaces. Same gap.

AI made generation cheap. We responded by making review cheap too. That’s the actual crisis — not what’s being generated, but how shallowly we’re checking it.

The fix isn’t slower AI. It’s deeper ownership. Ask what this isn’t capturing before you ship it. Read the argument before you check the punctuation. Document what you rejected, not just what you merged.

Generation will keep getting cheaper. Review won’t catch up on its own.

We have to choose to slow down.
