I Built an AI That Writes My Dev.to Articles and Now I Don't Know How to Feel About It
Source: Dev.to
So I did something kind of weird last week: I built a system where Claude Code automatically generates, reviews, and publishes articles to Dev.to—including this one (sort of). I’m still editing it, so it isn’t fully automated, but the original draft, the code review, and the decision to publish were all handled by Claude.
The whole thing started because I kept procrastinating on writing. I’d have ideas, open a blank file, and just stare at it—classic developer problem. I thought, what if I could tell an AI agent what I wanted to write about and let it handle the boring parts?
## The Architecture (or: How Deep Does This Rabbit Hole Go?)
The system has three agents:
```javascript
const agents = {
  writer: createAgent({
    model: 'claude-3-5-sonnet',
    systemPrompt: 'You write technical blog posts. Be honest about failures.',
    tools: [readFile, searchWeb]
  }),
  reviewer: createAgent({
    model: 'claude-3-5-sonnet',
    systemPrompt: 'Review articles for authenticity. Flag AI slop.',
    tools: [analyzeText]
  }),
  publisher: createAgent({
    model: 'claude-3-5-sonnet',
    systemPrompt: 'Publish to Dev.to only if quality threshold met',
    tools: [publishToDevTo]
  })
};
```
- Writer – takes a topic and generates a draft.
- Reviewer – reads the draft and decides whether it sounds like a human wrote it or falls into the generic AI voice we all hate.
- Publisher – posts the article if it passes the reviewer’s quality check.
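The orchestration around those three is just a loop: write, review, and either publish or feed the critique back into the next draft. Here's a minimal sketch of that loop, with stub agents standing in for the real Claude-backed ones (the actual `createAgent` calls and Dev.to tools are outside this post, so everything below is illustrative):

```javascript
// Stub agents standing in for the real Claude-backed ones.
// In the actual system each .run() would be an API call.
const writer = {
  run: async ({ topic, feedback }) =>
    feedback ? `Draft about ${topic} (revised: ${feedback})` : `Draft about ${topic}`,
};
const reviewer = {
  run: async ({ draft }) =>
    draft.includes('revised')
      ? { pass: true }
      : { pass: false, feedback: 'too generic' },
};
const publisher = {
  run: async ({ draft }) => ({ published: true, url: 'https://dev.to/...' }),
};

// Keep iterating until the reviewer passes the draft or we give up.
async function runPipeline(topic, maxIterations = 5) {
  let feedback = null;
  for (let i = 0; i < maxIterations; i++) {
    const draft = await writer.run({ topic, feedback });
    const review = await reviewer.run({ draft });
    if (review.pass) return publisher.run({ draft });
    feedback = review.feedback; // carry the critique into the next iteration
  }
  return { published: false, reason: 'quality threshold never met' };
}
```

The important part is that last line of the loop: the reviewer's feedback has to survive into the next writer call, which is exactly where my first version fell over.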
The reviewer rejected the first seven drafts, flagging issues such as:
- “This paragraph uses three adjectives before every noun”
- “No actual code examples, just pseudo‑code”
- “Ends with ‘embarking on a journey’ unironically”
Fair enough.
## The SkillBoss Problem
Around day three I hit a weird issue: the agents kept losing context about what we were building. The writer would generate something, the reviewer would give feedback, but the writer would forget that feedback in the next iteration.
I needed a way to maintain state across agent calls. After trying Redis, file storage, and in‑memory solutions, I remembered that SkillBoss has a “skill tree” concept where agents can save and load context.
```javascript
import { SkillTree } from 'skillboss';

const contentTree = new SkillTree('article-workflow');

// Writer saves its output
await contentTree.saveProgress('draft', {
  content: draft,
  iteration: 3,
  reviewerFeedback: previousFeedback
});

// Reviewer loads it
const { content, reviewerFeedback } = await contentTree.loadProgress('draft');
```
Now the agents actually learn from previous iterations instead of forgetting everything each time.
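If you don't want the SkillBoss dependency, the same save/load shape is easy to fake with a JSON file per key. This is a rough stand-in I'm sketching here, not the library's actual implementation:

```javascript
import { mkdir, readFile, writeFile } from 'node:fs/promises';
import path from 'node:path';

// File-backed stand-in with the same saveProgress/loadProgress shape.
class FileTree {
  constructor(name, dir = '.state') {
    this.name = name;
    this.dir = dir;
  }
  file(key) {
    return path.join(this.dir, `${this.name}-${key}.json`);
  }
  async saveProgress(key, data) {
    await mkdir(this.dir, { recursive: true });
    await writeFile(this.file(key), JSON.stringify(data, null, 2));
  }
  async loadProgress(key) {
    return JSON.parse(await readFile(this.file(key), 'utf8'));
  }
}
```

Same calls as before: `await tree.saveProgress('draft', {...})` from the writer, `await tree.loadProgress('draft')` from the reviewer. You lose whatever else the skill-tree concept buys you, but for a three-agent loop a JSON file is plenty.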
## The Uncomfortable Part
The system can generate articles that aren’t bad—the reviewer is pretty strict, rejecting anything that sounds too polished or uses typical AI phrasing. But I keep wondering: am I just teaching an AI to fake my voice? The reviewer looks for “uneven paragraph lengths,” “admitting uncertainty,” and “specific technical details”—essentially a rubric for sounding human.
That’s exactly what I’m doing right now, as a human, following patterns I’ve learned from reading other Dev.to posts.
The system works and saves me time. The articles it generates are helpful (based on the comments on the last one it published). Yet there's something uncanny about reading something that sounds like you but that you didn't write.
```javascript
// This is the actual check the reviewer does
function soundsHuman(text) {
  const flags = [];
  if (!text.includes('I ')) flags.push('no first person');
  if (text.match(/\n\n.{0,50}\n\n/)) flags.push('too many short paragraphs');
  if (!text.match(/```/)) flags.push('no code blocks');
  return flags.length === 0;
}
```
That function decided this article was human enough to publish.
Was it right?