Why I Built My Own Humanizer (And Why You Should Too)
Source: Dev.to

The Original Humanizer
There’s a tool called humanizer, a Claude Code skill built by blader, inspired by Wikipedia’s guide to detecting AI writing. It has 4,100 stars, hundreds of forks, and an active community adding patterns and language support. If you want to strip AI tells from any text, it does that well.
Humanizer checks your writing against a generic human baseline. It knows what AI writing looks like and flags patterns such as significance inflation, copula avoidance, the rule of three, and em‑dash overuse—24 patterns derived from Wikipedia’s AI cleanup guide. Run your draft through it, find the tells, rewrite.
That works if your goal is writing that doesn’t look AI‑generated.
Why I Needed Something Different
My goal is writing that sounds like me. Those are related but not the same thing. I can write a draft that passes every Humanizer check and still sounds nothing like my published work—no AI tells, but also no voice. Sterile, voiceless prose is as detectable as slop; it just gets detected by different readers.
What I needed wasn’t a list of patterns to avoid. I needed a calibration against my own writing at its best.
Introducing Voice‑Humanizer
I built voice‑humanizer on the same foundation as blader’s tool, keeping the original 24 patterns and adding three new ones from a community PR. The key addition is a CORPUS.md file containing your own published writing. The skill extracts a voice fingerprint from this corpus before checking anything else.
The workflow is now:
- Voice check – compare the draft to your fingerprint.
- AI pattern check – run the generic Humanizer patterns.
When it flags something, it doesn’t just say “this pattern looks like AI.” It says, for example, “this reads as Claude because it uses three parallel items where your corpus shows you compress to two. Here’s what you’d likely do instead.” This provides more actionable feedback.
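The repository doesn't publish its internals in this post, but the two-step idea can be sketched. The metric names below (em-dash rate, average sentence length, rate of three-item parallel lists) are illustrative stand-ins I chose, not the actual signals the skill extracts, and the `tolerance` threshold is an assumption:

```python
import re

def fingerprint(text: str) -> dict:
    """Extract a few crude voice metrics from a body of text.
    These are placeholder signals for illustration only."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = text.split()
    return {
        "em_dash_per_100_words": 100 * text.count("—") / max(len(words), 1),
        "avg_sentence_words": len(words) / max(len(sentences), 1),
        # "Rule of three": sentences containing an "x, y, and z" list.
        "triple_list_per_sentence": sum(
            1 for s in sentences if re.search(r"\w+, \w+, and \w+", s)
        ) / max(len(sentences), 1),
    }

def voice_check(draft: str, corpus: str, tolerance: float = 0.5) -> list[str]:
    """Flag draft metrics that drift more than `tolerance` (relative)
    from the baseline extracted from the writer's own corpus."""
    base, cur = fingerprint(corpus), fingerprint(draft)
    return [
        k for k, b in base.items()
        if abs(cur[k] - b) > tolerance * max(b, 1e-9)
    ]

corpus = "I write short sentences. I keep lists to two items. I avoid dashes."
draft = "This is great—truly—very much so. Fast, simple, and clean tools matter."
print(voice_check(draft, corpus))
```

Run against a corpus that never uses em dashes or three-item lists, this flags both habits in the draft while leaving sentence length alone, because it stays near the corpus average.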
How It Solves False Positives
Because the fingerprint tracks both what you reach for and what you avoid, the tool can distinguish stylistic choices that look like AI tells but are actually part of your voice.
Example: My writing uses em dashes deliberately—once per piece, structurally. A generic Humanizer would flag that, but Voice‑Humanizer won’t, because the pattern appears in my corpus. The same applies to any other stylistic habit that would otherwise be a false positive.
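The arithmetic behind that suppression is simple. As a hypothetical flag rule (the `margin` value and rates here are made up for illustration): flag a pattern only when the draft's rate exceeds the writer's own corpus rate by a margin, rather than comparing against a generic near-zero "humans rarely do this" baseline.

```python
def should_flag(draft_rate: float, corpus_rate: float, margin: float = 0.5) -> bool:
    """Flag only when the draft exceeds the writer's own baseline by `margin`."""
    return draft_rate > corpus_rate * (1 + margin)

# One structural em dash per 1,000 words, matching the writer's corpus:
print(should_flag(draft_rate=1.0, corpus_rate=1.0))  # → False
# The same draft rate judged against a generic near-zero baseline:
print(should_flag(draft_rate=1.0, corpus_rate=0.1))  # → True
```

The same comparison, applied to any stylistic habit, is what turns a generic tell into an accepted part of the voice.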
Using Voice‑Humanizer
You can try it yourself. The repository is public:
- GitHub: (repository URL goes here)
CORPUS.md is git‑ignored, so your writing stays private. CORPUS.example.md shows the expected format, and SETUP.md contains five questions to help you extract your own voice fingerprint before you start.
Note: Voice‑Humanizer won’t work without a corpus. This is intentional—without a personal baseline, the tool can’t be calibrated to your voice.
Credits
Thanks to blader for the original foundation—the pattern list and skill format. Voice‑Humanizer solves a narrower problem for a specific kind of writer: someone who has written enough to know what their best work sounds like and doesn’t want AI assistance to flatten it.