AI output often sounds like it was written by a committee. Even with good prompts and style guides, there’s a gap between what Claude produces and what I’d actually write.
So I built a Claude Code skill that systematically closes that gap.
## The Problem
My content was readable but not recognisable. Readers who know me could tell something was off—the voice drifted, the rhythm was too uniform, the hedging was excessive.
Editing manually worked, but I was making the same fixes repeatedly: cutting throat-clearing intros, breaking up same-length sentences, replacing vague claims with specific examples.
## The 6-Pass Workflow
The skill applies six editing passes to any content:
| Pass | Focus | What It Does |
|---|---|---|
| 1. Structure | Organisation | Fix openings, flow, conclusions |
| 2. Voice | Profile matching | Align with my VOICE.md characteristics |
| 3. Slop | AI tells | Remove generic phrases and patterns |
| 4. Specificity | Depth | Replace generic with specific |
| 5. Rhythm | Flow | Read-aloud test, sentence variation |
| 6. Authenticity | Final check | “Would readers recognise this as mine?” |
Each pass has a specific focus. The skill doesn’t try to do everything at once—it builds quality through accumulated refinement.
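The sequential-pass idea can be sketched in a few lines. The pass functions and the example phrases below are hypothetical stand-ins, not the skill's actual rules:

```python
# Minimal sketch of sequential editing passes (hypothetical stand-ins,
# not the skill's real implementation).

def fix_structure(text: str) -> str:
    # Pass 1 stand-in: trim throat-clearing whitespace
    return text.strip()

def remove_slop(text: str) -> str:
    # Pass 3 stand-in: strip a couple of known generic AI phrases
    for phrase in ("It's important to note that ", "In today's fast-paced world, "):
        text = text.replace(phrase, "")
    return text

PASSES = [fix_structure, remove_slop]  # ...plus voice, specificity, rhythm, authenticity

def edit(text: str) -> str:
    for apply_pass in PASSES:  # one focus per pass, applied in order
        text = apply_pass(text)
    return text

print(edit("  It's important to note that drafts need editing."))
# → drafts need editing.
```

The design point is that each function does one thing; quality comes from the accumulation, not from any single clever pass.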
## The 30-40% Rule
Even with a perfect style guide, expect to edit 30-40% of AI output. This isn’t a bug—it’s the sweet spot.
- Editing >50%: Your style guide or prompts need work
- Editing <20%: You’re accepting too much generic content
- 30-40%: Significant time savings while maintaining voice
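One way to put a number on "edit percentage" (my own sketch; the post doesn't specify the skill's actual metric) is one minus a similarity ratio between draft and final text, using the standard library's `difflib`:

```python
import difflib

def edit_percentage(draft: str, final: str) -> float:
    """Rough edit share: 1 minus the similarity ratio, as a percentage."""
    ratio = difflib.SequenceMatcher(None, draft, final).ratio()
    return round((1 - ratio) * 100, 1)

print(edit_percentage("the same text", "the same text"))  # → 0.0
```

A reading above 50 or below 20 would then trigger the prompt-tuning or acceptance checks described above.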
The skill tracks edit percentage and reports it. Over time, patterns emerge about what consistently needs fixing.
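Spotting those recurring patterns is just a tally. Here's a sketch with a hypothetical edit log (real data would come from the skill's reports):

```python
from collections import Counter

# Hypothetical log: which pass changed something in each document.
fix_log = [
    ("post-1", "slop"), ("post-1", "rhythm"),
    ("post-2", "slop"), ("post-2", "specificity"),
    ("post-3", "slop"),
]
pass_counts = Counter(pass_name for _, pass_name in fix_log)
print(pass_counts.most_common(1))  # → [('slop', 3)]
```

If one pass dominates the tally, that's the signal to fix the prompt or style guide upstream rather than keep editing downstream.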
## Get It
The skill works best paired with the voice-analyzer skill, which creates the VOICE.md profile this one references.
The workflow I use now: draft → slop-detector → voice-editor → slop-detector → final. Each step has one job. The result sounds like me.
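That chain can be read as plain function composition, with the slop check run twice. The step implementations here are stand-ins (the real steps are skills, not Python functions):

```python
# draft → slop-detector → voice-editor → slop-detector → final,
# with hypothetical stand-in implementations for each step.

def slop_detector(text: str) -> str:
    return text.replace("delve into", "dig into")  # stand-in slop fix

def voice_editor(text: str) -> str:
    return text.replace("utilise", "use")          # stand-in VOICE.md rule

def finalize(draft: str) -> str:
    for step in [slop_detector, voice_editor, slop_detector]:  # one job each
        draft = step(draft)
    return draft

print(finalize("Let's delve into how we utilise this."))
# → Let's dig into how we use this.
```

Running the slop check both before and after the voice pass catches generic phrasing that the voice edit itself can reintroduce.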