AI output often sounds like it was written by a committee. Even with good prompts and style guides, there’s a gap between what Claude produces and what I’d actually write.

So I built a Claude Code skill that systematically closes that gap.

The Problem

My content was readable but not recognisable. Readers who know me could tell something was off—the voice drifted, the rhythm was too uniform, the hedging was excessive.

Editing manually worked, but I was making the same fixes repeatedly: cutting throat-clearing intros, breaking up same-length sentences, replacing vague claims with specific examples.

The 6-Pass Workflow

The skill applies six editing passes to any content:

1. Structure (Organisation): fix openings, flow, conclusions
2. Voice (Profile matching): align with my VOICE.md characteristics
3. Slop (AI tells): remove generic phrases and patterns
4. Specificity (Depth): replace generic with specific
5. Rhythm (Flow): read-aloud test, sentence variation
6. Authenticity (Final check): "Would readers recognise this as mine?"

Each pass has a specific focus. The skill doesn’t try to do everything at once—it builds quality through accumulated refinement.
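The sequential structure can be sketched as a simple fold over the draft. This is a hypothetical illustration, not the skill's actual implementation: the pass functions here are toy stand-ins for what are really prompt-driven edits.

```python
# Hypothetical sketch of the six-pass workflow: each pass is a function
# that takes the current draft and returns a refined draft. Real passes
# would be prompt-driven; these toy lambdas just show the shape.

def run_passes(draft, passes):
    """Apply each editing pass in order, accumulating refinement."""
    for name, edit in passes:
        draft = edit(draft)
    return draft

# Placeholder passes mirroring the list above (names only are real).
PASSES = [
    ("structure", lambda d: d.strip()),
    ("voice", lambda d: d),                              # align with VOICE.md
    ("slop", lambda d: d.replace("delve into", "examine")),
    ("specificity", lambda d: d),
    ("rhythm", lambda d: d),
    ("authenticity", lambda d: d),
]

print(run_passes("  Let's delve into the topic.  ", PASSES))
# → "Let's examine the topic."
```

The point of the fold: no single pass has to be perfect, because each one only needs to improve its own dimension.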

The 30-40% Rule

Even with a perfect style guide, expect to edit 30-40% of AI output. This isn’t a bug—it’s the sweet spot.

  • Editing >50%: Your style guide or prompts need work
  • Editing <20%: You’re accepting too much generic content
  • 30-40%: Significant time savings while maintaining voice

The skill tracks edit percentage and reports it. Over time, patterns emerge about what consistently needs fixing.
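One plausible way to measure that percentage (an assumption; the skill's actual metric may differ) is a similarity ratio between draft and final text, such as Python's stdlib difflib:

```python
import difflib

def edit_percentage(original: str, edited: str) -> float:
    """Rough share of the text that changed, via sequence similarity.
    This is a sketch, not the skill's real metric."""
    ratio = difflib.SequenceMatcher(None, original, edited).ratio()
    return round((1 - ratio) * 100, 1)

draft = "In today's fast-paced world, testing matters a great deal."
final = "Testing matters: it catches regressions before users do."
print(f"{edit_percentage(draft, final)}% edited")
# >50% would suggest the style guide or prompts need work;
# <20% would suggest too much generic content is getting through.
```

Logging this number per piece is what surfaces the recurring-fix patterns over time.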

Get It

View on GitHub

It works best paired with the voice-analyzer skill to create the VOICE.md profile it references.

Voice Analyzer on GitHub

The workflow I use now: draft → slop-detector → voice-editor → slop-detector → final. Each step has one job. The result sounds like me.
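That pipeline can be sketched as plain function composition, with the slop check running both before and after the voice edit. The stage functions below are hypothetical placeholders for the actual skills:

```python
# Hypothetical sketch of the draft → slop → voice → slop pipeline.
# Each stage has one job; slop detection runs twice, because voice
# editing can reintroduce generic phrasing.

def pipeline(draft, slop_detector, voice_editor):
    cleaned = slop_detector(draft)    # first slop pass on the raw draft
    edited = voice_editor(cleaned)    # align with the VOICE.md profile
    return slop_detector(edited)      # re-check after voice editing

final = pipeline(
    "In today's landscape, tests matter.",
    slop_detector=lambda t: t.replace("In today's landscape, ", ""),
    voice_editor=lambda t: t[0].upper() + t[1:],
)
print(final)  # → "Tests matter."
```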