Getting Into AI, With a Human in the Loop
Well, if Artificial Intelligence (AI) were a fictional soccer player —
it would 100% be Roy “fucking” Kent. Yeah — I've been rewatching some Ted Lasso.
Blunt. Relentless. Everywhere you look. And somehow, exactly what the situation calls for.
It's here, it's there, it's every-fucking-where AI.

It's here, it's there, it's every-fucking-where AI
But let's back up.
Here's the short version of my sometimes messy AI writing journey so far. AI art is its own story — for another day.
Writing consistently has eluded me — four decades and counting. Last year was revision. Goals. Objectives. Priorities.
Now — next actions.
Writing on a routine basis — sharing the rabbit holes I fall into, what I find at the bottom, and what I learn climbing back up. The tools I'm testing. The things I'm building or breaking. The shiny tidbits worth passing on.
I'm plenty curious. Momentum, though — ugh.
Which is why I brought in a coworker.
To speed things along. To have a sounding board. An editor with opinions. Something that could deal with my shitty starts, stalls, rewinds, and overthinking without losing patience.
I brought in AI. Specifically, ChatGPT and Claude.
Let's get into it.
I write for clients — website content, punchy blurbs, blog posts.
Last year I started using AI on client work when it made sense and the client was on board. It felt inevitable. Faster. Sometimes better. Sometimes absolutely not. Either way, it took time — teaching the tool the client's voice, refining it, reining it in when it got a little too… AI.
Writing for clients is like building their websites. Focused. Contained. It has rails.
Writing for myself doesn’t.
Too many ideas. Too many directions. Too many stories I haven't gotten to yet. I'm not a fast writer. I can disappear into trying to say something "right" and surface an hour later with nothing. Notes everywhere — digital and analog, going back 20-plus years. Laptop, desktop, shelves, notebooks, apps I've half-abandoned. It's too much. I've probably forgotten more than I can recall at this point.
So. AI (ChatGPT and Claude) as a digital coworker and editor — to get the words down and shaped into stories.
The idea: teach it my voice — the tone, the humor, the level of polish I'm after — and let it keep the wheels turning. I bring the raw material. Ideas, rabbit holes, rough drafts. It helps me shape something readable without losing what makes it mine.
For the trial run, I wrote “On Writing Here — Clear, Brief, Human” using AI — a quick scene-setter for what I’m doing here and what you can expect.
It started well.
AI trained (dialed into voice and tone) — check. Base content provided — check. A decent prompt, open enough to breathe — check. Revisions. Revisions. More revisions — check.
The middle was the sweet spot. Faster turnaround. Clean and tight. It sounded like me. It sounded like the voice I had in mind.
Then came the “fuck all” moment.
A paid AI detector confidently called it 65% AI-written. Nope. Worse — parts I’d written 100% myself, my words, my structure, got flagged as 100% AI. Meanwhile, a few lines I knew were heavily AI-assisted slid by with barely a blip.
Here comes the rabbit hole.
Other detectors. Same story, different numbers. One said 0%. Another said 100%. Most landed somewhere in the chaotic middle. Most could be tricked. Not totally unexpected — but still annoying.
The thing is, AI and I aren't strangers. I've been watching these tools for years — testing them, using them in real situations, arguing with them, learning where they're useful and where they're not. The opinions around AI are hard to avoid, too. Ethics, use, the future — news feeds, search, the general hum of the internet.
Still. The variation in results was wild. The commonality?
After hours of surfing, reading, and testing detectors, I landed on one conclusion: these detectors are inaccurate and unreliable. A.K.A. total bullshit. At least right now. At least the ones I tried.
Knowing these detectors were messy — and that a lot of the “AI” flagging seemed tied to patterns, punchy statements, and paced cadence — I did what any sane person would do and complained to ChatGPT.
Me: I’m sort of pissed that the parts I wrote are scanning through AI detectors as AI generated when they weren't. I completely rewrote parts but using any sort of strong cadence scans as AI written and of course the em dash.
ChatGPT: Yeah, that’s maddening — and you’re not imagining it. A lot of “AI detectors” (including the vibes Grammarly surfaces) over-index on predictability: parallel structure, punchy aphorisms, repeated sentence frames, and clean rhythm. Those are also… just normal tools of good writing. So when you write well (especially in a manifesto-ish piece), it can get flagged.
A few things that tend to trigger those scanners:
- Parallel cadence
- Short, stacked sentences
- Very even sentence lengths
- High polish, low messiness
- Em dashes — not inherently AI, but common in polished editorial tone right now
So there it was. Decent writing. Patterns to reinforce. Cadence for readability. Polish for smoothness. And my beloved em dashes for visual distinction. All of it, apparently, suspicious.
I went deeper. It's wild how many solid writing practices can register as AI-written. I'm not unpacking all of it here — if you're curious, go ask ChatGPT, DuckDuckGo, and Google.
AI seems inevitable. Especially when you work in the digital space and need to keep content creation moving forward.
So here’s my stance on AI. And how I plan to use it for writing.
Stance
AI’s useful because I’m human. I stall. I overthink. I rewrite the same paragraph six times. It helps me get unstuck, clean things up, catch grammar issues, and keep my voice while I do it. Co-writing is fine. Letting AI invent facts isn’t.
I find AI a bit heavy-handed — exhausting and exasperating in its editing. To be fair, the more it learns the voice, the better it gets… and the less exhausting the whole process feels.
Research is where I get wary. Using AI as the only tool — letting it play “subject matter expert” — is a hard no. Source info can be wrong, and relying on it creates a circular, inaccurate loop. Bad info in, bad info out. Confidence doesn't make it true.
Use
- Generate faster drafts — yes
- Revisions and rewriting — sometimes
- Improve consistency — yes
- Research — 50/50 (and only with additional verification)
- Idea collaboration — yes
- Grammar checker — yes
- Editor — yes
- “Write an article on a topic and I’ll just post it” — nope
AI is here to stay — at least for the foreseeable future. Using it to stay effective, efficient, and competitive makes sense. But keeping the human voice and thought process is the point. That’s where the authenticity lives.
A human has to be involved. To choose what matters. To shape the story. To create connection.