Artificial Intelligence has rapidly entered the creative world—and with it, a storm of debate. As someone who’s personally used AI in my writing, and who has faced public scrutiny for it, I want to speak plainly about where I stand now, and the lessons I’ve had to learn the hard way.
AI Helped Me Speak, But It Didn’t Make Me Heard
I first turned to AI out of necessity. I have dyslexia, and for someone like me, the idea of a tool that could help with grammar, structure, and continuity felt like freedom. It meant I could focus on the emotional truth of what I was trying to say without constantly second-guessing sentence structure.
And in many ways, it worked.
AI helped me draft ideas, catch errors, and get past the fear of the blank page.
But what I didn’t fully understand back then—especially during the build-up to Willy’s Chocolate Experience—was just how much responsibility comes with using tools like this.
Because when the backlash came, and the spotlight hit, one question kept echoing:
“Did a human write this?”
The Tools Can’t Be the Voice
Let me be clear:
AI can be useful—incredibly useful. It can brainstorm, explore, summarise, and fix.
But when it starts doing too much—when it crosses the line from helping to speaking for you—the result can be a disaster.
Here’s what I’ve learned:
- AI doesn’t carry emotion.
- It doesn’t understand shame, or grief, or nuance.
- It can structure a sentence—but it can’t own it.
During the aftermath of the event, I had to confront this hard truth:
If you use AI in your process, the public still holds you accountable for what’s said, felt, and promised.
And they’re right to.
The Real Ethical Questions
Three issues kept me up at night, and I want to face them head-on.
1. Authenticity
People know when something’s hollow.
They don’t want machine-plotted paragraphs. They want you. Your flaws, your fire, your edge.
When a piece of writing lacks heart, readers feel it. And that’s what makes the difference between something that connects—and something that collapses.
2. Responsibility
If AI writes it, who owns it?
If it makes a mistake, who takes the hit?
That answer is simple: you do.
In my case, I did. Even when AI had a hand in descriptions or copy, my name was on it.
So I stood in the fire and took the weight.
And I’ll never outsource that weight again.
3. Transparency
People aren’t just reading what you say.
They’re asking who really said it.
And in hindsight, I didn’t make that clear enough.
I didn’t disclose where AI was used, how, or why. And that lack of clarity—however unintentional—created a trust gap.
Moving forward, I’ll never make that mistake again.
Where I Stand Now: Human First
I still use AI. But it no longer leads.
It supports. It sharpens. It helps me get the scaffolding up.
But the voice?
The soul?
The intention?
That’s all mine.
Here’s what that looks like in practice:
- Brainstorming & Concepting: I use AI to bounce ideas. But the real choices? I make those.
- Grammar & Dyslexia Support: I’ll continue using AI to help me polish sentences. But the heart of the message is always mine.
- Research & Speed: I fact-check everything. AI makes research quicker, but never replaces human verification.
- Creative Integrity: If AI plays any major role in content creation, I’ll say so. Not because I’m afraid of being questioned—but because I respect the reader.
Final Word
I’ve learned these lessons publicly. Sometimes painfully.
But I’m grateful for them.
AI is not the enemy.
But it’s not the artist, either.
It’s a paintbrush—not a painter.
And when the work has my name on it, I promise—
it will come from my voice,
my experience,
and my humanity.
Because I didn’t survive everything I’ve been through
just to let a machine tell my story for me.