OP-ED - AI tools are taking us somewhere no human has ever been. Just not in your voice.

A recent Google DeepMind paper and a festival-shortlisted short film surface two persistent anxieties in my journalism practice.


Last week a Silicon Valley startup called Stan launched Stanley for X — billed as an “AI Head of Content for Twitter”, promising the fastest route to your first 10,000 followers. It debuted at number two on Product Hunt, which is a meaningful signal in that world. 

I left a comment on LinkedIn. Something about a low-frequency anxiety I carry about building one's practice on infrastructure one doesn't own (tools, platforms, algorithms) and what might happen if and when it gets pulled. Co-founder Vitalii Dodonov replied: 'Stanley is a co-pilot, not a replacement for your voice. The goal is to help you find your groove so that even if the platforms change, your ability to create high-value content stays with you.'

Thoughtful enough, I guess. No reason to doubt Dodonov’s sincerity or the virtue of his intent, but I'm not sure I completely buy the premise.



A new paper from Google DeepMind researchers reports a finding that complicates things somewhat. Apparently, when people use LLMs extensively to write, their work shifts toward a neutral argumentative stance roughly 69% more often than that of people writing without AI access. They report feeling their work is less creative and less in their own voice. And yet, strangely, respondents report similar levels of satisfaction with the result.

They were quite literally satisfied with losing their voice. That is, assuming they’d noticed anything shift to begin with.

The shift toward somewhere unreal

The paper revealed that when an LLM was explicitly asked not to write for someone, but only to edit work already produced, the findings held. LLMs prompted only to fix grammar still altered the conclusions of essays. They didn’t just smooth style and reorient arguments towards some averaged human voice. They took things toward a region of semantic space, as the researchers put it, where no human-written essay has ever been.

The homogenisation (or flattening, as I think of it) leans into something that reads as impossibly fluent, coherent, well-structured… something that no one actually wrote.

I think about this as I'm editing my own work. I think about it as I'm working with contributors to African Tech Roundup and Future in the Humanities — platforms built on the premise that singular insight, lived context, and genuine disciplinary depth can't be replicated. Recent iterations of editorial guidelines informing my journalism are, in a way, codified instructions to protect what makes a writer's contribution irreducible before they hand anything to the machine, and, indeed, to safeguard them from undue interference when AI tools are used to process content.



Been here before

One of the privileges of having a journalism track record spanning a decade plus is having a reference for what my writing looked like before AI. I'm slightly embarrassed to admit how often I return to what's effectively the most read piece of my writing ever published by a mainstream outlet to check in with myself: this article penned in July 2019 for the BBC about Facebook's proposed Libra currency and what it might mean for Africa. That op-ed, written in an entirely different context, raised the question of whether it would be wise to become too dependent on infrastructure (and currency) we don't control.

I distinctly recall being assigned a strict sub-editor for that work, given the big-brand reputational sensitivities of the subject, which warranted getting the facts tight and ensuring that my independent deductions were stress-tested for logic and clarity.

I honestly can't recall their name, but I remember the edit. They pushed back on things. A lot. And they made it better in ways I wouldn't have reached on my own. 

With enough deliberate effort, there's a version of that collaborative value one might extract from using Claude as an editing tool, and it's clearly the kind Stan claims to deliver through Stanley.



Lately, though, I find myself needing to remind myself and others of what I, we, actually bring to this work. Discernment. Taste. Lived context. First-hand observation and the kind of sensemaking that comes from spending years at the intersection of African technology ecosystems and global digital discourse. Those are not things the tool can replicate. They're also not things that automatically survive contact with it.

The rise of ‘interest media’

There's another thing happening alongside all of this. The algorithmic shift on major platforms — from social media to what some are calling interest media, driven by AI recommendation — is real. And I have many, many complicated feelings about it. 

Something is lost when the proxy mechanisms for genuine human relationship and network-building give way to content marketing at scale. That is, a world led by the personal brand as publisher and everything optimised for reach and (sometimes dubious) digital engagement.

And yet, admittedly, the democratisation of thought-leadership that used to be the exclusive preserve of big brands and legacy publishers with the means to dominate public discourse… well, that's also real. It'd be dishonest not to acknowledge both. I just think the Stanley pitch and, dare I say, Anthropic's pitch for Claude sit precariously to one side of that tension.



When the signal flatlines

Meanwhile, I've spent the past week or so judging entries for SmartPhilm Festival, which opens at CRAFT Addis on April 30th. It’s a competition for short films made on smartphones, which is both the constraint and the creative premise. I'm gutted that I won’t be able to be in Addis to watch them on the big screen with a live audience. One entry keeps surfacing in my thinking regardless.

Spoiler alert. Signal, written and directed by Sagi Sree Hari Varma, is set in a world where WiFi has become as vital as oxygen. Then the signal vanishes. A young girl navigates the resulting disorientation — a world that has, in the film's own description, 'simply stopped breathing.' Given its production values, I wouldn’t be shocked to learn that Varma and his team had more than a stock camera app to work with. But that's part of what makes the work singular: the smartphone constraint and the film's ambition sit in productive tension with each other, which might be a handy description of what we're all trying to do right now with AI tools.

For me, the film's WiFi conceit maps neatly onto my niggling concerns about unhealthy employment of, or over-reliance on, AI. It's about what happens when infrastructure you've come to depend on (invisibly, imperceptibly, and, over time, without discernment, completely) is no longer there.


The poster for Signal (2025), a short film by Sagi Sree Hari Varma, selected for competition at the SmartPhilm Festival during CRAFT Addis (April 30 – May 2, 2026). Source

I don't think the answer is to turn our noses up at the tools. I reckon it's more about knowing exactly what you're carrying that the tool cannot carry for you, and being militant about protecting it.

You can’t use Claude to ghostwrite yourself into thought-leadership any more than you can use Stanley to find a groove you haven't dug yourself. Facts. Bottom line: if the signal flatlines — if the platform shifts, the tool disappears, the algorithm changes again — what remains had better be you and yours.

Editorial Note: A version of this opinion editorial was first published by Business Report on 28 April 2026.