OP-ED - AI in Africa's context gap: a PwC partner, a Wits researcher, and a Ghanaian founder walk into the same problem

A PwC partner says African organisations' AI tools are 'pretty useless' without clean data. A Wits computer scientist explains why. And a Ghanaian AI founder is betting he can close the gap. Part 2 of 2


There was a moment during last week’s anchor interview with Mark Allderman, PwC South Africa partner and continental lead for the firm’s technology and data capabilities business, when I couldn’t help myself. He had just finished explaining how most African organisations haven’t solved basic data integration, and how their AI tools are, in his words, “pretty useless and in some cases risky” without clean data.

The implication was obvious. “Well done to you, Mark,” I said. “I see what you did there. It sounds like the tension is alleviated by enlisting PwC to close the gap.” He laughed. “No, but on a serious note…”

I was being a little cheeky, but he really wasn’t selling so much as sharing what struck me as a fairly compelling articulation of a gap that firms desperately need filled, and one his business is actively serving. That articulation has been rattling around my head alongside a couple of other conversations that, between them, point somewhere worth exploring.


OP-ED - The US$140-million (R2.4-billion) reality check
PwC’s latest Africa Cloud Business Survey says 98% of African organisations plan to expand their cloud architecture and 37% are implementing agentic AI. A sit-down with the man behind the numbers unearthed qualifications more revealing than the statistics.

I attended an interactive lecture recently hosted by Future in the Humanities (FITH) at Wits University, where computer scientist Devon Jarvis, a lecturer at Wits and director of the Cognition, Adaptation and Learning (CAandL) Lab, was presenting striking research findings on the implicit biases of large language models. One point he made has stuck with me.

LLMs are built on what linguists call a writer-responsible foundation. The system needs you to provide all the context. Unlike a human colleague who shares your office, your country, your industry, your Monday morning, the model knows nothing about you unless you tell it.

Jarvis put it plainly: “If you want to get a good response out of [Chat]GPT, you have to describe far more of your reality than you would if I was just trying to get a good, helpful response out of any one of you.”

That reframes much of what Allderman told me in Part 1: what he dubbed the valley of despair, meaning tools that are useless without proper data, and the 57% of organisations whose AI tools lack full access to their own documents. Through Jarvis’s lens, context is a major bottleneck.

African enterprises are trying to extract value from systems that demand a quality of input most of them cannot yet provide. Their data is messy. Their processes are poorly mapped. Their people are still learning how to articulate what they need in a way the system can act on. And prompting, as Jarvis pointed out, is an emerging skill that favours affluence, English fluency, and people who already understand their own domain well enough to know what to ask for.

As per Allderman’s slick sell, this is, in effect, what consultancies provide: context and industry knowledge. It’s the translation between what the business needs and what the technology requires. When Allderman described PwC helping organisations understand their processes, clean their data and architect their systems, he was describing context provision at relative scale. No doubt, at this moment in Africa’s AI-led digital transformation, that service is genuinely necessary.

And yet the same AI tools that create the context gap are also, gradually, closing it. Allderman described this candidly. He talked about a tool called Eraser that his architects now use. “You literally sit there and say, I want an application architecture for a large bank covering da-da-da, you prompt it a little bit, go through a few things, [and] it builds out the architecture for you.” Then he caught himself. “We would have had staff that would have sat there and agonised around lines, [pondering] where the system flows and what it talks to? That’s a learning experience. And now it’s, hang on, I just did it myself. Why do I need that team of five people?”

I pointed out that it's likely “bye-bye” to revenue from someone’s P&L within his own business. After all, his team can now deliver a data platform in months where it used to take a year and a half, and Allderman says they’re passing the efficiency to clients. 

“At the moment, we’re giving organisations the benefit of the fact that we can do stuff a lot quicker,” he said. Whether PwC should charge more for faster delivery, or price for value rather than hours, apparently “hasn’t happened yet”.



I have some personal experience of this tension. Several years ago, long before AI was a business trend, African Tech Roundup was roped into a business school corporate education programme serving one of Africa’s largest banks, retraining all 150 of their IT team leads into ‘practice leads’. We later discovered the process was, in part, a cull that surfaced who was capable of strategic thinking alongside their technical ability. (If I recall correctly, about a third didn’t make the cut.) The question AI has made urgent was already there: can you provide turn-key context and add strategic value, or can you only execute instructions that may well be on track to being outsourced to an AI tool or agent?

I should disclose my own position here. I cover these shifts as a journalist. I also navigate them as a consultant who uses AI tools daily to deliver services and support organisations through change. Sitting in both seats is humbling and, I’ll admit, exciting in equal measure. My stack of skills, experience, expertise, resources and connectivity, augmented by AI as a tool (and I do mean a tool, not the unhelpful humanised imaginary of AI as a sentient entity actively trying to replace me), is more compelling now than before the wave hit. I’m fairly confident of that. How it all plays out, for me, for PwC, for the respective stakeholders we serve, is a different question.

AI startup founder Isaac Manu Sarfo, whose startup Papermap I’ve written about previously, is navigating the same uncertainty from the product side. In a recent essay, he cited an estimate from Ben Stancil, co-founder of Silicon Valley analytics platform Mode, that 75% of employees in a typical large enterprise are data practitioners in some form, yet the entire data ecosystem has been built for the small minority of specialists. The operators, managers and executives who actually run the business are left filing tickets and waiting for answers from their own data. 

Sarfo’s bet is that the translation layer between the person with the question and the system with the answer is where the real value sits. Whether that bet pays off remains to be seen.


OP-ED - Cue ominous film score: is the AI agent apocalypse upon us?
Last week, an AI startup watched its entire value proposition ship as a native feature. This is the story of what that moment reveals — and what some are betting it doesn’t.

I’ll take a small credit here. Sarfo says the feedback that reoriented his pitch came from a conversation with me. I told him the “data democratisation” manifesto is a dead end. Real businesses care about the pay stub: a simple, unglamorous utility that solves a bleeding-neck problem and shows undeniable value to every stakeholder who looks at it. Until a use case is specific enough that the person across the table can feel it, I’m fairly certain that the abstraction will not land (gracefully or at all).

Jarvis, Allderman and Sarfo are describing the same gap from different vantage points: the distance between what these systems need to perform and what most people and most African enterprises can currently give them.

I’ve argued in this column’s AI disruption series that every major technology deployment that ignored the social system alongside the technical one produced the opposite of its intended outcome. The British coal mines told us that in 1951. The context gap may well be this generation’s version of the same lesson. The technology works. But the systems around it, the data, the skills, the commercial models, have not been redesigned to receive it… at least not quite yet.


OP-ED - The mine, the machine, and the intern
From post-WW2 coal mines to modern codebases, history suggests AI transformation will rise or fall not on technical capability alone, but on whether organisations deliberately redesign the social systems that surround the machine.

Who closes that gap, at what cost, and whether the value accrues to the enterprise or the intermediary, are questions none of us can yet answer (honestly) with certainty. PwC et al are working on it from the consulting side. Startups like Papermap are working on it from the product side. Researchers and digital inclusion advocates like Jarvis are working on it from the research and advocacy side. I’m working on it from both the journalism and the advisory side…

*Deep breath*

We shall see…

This is Part 2 of a two-part series. Read Part 1 here.

Editorial Note: A version of this opinion editorial was first published by Business Report on 07 April 2026.