OP-ED: Zero Human Error? What AI’s Financial Wins Might Be Hiding
A thirty percent reduction in manual work, zero human mistakes, realised in weeks. Those were the claims of a vice president of engineering at a leading global software company when I spoke with her recently about her firm's AI deployments in financial services.
Impressive? Sure. But as I told her, I'm starting to question whether deriving unprecedented cost efficiencies and reducing human error are the holy grail we think they are.
These concerns have been reinforced by a sobering warning from Meredith Whittaker, president of Signal Messenger, in an on-stage fireside chat with Canva chief evangelist Guy Kawasaki at South by Southwest (SXSW) in Austin, Texas, a couple of months ago.
With global companies like Shopify, Fiverr, and Microsoft making AI usage mandatory for employees across all roles and levels, Whittaker's caution feels particularly urgent. In summary: Beware the allure of handing unprecedented control to AI agents without discernment.
Signal standard
If you're wondering why Whittaker's voice carries weight on these matters, consider what Signal represents. Unlike WhatsApp or Telegram, Signal is a nonprofit, open-source messaging service that has built its entire existence around one principle: protecting user privacy without compromise.
No ads, no data harvesting, no backdoors, no shareholders demanding growth at any cost. The platform is widely endorsed and used by journalists, activists, and professionals at privacy-conscious organisations and agencies globally.
This isn't just idealistic posturing. Signal's model illuminates the profound challenges of maintaining ethical practices when commercial incentives drive technology adoption. As Whittaker put it during the SXSW conversation: "You can't afford to risk that kind of pressure in an ecosystem where profit is created via practices that are diametrically opposed to what you stand for."
I'm reminded of a mate who consults for the UK banking industry on data governance and security. Several years ago, he left WhatsApp entirely. When I asked why, he explained that once you understand how financial surveillance actually works, you make different choices about which platforms you trust.
That's the thing about privacy: once you truly grasp what's at stake, convenience starts feeling, well, less convenient.
Agent access problem
Whittaker's specific warnings about "agentic AI" deserve careful attention. During the SXSW session, she outlined what an AI agent would need to perform even basic tasks like booking a concert ticket:
"It would need access to our browser and ability to drive that. It would need our credit card information to pay for the tickets. It would need access to our calendar, everything we're doing, everyone we're meeting. It would need access to Signal to open and send that message to our friends."
Let's make this concrete. Imagine asking an AI agent to book a simple business flight. Here's what it would need:
Your calendar to check availability. Your email to access travel confirmations and loyalty programme details. Your banking information to make payments. Your location data to suggest optimal departure times. Your travel history to understand preferences. Your contact list to inform colleagues of your travel plans. Access to your company's expense management system.
That's not just convenience; that's comprehensive digital surveillance with "root permission" across your entire digital life.
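To make that "root permission" point tangible, here's a sketch of what such an agent's permission request might look like if it were spelled out as a manifest. Everything in it, the AgentPermissionManifest type, the scope names, the travel-assistant agent, is hypothetical and invented for illustration; it is not drawn from any real agent SDK.

```typescript
// Hypothetical permission manifest for a flight-booking agent.
// All names here (AgentPermissionManifest, the scope strings, the agent id)
// are invented for illustration; they come from no real agent SDK.

interface AgentPermissionManifest {
  agent: string;                        // which agent is asking
  task: string;                         // the narrow task it was given
  scopes: string[];                     // the broad access it needs to do it
  canSpendMoney: boolean;               // may it initiate payments?
  retention: "session" | "indefinite";  // how long granted data persists
}

const flightBookingAgent: AgentPermissionManifest = {
  agent: "travel-assistant",
  task: "book one business flight",
  scopes: [
    "calendar:read",        // check availability
    "email:read",           // confirmations, loyalty-programme details
    "payments:charge",      // pay for the ticket
    "location:read",        // suggest departure times
    "travel-history:read",  // infer preferences
    "contacts:read",        // tell colleagues about the trip
    "expenses:write",       // file the claim
  ],
  canSpendMoney: true,
  retention: "indefinite",  // grants are rarely scoped to a single task
};

console.log(
  `${flightBookingAgent.agent} requests ${flightBookingAgent.scopes.length} scopes to ${flightBookingAgent.task}`
);
```

Laid out like this, the asymmetry is plain: the task is narrow, but the grant is broad and, in most current products, open-ended.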
Efficiency’s hidden price
This mirrors the tension I explored some weeks back regarding potential stablecoin use cases in African fintech.
Like AI agents, stablecoins promise remarkable efficiency gains: transfers for less than a cent in under two seconds, compared to traditional remittances that can cost 20% of transaction value. But both represent seductive solutions addressing immediate pain points while potentially creating longer-term dependencies on infrastructure we don't control.
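For a sense of scale, here's a back-of-the-envelope comparison using the figures above; the $200 transfer amount is my own illustrative assumption, not a quoted statistic.

```typescript
// Back-of-the-envelope remittance cost comparison (illustrative figures).
const transfer = 200;         // USD; an assumed, typical remittance amount
const legacyFeeRate = 0.2;    // worst case: 20% of transaction value
const stablecoinFee = 0.01;   // "less than a cent" per transfer

console.log(`Legacy rail fee:     $${(transfer * legacyFeeRate).toFixed(2)}`); // $40.00
console.log(`Stablecoin rail fee: $${stablecoinFee.toFixed(2)}`);              // $0.01
```

A $40 fee against a fraction of a cent, on the same $200 transfer, is exactly the kind of gap that makes the trade-offs easy to overlook.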
The pattern is consistent. We're offered dramatic improvements to obvious problems, but the trade-offs are buried in technical complexity and user agreements nobody reads.
Stablecoins promise financial inclusion while tying emerging markets to legacy dollar-denominated systems. AI agents promise productivity while requiring unprecedented access to our most intimate data.
Corporate calculation
For business leaders, especially in Africa, where digital transformation constantly promises to ‘leapfrog’ traditional infrastructure, the productivity improvements feel irresistible. When you're competing globally with limited resources, a 30% reduction in manual work isn't just attractive: it feels necessary for survival.
But here's what Whittaker's analysis suggests we should consider: What happens when these systems become indispensable, and then the terms change? What if the companies providing these AI agents face pressure to monetise that data in ways that weren't initially disclosed? What if geopolitical tensions affect access to these technologies?
During a LinkedIn exchange about AI agent adoption last week, a commenter, Elliot Kennefick, suggested that on-device processing might solve these privacy concerns. My response was candid: "I struggle to place much faith in on-device agents, especially knowing how even harmless-seeming smart devices can be exploited or weaponised."
This isn't just about where the processing happens: it's about who controls the platforms and tools we become dependent upon.
Discernment call
This isn't advocacy for rejecting technological progress. It's an invitation to approach AI adoption with the same discernment we'd apply to any other significant business decision.
When evaluating AI agents, consider not just the operational advantages, but the access you're granting and the dependencies you're creating.
Ask hard questions: What data does this system require? Who controls the underlying infrastructure? What happens if access is restricted or terms change? Are there alternative approaches that provide similar benefits with less centralised control?
Whittaker's closing insight at SXSW deserves reflection: "I think we need to be really careful. When I think about the immediate concerns, not simply the history of AI and the fact that it's predicated on this larger surveillance model, there's a real issue right now of the undermining that AI systems are poised to do to privacy and security guarantees."
The magic genie bot that promises to handle "the exigencies of life" while your "brain sits in a jar" comes with strings attached. Those strings might feel invisible while the productivity benefits are flowing, but they're very real when priorities shift or control changes hands.
As African businesses navigate the promise of AI transformation, the question isn't whether these tools will deliver on their efficiency promises; they likely will. The issue is whether, in our rush to optimise for today's problems, we're creating tomorrow's vulnerabilities.
Editorial Note: A version of this opinion editorial was first published by Business Report on 01 July 2025.