OP-ED - The mine, the machine, and the intern

From post-WW2 coal mines to modern codebases, history suggests AI transformation will rise or fall not on technical capability alone, but on whether organisations deliberately redesign the social systems that surround the machine.

Last week, I made the case that the AI moment has a prediction problem: we are so busy forecasting the end state that we have stopped reading our actual horizon. That piece, How solo sailors sleep, argued that solo sailors don’t forecast the ocean; they set an alarm and pay attention. This week, history offers a humbling parallel.


In the years following the Second World War, British coal mining underwent what its administrators considered a straightforward modernisation. The industry had been nationalised. New mechanised longwall systems replaced the old manual methods. Output would increase. Efficiency would improve. The logic was solid.

What actually happened is now a foundational case study in why airtight logic so often produces leaky results.

What the miners knew that the managers didn't

Researchers from the Tavistock Institute of Human Relations, sent to observe the transition, found something the administrators had not accounted for. 

As it happens, I’m indebted to management consultant Bruce Msimanga, who resurfaced this case study over a recent family lunch with the casual authority of someone who has clearly spent many years thinking about how organisations actually work.

The old shortwall method, for all its inefficiency, had organised miners into small, self-regulating teams. Each team controlled its own pace, its own division of labour, its own social logic. The new longwall system fragmented those teams across three shifts, replaced the informal bonds with formal job classifications, and handed coordination to management rather than leaving it with the miners themselves.

Productivity did not improve. Absenteeism rose. Militancy rose. While the technology worked exactly as designed, the system around it collapsed.

The Tavistock researchers coined a term for what they had found: the sociotechnical system. The failure was not technical but sociotechnical. The organisation had optimised one system, the technical one, while dismantling the other, the social one. In practice, the two were inseparable. Improving one while neglecting the other produced the opposite of the intended outcome.

This was 1951. On a Monday morning in February 2026, a version of the same story unfolded in financial markets.

13.2% down in one morning

When Anthropic announced that Claude Code can automate the exploration and analysis phases of COBOL modernisation, compressing what once took years of consultant-led engagement into quarters of AI-enabled work, IBM's share price dropped 13.2%, its steepest single-day decline since October 2000. 

The reaction was rational. COBOL is not a legacy curiosity. It runs the core systems of global banking, insurance and government, including the banking and insurance infrastructure of African enterprises that have been running mainframe operations for decades. Modernising a COBOL system once required armies of consultants spending years mapping workflows; automating the exploration and analysis phases collapses that timeline from years to quarters.

But the coal mine story suggests that the arrival of a more capable machine is rarely the conclusion of the matter. Rather, it’s the beginning of a deeper question: what happens to the system around it?



Permission to change

One instructive counterpoint comes from Shopify. Pragmatic Engineer newsletter writer Gergely Orosz recently revealed that Farhan Thawar, Shopify's head of engineering, distributed AI coding licences to every team with no cost limit and waited to see what happened. Apparently, most teams barely used them. One team's token consumption stood out. Thawar looked closer. There was an intern on that team.

The intern, given a two-week task, finished it in a day. It wasn’t so much exceptional brilliance at play as the fact that there was no legacy workflow to defend. No professional identity built around a particular method. The intern just used the tool. When the rest of the team noticed, something interesting happened. They did not feel threatened. They felt curious. The intern posed no existential threat, so they started learning from the youngster instead of resisting disruption.

Thawar's response was to hire an intern for every single Shopify team. The aim wasn’t to replace senior engineers, mind. It was to give every team a non-threatening catalyst for the same curiosity. The social system didn’t have to be dismantled. It just had to be given (permissionless) room to change.



Still buying machinery

Many an organisation is doing the 2026 equivalent of longwall mechanisation: deploying technically sophisticated AI capability into social systems that have not been redesigned to receive it. The absence of productivity gains is closely measured and diagnosed as a technology problem, and the response is more technology, sweeping human culls, or both. I reckon the Tavistock researchers would recognise the pattern in under five minutes.

IBM’s Monday wasn’t (necessarily) a verdict on the company’s future. It was, however, a sobering signal. The COBOL modernisation wave is real. But so is the question of what happens to the organisations, and the people inside them, who meet it without first answering the social system question.

That question, it turns out, is quietly being answered by at least one company on my radar. And it began closer to home than most might expect.


This is the second of a three-part series. Part 3, What Good Navigation Looks Like, publishes next Tuesday.


Editorial Note: A version of this opinion editorial was first published by Business Report on 03 March 2026.