The Silent Shipwrecks, Part 2: Emptying the Coffers
In Part 1, we explored how organisational cultures drift into mediocrity through strategic cowardice, the Bozo Explosion, and the normalisation of deviance. But there's a modern accelerant to this drift - one that feels like progress right up until the moment it reveals itself as catastrophe.
The platform team I described didn't just choose mediocrity through cultural laziness. They were seduced by a technological promise: that automation could eliminate the need for deep expertise. The low-code platform wasn't just a tool - it was a Faustian bargain. Trade your foundational capabilities for immediate productivity gains. Outsource the hard parts. Let the system handle complexity you no longer need to understand.
It felt efficient. It looked agile. The quarterly metrics were beautiful.
Until I asked them to solve a problem the platform couldn't handle.
They'd systematically eliminated the expertise required to work around its limitations.
This isn't unique to one team or one platform. We're watching entire industries make the same bargain at scale, trading decades of accumulated institutional knowledge for the seductive promise of AI-powered efficiency. And like all Faustian bargains, the bill comes due eventually.
The Hollow Middle: When Automation Eliminates Apprenticeship
The most immediate way organisations empty their coffers is by automating away the entry-level roles that serve as training grounds for expertise.
Tools like GitHub Copilot, ChatGPT, and low-code platforms excel at exactly the tasks that have historically trained junior professionals: writing boilerplate code, generating unit tests, documenting functions, creating simple integrations. This creates a seductive economic logic: why pay humans to do what machines can do faster and cheaper?
But those "grunt work" junior roles weren't just about output.
They were the apprenticeship system through which tacit knowledge transferred.
A junior watching a senior debug a race condition wasn't just learning the solution; they were absorbing the diagnostic process, the intuition, the hard-won patterns that can't be codified in documentation. When organisations automate these roles away, they're not just cutting costs. They're severing the knowledge transfer mechanism that would create their next generation of senior experts.
The result is what researchers call the "Hollow Middle" phenomenon - a workforce shaped like an hourglass rather than a pyramid: ⏳
Top tier: Deeply experienced senior engineers who learned their craft before the automation wave, carrying decades of tacit knowledge in their heads.
Middle tier (the void): A vanishing layer of mid-level practitioners. Not because the next generation lacks capability, but because they're being systematically denied the apprenticeship that would build it.
Bottom tier: Powerful AI agents and platforms executing tasks with no human understanding of the underlying mechanisms.
As current seniors retire over the next decade or so, companies risk discovering their coffers are empty. The deep expertise required to handle novel problems, to architect resilient systems, to understand why certain "inefficient" code was actually a critical safeguard - it will have died with the last generation who learned through doing.
Cognitive Atrophy: When Skills Rust from Disuse
The problem compounds because even the seniors who remain start losing capabilities as they rely increasingly on automation to handle routine tasks.
Cognitive scientists call this "skill atrophy." When tasks are offloaded to machines, the brain ceases to expend the energy required to maintain those neural circuits. It's analogous to the "Google Effect" - humans no longer memorise facts because they're easily searchable.
In engineering, we're facing a "Google Effect for Logic."
When an engineer relies on AI to generate the logic for a loop, a database query, or an API integration, they're bypassing the mental simulation that builds deep understanding. Over time, this degrades the ability to "run code in their head" - a critical skill for debugging and architectural design.
More critically, the lack of struggle leads to fragility in resilience. Resilience engineering teaches that operators need to experience failure scenarios to learn how to recover from them. If AI agents fix all minor errors and handle all routine coding, the human operator never builds the mental library of "failure patterns" required to diagnose a major outage.
Technical Debt 2.0: Code Nobody Understands
The code these AI agents generate creates what researchers call "Technical Debt 2.0": not just messy code, but opaque code that no human fully understands. The codebase grows exponentially while degrading in quality, producing what I've seen referred to as "comprehension debt" - code that works, though nobody understands how.
When a Black Box system fails, it fails catastrophically for the user who didn't build it. They cannot "open the hood" because they don't understand the engine. They're dependent on the AI to fix itself. If the AI hallucinates a fix that introduces a new bug, they're trapped in a recursive loop of failure, lacking the first-principles skills to break out.
But here's the insidious part: as AI code generation improves and error rates drop, human reviewers become less effective. When the AI is right 99% of the time, the human brain tunes out. This is "automation bias" - the tendency to trust the machine's output over one's own judgement. When that 100th instance contains a subtle but catastrophic flaw (a security hole, a data leak, a race condition), it slips through.
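The arithmetic behind this is worth spelling out. In the sketch below - every number is invented for illustration - a generator that becomes five times more accurate can still leak *more* defects into production, because reviewer vigilance collapses faster than the error rate falls.

```python
# Hypothetical automation-bias arithmetic. All rates below are invented
# assumptions for illustration, not measurements.

def escaped_defects(changes: int, error_rate: float, catch_rate: float) -> float:
    """Defects that reach production after human review."""
    return changes * error_rate * (1 - catch_rate)

changes_per_year = 10_000

# Early days: the AI is wrong often, so reviewers stay alert.
early = escaped_defects(changes_per_year, error_rate=0.05, catch_rate=0.90)

# Later: the AI is right 99% of the time, so reviewers tune out.
later = escaped_defects(changes_per_year, error_rate=0.01, catch_rate=0.20)

# The "better" generator leaks more defects, not fewer.
assert later > early
print(f"early: ~{early:.0f} escaped defects/year; later: ~{later:.0f}")
```

The point of the sketch isn't the specific numbers; it's that escaped defects are a product of two factors, and improving one while degrading the other can leave you worse off.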
The scenario practically paints itself: AI-generated microservices, in production for eighteen months, experience a cascading failure during peak load. The original developers have left. The remaining team never inspected the generated code. Recovery takes 72 hours instead of 4, because the team is debugging a system nobody understood. Meanwhile, the financial and reputational cost is catastrophic.
The ballast was removed while the hull still looked intact. By the time the list became visible, the ship was already taking on water.
Institutional Amnesia: When the Charts Are Thrown Overboard
There's a crucial distinction between explicit knowledge and tacit knowledge that AI vendors conveniently elide:
Explicit knowledge: Codified information in textbooks, documentation, repositories. AI excels here - it has ingested the internet's supply of explicit knowledge.
Tacit knowledge: The "know-how" and wisdom accumulated through experience. Intuition, context-awareness, the ability to handle ambiguity. It's often unwritten, transferred only through observation and shared experience.
AI cannot capture tacit knowledge because it's not in the training data. It lives in the heads of senior engineers who are currently retiring. As organisations stop hiring juniors who would acquire this knowledge through apprenticeship, it dies with the current generation.
Here's another example: an AI agent, observing "inefficient" code, may refactor it to be "clean," inadvertently removing a safeguard that protected the system from a rare but critical failure mode. A human senior would know not to touch that code - they'd remember the outage five years ago that necessitated that "hack."
This is a Chesterton's Fence violation at scale.
The AI doesn't know why the fence was built, so it tears it down for efficiency. Six months later, when the rare edge case reoccurs, the organisation discovers that the expertise to understand why that fence mattered left with the last engineer who remembered the original incident.
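Here's a hypothetical sketch of what such a fence can look like in code. The function, client API, and incident are all invented for illustration - but the shape is real: a read path that looks needlessly paranoid to any optimiser, human or machine, that doesn't know the outage behind it.

```python
import time

def fetch_account_balance(client, account_id: str) -> int:
    """Hypothetical read path with a deliberately 'redundant' safeguard.

    DO NOT "simplify" the double read. After a primary failover, the
    replica can briefly serve stale balances (see the outage write-up
    from five years ago). Reading twice and comparing catches it.
    """
    first = client.read(account_id)
    time.sleep(0.05)              # give replication a beat to settle
    second = client.read(account_id)
    if first != second:
        # Stale read detected: fall back to the authoritative primary.
        return client.read_primary(account_id)
    return second
```

An automated refactor that collapses this to a single `client.read(...)` produces cleaner, faster code - and silently reintroduces the failure mode the comment was guarding against. The fence only survives if someone on the team still knows why it's there.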
The ship's charts showing hidden reefs, accumulated through generations of near-misses and hard lessons, get thrown overboard because the GPS makes them seem redundant. Until the GPS fails, and the organisation discovers nobody remembers how to navigate.
The Jevons Paradox: More Software, Less Understanding
Here's the economic trap that makes this unsustainable: as AI makes coding more efficient, the total volume of software under management explodes rather than contracts.
This is Jevons Paradox - when technology increases the efficiency with which a resource is used, total consumption of that resource increases. As AI lowers the cost of generating code, demand for custom applications, integrations, and digital systems skyrockets.
The equation is simple: More code generated + Less human understanding = Systemic fragility that scales with adoption.
The trap: this explosion in software volume requires more maintenance, more security oversight, more architectural governance.
But we're simultaneously de-skilling the workforce.
We're increasing demand for high-skill oversight while decreasing its supply.
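The trap can be made concrete with back-of-envelope numbers - every figure below is an invented assumption, not a measurement. Model the maintenance burden as the volume of code times the share of it no human understands: if cheap generation grows the estate 8x while the understood share falls from 80% to 20%, the burden grows roughly 32x even as generation gets cheaper.

```python
# Back-of-envelope Jevons Paradox sketch. Every number is an invented
# assumption for illustration.

def maintenance_burden(code_units: float, understood_share: float,
                       cost_per_opaque_unit: float) -> float:
    """Ongoing cost of caring for code no human understands."""
    return code_units * (1 - understood_share) * cost_per_opaque_unit

# Before AI: 100 units of code, 80% understood by someone on the team.
before = maintenance_burden(100, understood_share=0.8, cost_per_opaque_unit=1.0)

# After AI: generation is 5x cheaper, so (Jevons) the estate grows 8x,
# while the share any human understands drops to 20%.
after = maintenance_burden(800, understood_share=0.2, cost_per_opaque_unit=1.0)

print(f"burden grew ~{after / before:.0f}x while generation got cheaper")
```

Change the assumptions however you like; as long as volume grows while understanding shrinks, the product moves in only one direction.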
Eventually, the cost of maintaining this Frankensteinian infrastructure - fragile, bloated, understood by no one - risks dwarfing the short-term savings in generation. Companies will reach into their coffers for the deep expertise required to fix it and find them empty.
Picture this: It's 2032. A zero-day exploit hits your critical systems. Eight years of AI-generated code. The seniors who understood the architecture have retired. The mid-level engineers who would have learned from them were never hired. Your team is staring at 400,000 lines of code nobody understands.
Recovery isn't measured in hours. It's measured in weeks.
The board inquiry asks: "Who understood this system?" Nobody did. That knowledge was optimised away in 2025.
Why Smart People Make Catastrophic Choices
The question isn't whether senior leaders understand these risks - most do. The question is why they choose short-term efficiency over long-term capability anyway.
The answer is incentive alignment.
The executive who cuts training budgets and automates away apprenticeships won't be there when the coffers run dry. Their performance is measured quarterly. Their compensation is tied to cost reduction. Their career advancement depends on showing "operational efficiency gains" on their CV before moving to the next role.
The benefits accrue immediately. The consequences arrive later. And crucially, they arrive to someone else. This is short-term gain, long-term pain.
This creates what economists call a "principal-agent problem." The agents (executives making decisions) don't bear the full consequences of their choices. The principals (shareholders, future leaders, employees) do.
Consultancies amplify this dynamic. The pattern plays out predictably in transformation pitches: projected "4-5X efficiency gains" through "agentic factories" - one human managing 15-20 AI agents. Decks promise £200-300M in "capacity unlocked". Big numbers designed to hook big fish. 🎣
Then comes the consultant pitch deck: branded proprietary methodology, opaque terminology that signals a sophistication you're not meant to question. It's designed to shut down critical thinking.
Leadership wasn't being fooled. They were being given permission. The consultancy provides political cover ("our competitors are doing this"), quantified justification (£200-300M!), and shared liability when it fails ("we followed industry best practice").
The result: rational individual decisions that aggregate into organisational suicide. Each executive makes the "sensible" choice given their incentives. Collectively, they hollow out the institution.
The executives approving this will collect bonuses for the cost savings. The consequences - the severed apprenticeships, the institutional amnesia, the empty coffers - will arrive in 5-10 years. To someone else's P&L.
This isn't a knowledge problem. It's a courage problem - and FOMO is winning. Leaders who choose capability over cost take career risk. They're betting on outcomes that won't materialise within their tenure. They're spending money on "redundant" expertise that doesn't show immediate ROI.
In short-termist cultures, that's career-limiting behaviour.
This isn't an argument against automation. AI and automation, deployed intelligently, can amplify human capability rather than replace it. The distinction is intent: are you using these tools to augment expertise and accelerate learning - or to eliminate the need for expertise altogether? One path builds a competitive moat. The other empties the coffers.
The Two Sides of the Same Coin
In Part 1, we examined how organisational cultures drift into mediocrity through human choices: strategic cowardice, the Bozo Explosion, normalisation of deviance. That drift happens through laziness, fear, and the seductive comfort of "good enough."
This - the technological acceleration - happens through what feels like its opposite: ambition, efficiency, progress.
Organisations adopt AI and automation believing they're gaining capability, not recognising they're systematically eliminating it.
But the destination is identical: a hollowed-out organisation that looks functional on spreadsheets but lacks the deep expertise to handle the storms that inevitably arrive. Mediocrity through cultural drift and mediocrity through technological dependence - two paths to the same shipwreck.
The cultural path feels like comfort. The technological path feels like innovation.
Both empty the coffers.
Both leave you helpless when the weather turns.
The Choice You're Making Right Now
If you're a senior leader reading this, here's the state of play: you're already making the choice.
Every time you approve AI adoption without investing in the foundations, such as protecting apprenticeship, you're choosing.
Every time you celebrate cost savings from automation without measuring capability erosion, you're choosing.
Every time you incentivise efficiency over resilience, you're choosing.
You're choosing to empty the coffers.
Most leaders reading this will nod, agree with the diagnosis, and change nothing. Because changing course is expensive, politically risky, and won't show results within their tenure.
That's fine. Those organisations will become the Silent Shipwrecks of 2030 - case studies future leaders use to explain how a generation of executives optimised themselves into obsolescence.
But some leaders will read this and make a different choice.
They'll protect apprenticeships while adopting automation. They'll pay for "redundant" capability that doesn't show immediate ROI. They'll measure institutional knowledge as carefully as they measure quarterly costs.
They'll choose to be Storm-Tested.
Not because it's easy. Because when the weather turns - and in business, the weather always turns - they'll be the ones still sailing while everyone else is sinking.
The charts you throw overboard in calm weather are precisely the ones you'll desperately need when the storm hits.
Knowledge was never the constraint. Courage was.
When your successors conduct the post-mortem, will they find empty coffers - or preserved charts?