The cost of forgetting what systems need

Feb 21, 2026

AI should never replace your interns or juniors. If you don't train juniors today, you won't have anyone who can fix the AI's catastrophic errors in five years.

For nearly a decade, I've worked in systems change: work that demands we anticipate not just the intended consequences of our interventions, but more importantly, the unintended ones. Watching the gap between AI's promise and its reality unfold gives me a familiar mix of déjà vu and unease. I've seen this pattern before in other complex systems.

This video by Mackard, “Why Replacing Developers with AI is Going Horribly Wrong,” reveals something troubling:

  • 95% of generative AI pilots in the enterprise sector delivered zero measurable return despite $40 billion invested

  • 45% of AI-generated code contains OWASP top-10 security vulnerabilities

  • Seasoned engineers are 19% slower when using AI tools because they've become “AI babysitters”

  • AI-generated pull requests contain nearly double the issues of human-written code

These aren't teething problems. They're symptoms of a deeper misunderstanding.

When we treat AI as just a tool instead of a system operating within wider systems (our organisations, our markets, our society), we focus on whether code runs and lose sight of whether it should run, how it fits together, what trade-offs it makes, and why the system exists at all.

What happens when we forget the systemic dimension

We take shortcuts that become too expensive in the long run: The video puts a number on the technical debt: 61 billion work days needed to fix what we're building today. But the real debt is human: a serious deficit in the expertise we're not developing, the judgment we're not cultivating, and the accountability we're not maintaining.

We stop investing in people: We let AI replace entry-level roles and hollow out the pipeline that makes expertise possible. We outsource learning to machines that can't actually learn in any meaningful sense. Then we wonder why we have no one who understands the systems we've built.

Our products become indistinguishable: AI-generated code “tends to be simpler, more repetitive, and dangerously less structurally diverse.” So does AI-generated strategy. When everyone's using the same tools to generate the same patterns, we lose the diversity that makes systems resilient.

We downplay accountability, as if that makes responsibility disappear: It doesn't. When the Antigravity AI deleted 2TB of production data in seconds, its apology was worthless. Responsibility doesn't vanish because we've delegated decisions to machines. It just becomes harder to trace.

What we're actually learning

The lesson here is not “slow down AI” or “ban the tools.”

The lesson is to stop mistaking automation for intelligence. These are not the same thing. It seems to me that the future belongs to those who understand the difference.

There are no shortcuts to robust systems, just as there are no shortcuts to robust organisations. AI is powerful precisely because it is not accountable, not contextual, not moral. That's not a bug. It's the nature of the technology.

Which means the burden falls back on us: to pay attention to architecture decisions, governance choices, who gets trained, who gets excluded, and who carries the long-term risk when things break.

The companies winning in 2026 aren't the ones who replaced humans with AI. They're the ones who stopped chasing automated solutions and understood that automation without accountability or systems thinking simply delays the reckoning.

AlgoViva exists to help you navigate these conundrums. Our work balances ethics with organisational imperatives across systems, strategy, and culture, ensuring that AI and other emerging technologies serve people and the planet while still optimising ROI.