The Cognitive Load You Didn't Trade Away
The shift from writing code to managing AI agents doesn't reduce mental effort. It replaces the satisfying strain of building with the draining vigilance of supervision.
Most people think AI coding agents make developers’ lives easier. The pitch is straightforward: instead of writing code yourself, you supervise a fleet of AI agents that write it for you. You focus on the big picture. The tedious stuff disappears. You become, essentially, a manager.
This is at best half true. And the half that’s wrong is the important half.
Here’s what’s actually happening. Developers who adopt agentic workflows are reporting higher productivity. They ship more features, close more tickets, generate more code. But Business Insider reports that these same developers are also facing increased burnout risk. Think about that for a second. If managing agents were genuinely lighter cognitive work than writing code, burnout would be a strange outcome. You don’t burn out from doing less.
Something doesn’t add up. So let’s figure out what’s really going on.
The trade
When you write code yourself, the cognitive demand is intense but focused. You hold a problem in your head, you work through it, you produce a solution. It’s deep work in the classic sense. One problem, sustained attention, flow state if you’re lucky.
When you manage AI agents, the shape of the work changes completely. Andrej Karpathy no longer writes code. Instead he spends hours directing AI agents. Hours. The bottleneck, as he describes it, has shifted from coding skill to “human skill in effectively directing AI systems.”
That’s not a reduction in cognitive load. That’s a relocation.
You trade the deep-focus load of implementation for the distributed-attention load of orchestration. Instead of thinking hard about one thing, you’re thinking medium-hard about five things simultaneously. You’re context-switching between parallel workstreams. You’re reviewing code you didn’t write, which means holding both what you intended and what the agent produced in your head at the same time. You’re prompt-engineering. You’re catching mistakes that look plausible but aren’t.
And here’s the thing about that second kind of cognitive work: it might actually be worse.
Why verification is harder than it looks
There’s an asymmetry most people miss. Writing code and reviewing code are not symmetric tasks. When you write code, you build understanding incrementally. Each line follows from the last. When you review code someone else wrote, especially code an AI generated, you have to reconstruct the reasoning behind it. You’re reverse-engineering intent from output.
Christian Helle’s analysis of agentic engineering identifies the key pitfalls: challenges with context management, ensuring code quality, and avoiding over-reliance on AI outputs that may lack robustness. That’s a polite way of saying: the code looks right but might not be. And figuring out which parts aren’t right requires exactly the kind of deep understanding you were supposedly freed from needing.
This gets worse as you scale up. One agent producing one function? Easy to check. Five agents working on different parts of a system simultaneously? Now you’re doing architecture review, integration testing, and quality assurance on code you didn’t write, across contexts you’re rapidly switching between. The verification burden doesn’t scale linearly. It compounds.
What didn’t get offloaded
The New Stack makes a crucial distinction: agents write code, but they don’t do software engineering. Design, architecture, maintenance, trade-off reasoning, understanding how systems evolve over time: all of that stays with the human. And for experienced developers, that was always the hard part. The cognitively expensive part of building software was never typing the code. It was figuring out what code to write and how it fits into everything else.
So what exactly got offloaded? Mostly the translation of intent into syntax. Which, for senior engineers, was the easiest part of the job. You’ve taken the lightest cognitive load off the developer’s plate and replaced it with a new load of orchestration, verification, and context management that might be heavier.
For junior developers, the picture is even worse. They were building mental models through the act of writing code. Take that away and replace it with the task of evaluating code they don’t fully understand, and you’ve removed the learning mechanism while adding a harder evaluation task. That’s not a trade anyone should want to make.
The honest answer
People want to hear that AI agents make everything easier. The agentic IDE vision where orchestration is seamless and coordination overhead approaches zero is compelling. And maybe we’ll get there. Better tooling will help. But right now, in 2026, with nascent orchestration and immature verification systems, the honest answer is: nobody actually knows whether the net cognitive load is lower.
No one has measured it. Not with validated instruments, not with controlled studies, not with anything more rigorous than “I feel more productive” — which is a measure of throughput, not cognitive cost. You can increase throughput while also increasing strain. Ask anyone who’s ever taken on a second job.
What we do know is this: the nature of the cognitive demand has fundamentally changed. Developers are trading deep focus for distributed attention, implementation for orchestration, building for verifying. Whether that trade is favorable depends on who you are, what tools you have, and what kind of thinking exhausts you.
But the one thing you should not believe is that the thinking got easier. It just got different. And different, in this case, might be harder in ways we don’t have the vocabulary for yet.
The productivity numbers look great. The burnout numbers should make you pause.
References
AI Coding Boom Shifts Software Developers Toward Management (Business Insider)
The Era of Agentic AI-First IDEs with Google Antigravity (Medium)
Agents write code. They don’t do software engineering. (The New Stack)
From AI-Assisted Code Completion to Agentic Software Engineering (Christian Helle’s Blog)
