April 9, 2026

The Paradox of AI Adoption

Developers continue using tools they don't trust because the alternative feels worse than the problem.

By Sachin Kundu

The numbers don’t make sense. 84% of developers use AI coding tools, but only 29% trust them. AI-generated code contains 1.7 times more bugs than human-written code. Developers spend nearly a quarter of their workweek verifying AI output. And 96% of developers don’t trust the correctness of AI-generated code.

So why do they keep using these tools?

The conventional explanation is that AI tools provide net benefits despite their flaws: developers are making a rational trade-off between speed and quality. But the evidence suggests something stranger.

This isn’t adoption driven by utility. It’s adoption driven by fear.

Most people assume technology adoption follows a simple pattern: tools get better, people notice, people adopt. But that’s not what’s happening here. AI coding tools demonstrably make most developers less productive, not more. The verification overhead is real. The bugs are real. The productivity promises haven’t materialized.

What’s actually driving adoption is the fear of being left behind.


Think about what it means to be a developer who isn’t using AI tools when 84% of your peers are. You’re not just choosing a different workflow. You’re choosing to appear ignorant of the future. You’re the person still writing code “the old way” while everyone else is experimenting with “the new way.” Even if the new way doesn’t work very well yet.

This creates a perverse dynamic. The worse AI tools work, the more pressure there is to use them. Because if their trajectory were predictable, you could simply wait until they matured. But if they might suddenly get much better, you need to start learning now, while you still have time.

I see this constantly in the work we do at Voxdez. Teams adopt AI coding tools not because they solve immediate problems, but because they’re afraid of falling behind on whatever these tools might become. The adoption decision isn’t “Does this make me more productive today?” It’s “Can I afford not to understand this?”

The verification overhead isn’t seen as evidence that the tools don’t work. It’s seen as the price of learning. The bugs aren’t proof that AI is unreliable. They’re proof that you need to get better at prompting, at verification, at working with AI systems. The problem isn’t the tool, it’s your skill with the tool.

This reframes everything.

Developers aren’t adopting AI coding tools despite the fact that they don’t work very well. They’re adopting them because they don’t work very well yet.

The current problems feel temporary. The risk of not understanding AI feels permanent.

But there’s another layer. AI tools often produce plausible-looking but faulty code. This isn’t just a technical problem, it’s a psychological one. When something looks right, it’s harder to see why you shouldn’t trust it. When the output is obviously wrong, you discard it immediately. When it’s subtly wrong, you start to doubt your own judgment.

The plausibility creates a kind of learned helplessness. You know the code might be wrong, but you can’t always tell where. So you either have to verify everything (which defeats the purpose) or trust things you know you shouldn’t trust (which creates anxiety). Neither feels sustainable, but stopping feels worse.

This connects to something I wrote about before in The Cognitive Load You Didn’t Trade Away. The mental effort doesn’t disappear when you start using AI tools, it changes shape. Instead of thinking about implementation, you’re thinking about verification. Instead of debugging your own mistakes, you’re debugging someone else’s. The load shifts, but it doesn’t necessarily decrease.

What’s particularly insidious is that the adoption creates its own momentum. Once 84% of developers are using AI tools, not using them becomes a contrarian position that requires justification. The default switches from “Why should I use this?” to “Why am I not using this?”

And defaults are powerful.

The real question isn’t why developers adopt tools they don’t trust. It’s whether this adoption pattern can persist indefinitely. Can you have an entire industry built on tools that most practitioners believe are unreliable?

Probably not. What’s likely happening is that we’re in a transitional period where the tools are improving just fast enough to maintain hope, but not fast enough to resolve the fundamental trust issues. Developers are betting that their current pain will be temporary: that the verification overhead will decrease, that the quality will improve, that the productivity gains will eventually materialize.

They might be right. But they might also be wrong. And the cost of being wrong isn’t just wasted time, it’s an entire generation of developers who learned to code in collaboration with systems they fundamentally don’t trust. That’s a different kind of programming culture than the one we’ve had before.

The paradox of AI adoption isn’t really about AI. It’s about how people respond to uncertainty about the future. When you can’t predict whether something will become essential, the safest strategy is to assume it will. Even when the current evidence suggests it won’t.

