March 26, 2026

The Quality Valley

Most developers are using AI tools they can’t effectively direct, and we’re about to find out what that costs.

By Sachin Kundu

Ninety percent of developers are already using AI coding tools. That should terrify you.

Not because the tools are bad; they’re remarkably good at generating code. The problem is that most developers haven’t developed the skills to direct them effectively. We’re in the middle of a massive competence gap, and code quality is paying the price.

Think about what this means. You have an entire generation of developers who learned to code by writing code. Now they’re expected to manage AI agents that generate code for them. These are completely different skills. It’s like expecting someone who’s good at driving to immediately excel at air traffic control.


The evidence is already mounting. Enterprise teams are struggling with context management, code robustness, and over-reliance on AI outputs. These aren’t edge cases. These are fundamental challenges that happen when you don’t know how to properly direct an AI system.

But here’s what most people miss: the quality problems aren’t just about bad code generation. They’re about a breakdown in the feedback loop that normally catches problems.

When you write code yourself, you understand every line. You know why you made each decision. You can spot when something feels wrong. When an AI generates code for you, that intuitive understanding disappears. You’re reviewing someone else’s work, except that “someone” doesn’t think like a human and doesn’t make human mistakes.

I see this constantly in the work we do at Voxdez. Teams will show us AI-generated code that looks perfectly reasonable on the surface but has subtle architectural problems that will cause issues months later. The AI optimized for the immediate requirement without understanding the broader system constraints.

The skills needed to catch these problems are different from coding skills. You need to be able to specify requirements precisely. You need to understand how to validate AI outputs systematically. You need to know when to trust the AI and when to second-guess it. Most developers are learning these skills on the fly, using production systems as their training ground.

This creates what I think of as the Quality Valley, a period where overall system reliability actually gets worse before it gets better. Teams get the productivity boost from AI tools, but they haven’t yet developed the discipline to use them safely.

The valley is deeper than most organizations realize. Developers are managing multiple AI agents in parallel, which multiplies the coordination complexity. Each agent is generating code based on its local understanding, but nobody has a clear picture of how it all fits together. The integration points become sources of subtle bugs that are hard to trace back to their origins.

What makes this worse is that the Quality Valley is largely invisible to management. AI tools make developers feel more productive because they’re shipping features faster. The quality problems show up later, often in production, and they’re hard to attribute directly to AI tool usage. A subtle context-handling bug doesn’t announce itself as “this was caused by poorly directed AI agents.”

Some teams are navigating this better than others. The ones that succeed are treating AI adoption like any other engineering discipline. They’re developing systematic approaches to prompt engineering. They’re building validation frameworks for AI outputs. They’re combining AI assistance with proper human oversight. But these teams are the exception.
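A validation framework doesn’t have to start big. As a hedged sketch, not a description of any particular team’s setup, here is one hypothetical pre-review gate: a check that parses AI-generated Python and flags violations of a simple, explicit policy before a human ever reviews the diff. The policy rules here (no `eval`/`exec`, no bare `except`) are illustrative placeholders for whatever a real team would codify.

```python
import ast

# Illustrative policy: patterns this hypothetical team refuses to merge
# without explicit human sign-off.
BANNED_CALLS = {"eval", "exec"}

def validate_snippet(source: str) -> list[str]:
    """Return a list of policy violations; an empty list means the snippet passes."""
    problems = []
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        # Code that doesn't even parse fails immediately.
        return [f"does not parse: {err.msg}"]
    for node in ast.walk(tree):
        # Flag calls to banned builtins by name.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                problems.append(f"banned call: {node.func.id}")
        # Flag bare `except:` clauses, which silently swallow errors.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            problems.append("bare except clause")
    return problems
```

The point isn’t these particular rules. It’s that the policy is written down and enforced mechanically, so "did anyone check this?" stops depending on whoever happened to review the pull request.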

The default path, which is what most teams are following, is to start using AI tools because they boost immediate productivity, without investing in the infrastructure to use them safely. This works fine for simple tasks but breaks down as the systems get more complex.

Here’s what’s particularly insidious: the Quality Valley gets deeper the more successful your initial AI adoption appears. Teams that see big productivity gains early tend to push harder on AI usage before they’ve developed the competence to handle the complexity. They end up with systems that are increasingly dependent on AI-generated code that nobody fully understands.

The way out isn’t to avoid AI tools; that ship has sailed. It’s to recognize that effective AI direction is a distinct engineering skill that needs to be developed systematically. This means treating the transition like any other major technology adoption, with proper training, gradual rollouts, and robust validation processes.

But most organizations won’t do this. They’ll continue using AI tools in an ad-hoc way until they hit a quality crisis that forces them to get serious about governance. The question isn’t whether this will happen; it’s whether you’ll be ready when it does.

The Quality Valley is real. The only choice is whether you’ll navigate it deliberately or fall into it by accident.
