Agentic AI didn't kill developer demand

Many predicted that agentic AI would reduce demand for developers. The logic seemed straightforward: if building software requires less effort, you need fewer people to build it. But that’s not what I’m seeing.

What happened instead is that building became cheaper, so people started building more. More products, more experiments, more internal tools, more side projects, more ideas that would never have made it past the “not worth the effort” threshold before. Cheaper building does not automatically reduce demand for developers. It can increase demand because it increases the amount of software people want to create. Citadel’s analysis makes an important related point: capability growth is not the same thing as instant adoption or replacement. Organizations move slowly. Trust is limited. Deployment is messy. (citadelsecurities.com)

But there’s another shift that matters just as much. Development now feels fundamentally better. Not just faster, but better. Before, building meant constant context switching: write code, run tests, read docs, search errors, debug, try again, break flow, repeat. Now much of that collapses into one session, one stream, one loop of instructions, feedback, and iteration. That smoothness changes behavior.

I’ve noticed this in my own work. I start building new features or entire projects simply because I want to explore an idea. Some of those get used once. Some get abandoned after a few days. Some probably should not have existed at all. In the past, the higher development bar forced prioritization. You had to choose more carefully. You had to think harder about whether something was worth doing. The effort itself was a filter. Now that filter is much weaker.

You can get up and running with very little effort, and you can run multiple projects in parallel. For someone who thrives on quick feedback loops, this is genuinely enjoyable. Working with AI coding agents puts me into a flow state far more easily than traditional development ever did: less friction, less context switching, less resistance.

But this is where the problem starts. Fast building creates the illusion of progress. You can ship features faster and still make the product worse. You can generate code, flows, agents, tools, and experiments at high speed and mostly create bloat. More output does not mean more value. The Adaline post captures this well: speed is not about shipping more; it’s about learning faster. Shipping velocity is not the point. Learning velocity is. Are you getting closer to something users actually want? Are you clarifying what good looks like? Or are you just producing more because now you can? (labs.adaline.ai)

That distinction matters. Because if agentic AI makes software dramatically easier to produce, then we are going to produce a lot more software than deserves to exist. Some of it will be useful. Much of it will be noise. Some will be abandoned. Some will keep running anyway. And once it’s running, it becomes someone’s problem. It needs maintenance, monitoring, debugging, cleanup, migration, ownership.

I don’t think agentic AI is simply reducing the need for developers. I think it’s shifting where the value lies. The scarce skill is becoming less about writing code from scratch and more about judgment: what should exist, what should not, what is actually improving the product, what is just motion disguised as progress, what should be killed early, and what someone will have to maintain later.

Agentic AI removed much of the friction that used to force better decisions. That’s powerful for execution. It’s less helpful for restraint. And if we keep building faster than we can prioritize, evaluate, and clean up, then we may end up with the opposite of the original prediction: not fewer developers, but more demand for developers to manage the growing pile of AI-assisted software we keep creating.

A never-ending snowball.