Software delivery will continue to be non-linear
What I think is going on with AI is that what software managers really want is to be able to turn a dial marked "Quality" down and see a gauge marked "Speed" go up. AI agents seem like they could give managers that dial, so they've gotten very excited. I do think that AI offers something closer to that dial than managers have ever had before, but that it still doesn't solve the problem with building that dial, which is that software projects are nonlinear systems with a tendency to collapse into pathological (non-software producing) states.
In other words: The so-called "iron triangle" of price/quality/speed doesn't really exist in software the way people want it to, because fiddling with quality doesn't predictably affect speed or price. Decreasing quality a little bit might increase speed by a little bit, or it might decrease speed by a lot. Or it might have one effect for a little while and a different effect if you stay at that spot for a while.
AI presents some improvements over the traditional methods of going "faster but worse." Because agents don't have memory and don't care about social status, it's theoretically possible to get them to turn quality back up again later after you get them to turn it down. You can also always switch back to more human oversight. They're also pretty good at refactoring and rewriting, so they should theoretically make it easier to dig yourself out of the big hole of "we can't change this one thing without changing five other things," even if you've already fallen into it.
So maybe we're about to enter a glorious new age where the "iron triangle" of price/quality/speed isn't mostly a nasty lie that leads well-meaning managers down into dark forests of one-off customizations and inappropriate architectural choices. I'm skeptical about the odds here, but I do think it's possible.
If that happens I think it'll basically be because some combination of agents and their harnesses gets actually good at refactoring and architecture even without skilled human control. They'll either clean up after themselves as they go along, or when you notice that it's getting harder to add features to your codebase you can tell them, "Hey, could we tidy up around here?" and they'll just do it. And then everyone will get the "slow is smooth and smooth is fast" benefits of doing an appropriate amount of maintenance on their codebases, not just the handful of shops that have invested heavily in that skill tree.
What I think is more likely though is that agents will make software delivery more chaotic and more non-linear, and as an industry we'll largely use them to invent new and interesting ways to suddenly or stochastically fail to deliver software. Obviously I am biased in favor of this outcome because I think that non-linear systems and their failure modes are very interesting, though I'll admit it is generally worse for society than the other outcome.
On that note: If you are having surprising problems because of AI adoption and how it's changing the way your company writes code, I would really like to hear about them. I keep hearing about this in general terms but haven't had as many in-depth conversations about it as I want. Send me an e-mail at nat@ this website.