Navigating the AI Hype: Finding the Middle Ground in Productivity Gains

Like everyone else, I am watching a lot of commentary being posted on this platform relating to all things AI. Given my career and role, some aspects of the conversation are more interesting and applicable to me than others, and I think I am at the point where I need to start sharing my thoughts here. But first I must say: I have lived through a number of hype cycles over the years, and none has been more difficult to make sense of than the current AI discourse. I find it very difficult to separate the signal from the noise. Still, I am of the belief that there is probably a large middle ground between “this technology will render all human labor obsolete within 5 years” and “this is just another hype cycle, and this tech will find a niche like the others that have come before it.” I am hoping I can sort out, and occupy, that middle ground.

For starters, when it comes to spinning up something like an MVP, the productivity gains are insane. Any time you want to stand up some new cloud infrastructure, or some new n-tier app, there’s a lot of “boilerplate” code and work that goes into it. It isn’t the most interesting work, it is very often redundant and reflective of work you have done in the past, and it is not where the real value lies for your client or your project. But it has to be done, and it can be very time consuming. Being able to spin up a UI, web server, and a database in order to quickly iterate on a concept is effortless with the tools I’ve been using. This is not nothing. In fact, this is huge.
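To make “boilerplate” concrete, here is a minimal sketch of the sort of UI-plus-web-server-plus-database scaffold I mean. It assumes Python with Flask and SQLite, and the “notes” table and routes are hypothetical placeholders rather than any specific tool’s output; the point is that an assistant can generate files like this in seconds, and none of it is where the value lies for your client.

```python
# A minimal sketch of AI-generated "boilerplate": a single-file Flask app
# with a SQLite table and a bare-bones UI. The "notes" table and routes are
# hypothetical placeholders for illustration only.
import sqlite3

from flask import Flask, g, redirect, render_template_string, request

app = Flask(__name__)
DB_PATH = "mvp.db"  # throwaway local database for quick iteration

PAGE = """
<!doctype html>
<title>MVP</title>
<h1>Notes</h1>
<form method="post" action="/notes">
  <input name="body" placeholder="New note" required>
  <button type="submit">Add</button>
</form>
<ul>{% for note in notes %}<li>{{ note }}</li>{% endfor %}</ul>
"""

def get_db():
    # One SQLite connection per request, stored on Flask's request context.
    if "db" not in g:
        g.db = sqlite3.connect(DB_PATH)
        g.db.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT NOT NULL)")
    return g.db

@app.teardown_appcontext
def close_db(exc):
    # Close the connection when the request context tears down.
    db = g.pop("db", None)
    if db is not None:
        db.close()

@app.get("/")
def index():
    # Render the single page with every saved note.
    notes = [row[0] for row in get_db().execute("SELECT body FROM notes")]
    return render_template_string(PAGE, notes=notes)

@app.post("/notes")
def add_note():
    # Insert the submitted note, then bounce back to the list.
    db = get_db()
    db.execute("INSERT INTO notes (body) VALUES (?)", (request.form["body"],))
    db.commit()
    return redirect("/")

if __name__ == "__main__":
    app.run(debug=True)
```

Run it with `python app.py`, open a browser, and you are iterating on a concept within minutes. That is the part these tools have made effortless.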

A lot of the demos you might be viewing on YouTube, or that people point to on this or that platform, are these sorts of projects. Don’t get me wrong, I enjoy being able to save this time. Absurd amounts of time. But getting the basic infrastructure in place for a very contrived and very narrow use case is nothing like trying to build and deploy something inside of a complex enterprise for a Fortune 1000 company. There are many reasons why, which could be the subject of a whole series of related posts.

But the point I’d like to make in this post is, based upon my observations so far, the productivity gains you may witness within a particular use case should not be extrapolated across every possible scenario. The benefits will not be evenly distributed. Assuming you spin up an MVP/demo, and assuming everyone reviews and understands what is being deployed – a very big if when you are dealing with hundreds, or even thousands, of files in that first commit – there is certainly a point of diminishing returns after those first steps.

For example, once your application requires additional features, complexity will grow. And correspondingly, your productivity gains will become inversely correlated with the complexity that arises within a real-world environment. This is something I have observed over and over again. And if there is a way to avoid this, I have not yet discovered it.

There’s still a garbage-in/garbage-out problem to a certain extent with prompts. Someone with no coding experience might be able to vibe code their way to a simple application. But very quickly, I find myself asking, repeatedly, “how could someone without that experience actually get this right?”

Now, spinning up an MVP is probably 1,000 times faster than if I did it without an AI assistant. But once the changes and complexity increase, those gains drop to “merely” 100 times faster, and eventually to what feels like maybe 10% faster. And there is an event horizon where it starts to feel like the tool is trending towards something more like a very efficient auto-complete. Reaching this point happens more quickly than you might imagine.

The gains are not nothing. And I will take them anywhere I can get them. But I think there needs to be a measured and realistic view of what these tools can do. Throughout the history of AI, a technology that has taken decades to get to this point, the tech has largely been conflated with magic. Much of the discourse I see today does not help clear up the fact that it is not magic, and that it is actually a collection of well-defined solutions, to well-defined problems, with specific limitations. The tech is not an infinite productivity machine. I know, I know, it will improve over time. I’m certain it will. But the AI-assist tool does not *want* to build within Azure or AWS. It will only build what you ask for. (And then it will infer all sorts of things you didn’t ask for, which means you are still going to have to understand and review what you might deploy.) And it doesn’t know who your user is, or whether they want to use the application. And even if you write the perfect prompt to get it to build what you want, you still have to learn whether or not the solution will actually solve a particular business problem and deliver value. If it were magic, I suppose we could automate the solutioning of those things.

I suspect over time we will all become orchestrators. Maybe we will no longer need to memorize and instantly recall the syntax of 8 different programming languages in order to call ourselves polyglot software engineers; instead, we will have to focus on all of the nuance and complexity that will determine the success of a solution. Having consultants who can do the hard work, and who have been doing the hard work all along, will be the thing that makes or breaks you in the end.

Writing the code was never the hard part. It was time consuming, but “all of the other things” upstream of the engineering team’s Jira ticket have always been the hard part. If anything has changed, AI-assisted coding tools just make it that much more obvious. And if you want to leverage the value of these tools, you will certainly need to adopt them. But the real work, and the real value, will be realized well before the prompt is written.