AI Psychosis

When I open my X (formerly Twitter) feed it reminds me of the pandemic days. The entire conversation is dominated by a single topic, with the authority figures marching in lockstep. I’m of course talking about how the real developers are no longer writing code. Oh, sorry, that’s old news. The real developers are no longer reading code. And how could they? They are committing 10,000-plus lines of code a day after all. Ain’t no one got time to read all that. Oh, and no one will have a job in six months. (Including these people, I guess? Or maybe they are the only ones who will have a job?)

I have my own opinions about whether this AI hysteria is warranted, but I’ve only been chatting with AI and never used any coding agents or the vaunted Ralph method, so it’s possible my opinions are off base. I can’t help but note, though, that these companies generously warning us about the future are spending millions purchasing software and hiring engineers, all while they struggle to ship working products. Instead, what I’d like to discuss is the argument for giving in to the vibes and letting AI take the wheel.

The reasoning I’ve seen the acolytes push goes something like this: “No one knows how their code works already, so it doesn’t matter if AI writes it or not.” I think what they are trying to say is that high-level programming languages, such as JavaScript and Python, are already an abstraction over what actually runs (machine code), so we might as well go one abstraction further: just say in English what we want to build and let the AI translate it to a high-level language (or even better, directly to machine code).

This argument does sound compelling. It’s just the natural evolution after all, and you can’t stop evolution! But the argument can be rephrased to “if you don’t know everything you might as well know nothing”. Sure, I don’t know exactly how my code is executed by the computer, but I know what will happen if I make a change in a specific function, and how many other changes in other functions will need to be made as a result. And for me that is as good as knowing everything.
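To make that concrete with a hypothetical sketch (the function names here are mine, invented for illustration, not from any real codebase): I don’t know how the interpreter executes this, but I know exactly what breaks when I change one function, and which callers have to change with it.

```python
# Hypothetical sketch: knowing the ripple effect of a change,
# even without knowing how the machine executes the code.

def parse_price(raw: str) -> float:
    """Convert a string like "$19.99" to a dollar amount."""
    return float(raw.strip().lstrip("$"))

def order_total(raw_prices: list[str]) -> float:
    """Sum a list of raw price strings."""
    return sum(parse_price(raw) for raw in raw_prices)

# If I change parse_price to return cents as an int instead of
# dollars as a float, I know order_total (and every other caller)
# must change too. That knowledge is the point.
print(order_total(["$19.99", "$0.01"]))  # → 20.0
```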

Some acolytes may view the argument a little differently. We install a lot of packages whose code we never read, so what’s the difference between that and not reading code AI generates? I think the difference is pretty clear: the packages have been used and tested by thousands of people, while the code just generated by the AI agent has only been tested by the agent. Further, the packages typically handle a specific aspect of our software that we want to abstract away. We still know the inputs and expected outputs and how it fits together with the rest of our application. So yes, they are right that it doesn’t matter whether we’ve read the source code. But there’s a difference between not writing, or reading, or remembering having written, a given function, and having no idea how any part of your application works.
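The point about packages can be illustrated with the standard library: I’ve never read the implementation of Python’s `json` module, but I know its contract, its inputs, and its outputs, and that contract has been exercised by millions of users.

```python
import json

# I haven't read json's source, but I rely on its documented
# contract: dumps takes a Python object and returns a JSON
# string; loads inverts it.
payload = {"user": "alice", "items": [1, 2, 3]}
encoded = json.dumps(payload)
assert json.loads(encoded) == payload
```

Not reading a dependency’s source is fine precisely because the interface is the part you do know; AI-generated application code offers no such battle-tested boundary.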

If I had to guess why I keep seeing this argument, it’s because the acolytes don’t know how to code, view this AI revolution as finally leveling the playing field, and need to believe it’s true. And of course the leaders are telling us the AI revolution is here because they desperately need to raise more money and sell their products. So I’m going to keep reading code.