Nouveau Shamanic Programming
ChatGPT won't replace programmers, because programmers aren't technicians, they're shamans.
Hey there, newsletters. I'm Nat Bennett (programmer, summoner) and you're reading Simpler Machines, a newsletter about how and why to make software.
I've got a couple of new things before we get into today's post.
One, I've turned on a couple of Ghost features: feedback and comments. So if you like this post you can hit a button at the bottom of it to say so. And if you have something to say about it, you can log onto the website and write it.
Two, I've updated the Vim guide from a few weeks ago. Added a config from Samir Talwar of Monospaced Monologues, who very kindly wrote on Mastodon:
I found your post very helpful and it was great to know you don't have to set a lot of settings to get something usable any more. I felt brave enough to delete stuff that's been in my config since... 2008?
Let me know if you too have updated your Vim config with help from that post. Or if you have anything Vim-related that you're stuck on. I've got another round of Vim-improvement on my to-do list soon, and I'll probably write about it here.
There's a bit in The Unbearable Weight of Massive Talent (great film, btw) where Nick Cage refers to his "nouveau shamanic talent." This is how he describes his acting style in real life, too; he first used the phrase while promoting, of all things, Ghost Rider.
Say you're playing a demon biker with an ancient spirit. What power objects could you find that might trick your imagination? Would you find an antique from an ancient pyramid? Maybe a little sarcophagus that's a greenish color and looks like King Tut? Would you sew that into your jacket and know that it's right next to you when the director says "action"? Could you open yourself to that power?
"Could you open yourself to that power?"
I've been thinking a lot this week about programming. I've been doing a lot of it. I'm working on a project management tool in Phoenix, codenamed "Zero," and I've been getting deep into the coding zone. Zero has some moderately sophisticated "synchronize state across browser sessions" features. Getting that behavior right, and understanding what's happening when it doesn't work the way I expect, has required some deep communion with the machine.
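For flavor, here's roughly what that kind of feature looks like in Phoenix. This is a minimal sketch of the standard LiveView-plus-PubSub pattern, not Zero's actual code; the module, topic, and event names here are placeholders.

```elixir
defmodule ZeroWeb.BoardLive do
  use Phoenix.LiveView

  # Every browser session runs its own LiveView process; they stay in
  # sync by all subscribing to the same PubSub topic.
  @topic "board:updates"

  def mount(_params, _session, socket) do
    if connected?(socket), do: Phoenix.PubSub.subscribe(Zero.PubSub, @topic)
    {:ok, assign(socket, tasks: [])}
  end

  def render(assigns) do
    ~H"""
    <form phx-submit="add_task"><input name="title" /></form>
    <ul><li :for={task <- @tasks}><%= task %></li></ul>
    """
  end

  # One session makes a change and broadcasts it...
  def handle_event("add_task", %{"title" => title}, socket) do
    Phoenix.PubSub.broadcast(Zero.PubSub, @topic, {:task_added, title})
    {:noreply, socket}
  end

  # ...and every subscribed session, including this one, applies it.
  def handle_info({:task_added, title}, socket) do
    {:noreply, update(socket, :tasks, &[title | &1])}
  end
end
```

The hard part isn't any single function here. It's that each browser tab is a separate process, so "synchronize state across sessions" really means "get a crowd of independent processes to agree about the world," and that's where the communion comes in.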
I've been thinking, too, about large language models. You know: ChatGPT, Bing, Bard, and all that. There's been a lot of news lately, and I've been chewing on it.
I've seen a bunch of takes recently about, basically, what's going to happen to the world as the marginal cost of programming drops to zero. Usually there's an example of some working code the author has written, despite never having written code before! There's often a note of glee at the idea that those darn programmers have finally automated themselves out of a job.
What I've seen written by programmers has been more restrained. Some of them are excited about the potential for tools based on these models to improve programmer productivity, comparing them to existing editor tools, or to Amazon Web Services. Some of them are concerned about the damage people are going to do with these tools before we figure out that they don't work very well.
No working software person I know seriously thinks that ChatGPT and its cousins are going to, or even can, replace human programmers.
I'm a little suspicious of that opinion, since it fits neatly into "a man can't understand something that his paycheck depends on him not understanding." But I basically share it. So I've been thinking: why?
The usual answer is that their output is too buggy and too mysterious: that they don't actually save you any time, because whatever time they save you typing, you pay back in debugging. I don't think this is a good reason.
Another answer is that whatever demand for programmers tools like ChatGPT displace will be paid back as demand for more programming. This is a better answer, but it's also kind of a cheat. Even if it's correct, it's possible that this would fundamentally change programming into an activity that's not really recognizable to coders working today.
The real reason that ChatGPT and its cousins can't replace human programmers is this:
Programmers aren't technicians. Programmers are shamans.
It's not that the tools aren't buggy, or that their output isn't currently pretty bad. They are, and it is. I stopped using GitHub Copilot last week. I had been playing around with it (I like tools that make me faster at writing code!) but ultimately its output is too good at being plausible without being correct. I tricked myself with it a few times, and wasted way more time than it could possibly have saved me. I finally shut it off and cancelled the subscription after it appeared to have wrapped a bit of config in an if statement for me, but had actually generated a subtly different, non-working config inside the if statement, and I burned like two hours trying to figure out what was going on.
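To give a sense of the failure mode, here's a reconstruction, not the actual config from that afternoon; the app name and keys are made up.

```elixir
# config/runtime.exs
import Config

# What I wanted: take the existing, working config and wrap it in an
# environment check. The original was something like:
#
#   config :zero, Zero.Repo,
#     url: System.get_env("DATABASE_URL"),
#     pool_size: 10

# What a completion tool can plausibly hand you instead. At a glance
# it reads as "the same config, wrapped in an if," but :url has
# quietly become :database_url, a key Ecto never looks at, so the
# setting is silently ignored and things only break at runtime.
if config_env() == :prod do
  config :zero, Zero.Repo,
    database_url: System.get_env("DATABASE_URL"),
    pool_size: 10
end
```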
So that's why I'm not using them now. But I still think these tools are going to become part of my work at some point, maybe even pretty soon. The interface will improve, I'll get better at using them, and they'll learn more about the specific languages that I'm using.
I'm more convinced by the "more programming creates more demand for more programmers" argument, but it's still insufficient. We've been through a few rounds of this kind of thing: compilers didn't replace programmers; higher-level languages only increased the number of programmers; reusable APIs and frameworks decreased the amount of repetitive code programmers write but, again, just increased the demand for application development; no-code systems haven't replaced programmers; and so on. This is the usual way that automation works: it's actually rare to unheard-of for automation to completely replace human labor.
But automation does sometimes transform human labor unrecognizably. And systems like ChatGPT can, in theory, satisfy infinite demand for programming. So maybe they completely wipe out programming as a profession while creating an entirely new field in something I can't anticipate now. That's possible.
But, like I said, I don't believe that's what's going to happen.
Because programmers are shamans. And ChatGPT isn't a shaman.
People who don't program, and plenty of people who do, sometimes think that the real work in making a program is in deciding what it should do. "Gathering and specifying requirements." The actual act of programming is mere technical work. Fancy typing. People with this view always see the replacement of human programmers just around the corner. Sooner or later, surely, some technology will come along that directly connects the requirements specifier and the computer, and they'll no longer need that pesky human being in the middle.
This mistakes the basic nature of programming. Programming is something much weirder than that.
Because computers are alien. They "think" in a very different way from most humans. Input in, output out. Clear. Regular. They don't have any ability to understand what you meant. They only do exactly what they're told. Some humans can think this way a little bit, and it's more natural for some of us than for others, but it's still weird.
Humans have the ability to understand symbols: a thing that means another thing, beyond merely corresponding to it the way a relationship in a table does. Someday, maybe, we'll be able to build systems out of computers that also have this capability, but large language models fundamentally don't have it. They're very good at translation, but they can't do anything that requires internal coherence between concepts.
(Ironically, this makes them really bad at math. ChatGPT has gotten a bit better at this, because now it can recognize a math-shaped problem and route it to a specialized subsystem, but in the early days it basically couldn't do addition, and it still struggles with problems in this category. The answer to a mathematical question doesn't change depending on what text came before or after it, but a language model's output does.)
Programming, basically, requires letting an alien spirit into your mind. You have to pretend to be a computer. You're a translator from the human world of meaning to the computer world of functions. You channel the computer-mind and make it accessible to the human-mind.
This is why pair programming works, by the way: you're sharing the load with a co-shaman. Test-driven development, too, is basically a circle of salt. You reduce the difficulty of containing and comprehending the alien entity by capturing parts of it inside tests. And this is why you should never interrupt a solo programmer: you'll disrupt the summoning.
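And here's the circle of salt at its smallest, as an illustration I'm making up for this post rather than pulling from a real suite: a test that traps one piece of the machine's alien behavior where you can keep an eye on it.

```elixir
defmodule CircleOfSaltTest do
  use ExUnit.Case

  # The machine's arithmetic is not your arithmetic. Capture that
  # fact in a test and you no longer have to hold it in your head.
  test "0.1 + 0.2 is not 0.3, as far as the machine is concerned" do
    refute 0.1 + 0.2 == 0.3
    assert_in_delta 0.1 + 0.2, 0.3, 1.0e-9
  end
end
```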
So maybe large language models will change the way I write code someday soon. Maybe they'll help me write a lot more code a lot faster. Maybe I'll spend less time communing with the particular spirit of the JVM or the BEAM, and more time communing with ChatGPT itself.
But the world will still need shamans.