I got drawn into the world of Alan Kay’s ideas because I was going deep into functional programming. This was in 2007-2008. I had been programming in Lisp for years and was going through a radical transformation. Part of that process was understanding what I had been taught and how it worked—or whether it worked at all. (BTW, this recent talk does an amazing job of tracing the history of OOP.)
Smalltalk exposed a simple syntax for writing software. I won’t go into too many details, but suffice it to say that it was designed for kids. Building for children is a constraint which, when overcome, leads to a system that adults can use as well. The simple syntax and semantics opened the world of programming to new people. It was democratizing. And it was powerful. In 180 pages of code and 50 classes, the team built an OS, a word processor, a paint application, an animation studio, and more. And many of the ideas, usually oversimplified, became the dominant way people program today: with classes and methods.
The thing I’m most afraid of is that we won’t see another leap forward in programming language design. As AI generates a greater portion of code, there won’t be any need for new ways to express programs. The AI doesn’t care how it’s expressed (or at least it doesn’t care about the same things humans do). Programming won’t get easier for humans.
One counterargument to this is that we have not seen any new ways of programming in about 50 years anyway. It’s not the AI’s fault. OOP seems to have been the last major “paradigm.” Maybe there isn’t any more to do? I admit that it’s possible, but it breaks my heart to think there isn’t something better yet to be invented. And it seems so myopic and arrogant to think we’ve figured it all out. I err on the side of hope and humility.
Another argument is that AI is the new paradigm. Instead of typing JavaScript, you type English and the AI types JavaScript. That doesn’t quite hold up, at least not yet, because we do still read the generated code. We have to. AI doesn’t get it right all the time, so you have to review the code, even if you get the AI to generate all of it. I accept that as part of an AI workflow, but it doesn’t support the point. It’s not a new programming paradigm in the sense of FP, OOP, procedural, or logic programming. Those paradigms provide basic constructs that are composed and named, allowing you to create new constructs. AI is much more like a new way to edit code. The code, though, is the same.
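To make that concrete, here is a minimal sketch (my own toy example, not one from the essay) of what “basic constructs that are composed and named” means, shown in JavaScript for two paradigms:

```javascript
// OOP: the basic constructs are classes and methods.
// You compose them by instantiating objects and sending messages,
// and you name new constructs by defining new classes.
class Counter {
  constructor() { this.n = 0; }
  increment() { this.n += 1; return this; }
  value() { return this.n; }
}

// FP: the basic constructs are functions.
// You compose them by application, and you name new constructs
// by binding composed functions to names.
const increment = (n) => n + 1;
const compose = (f, g) => (x) => f(g(x));
const incrementTwice = compose(increment, increment);

const c = new Counter();
c.increment().increment();
console.log(c.value());         // 2
console.log(incrementTwice(0)); // 2
```

Either way, the paradigm gives you a small vocabulary of parts plus rules for combining them, which is exactly what “prompt the AI in English” does not give you.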
It’s doubtful that a new language, tailored to AI’s strengths, would be economically viable. AI is currently trained on human-written programs. Humans have written billions of lines of code in JavaScript and the other popular languages. These are the ones that the AI “understands” the best, so these are the ones that it will prefer to generate and make fewer errors with. But at best, it will generate code like the average industry programmer, not like the ones who invented new paradigms. So we won’t see AI inventing its own new paradigm.
The one hope for this is something similar to self-play that we see in AlphaGo Zero. Perhaps an AI could be given the syntax of the language and be asked to solve increasingly hard problems. It would learn how to generate good code without human-created examples. But a human would probably have to invent the language the AI was learning. What would that language look like?
Unfortunately, I don’t think it would be very friendly for humans. AIs might as well generate machine code! I’d rather have the AI learn the human thing than humans have to learn the AI thing.
But perhaps more advanced AIs will learn new human-centered languages quickly. Perhaps you could feed one the docs and some example programs, and, along with a REPL, it could figure the language out. And so there’s nothing to worry about. Except that a new language now faces bigger challenges than before. Previously, if a new language was 10x more productive than JavaScript (or whatever the most popular language was), it had a shot. But if you’re already 10x faster with the AI using JavaScript, your language will have to be 10x that combination, using an AI not trained on it.
So that’s my biggest fear: The use of AI to write code is going to stunt the development of new programming paradigms. In the worst case, this is the end of paradigms as we know them, at least in industry. I hope it doesn’t happen. I still hold out for finding new and better ways to express ideas. I just don’t expect AI to help with it.
If a paradigm is on the level of (say) functional programming, one might ask how many paradigms can exist before you have essentially covered the entire PL space. There are only so many ways you can fruitfully carve it up.
If the question is more like what it would take to make another PL as successful as JavaScript (noting that any new paradigm obviously would need to ride along some PL demonstrating it), the problem is probably more about the scale it would take to make a new PL a success. New PLs arrive all the time (one such was even mentioned here in the comments), but achieving success at scale is getting increasingly hard. Big companies like Microsoft, Google, and Apple can do it, since they control vast platforms on which they can (more or less) dictate the lingua franca. It is close to impossible for a research group to do today what the Smalltalk people were able to do back in the 80s, simply because there is so much software around already that achieving anything but niche success is virtually impossible.
What you wrote, Eric, made me think: AI writing in human programming languages exists for the same reason humanoid robots do. Of course, there are also industrial robots, which look nothing like humans. Similarly, AI might make its own languages, at which point we'll lose any oversight.
Anyway, just like you, I'd be sad if we humans ceded programming languages entirely to the robots. On the flipside, it may turn into hobby research, freed from having to stick with legacy, clunky languages. As I find with AI, any speculation and its opposite may become true at the same time.