On the last Apropos, we welcomed Christoph Neumann to talk about his new role as the Clojure Developer Evangelist at Nubank. It’s very exciting that the role is in such great hands.
Our next guest is Peter Strömberg. Peter is known as PEZ online. He is the creator of Calva, the Clojure plugin for VS Code. He’s going to demo some REPL-Driven Development with Calva.
AI, Lisp, and Programming
I have a Master of Science in Computer Science, with a specialty in Artificial Intelligence. With the way AI salaries are these days, you’d think I’d be pulling in seven figures. Alas, my degree is from 2008. At that time, people wondered why I wanted to go into a field that had been in a sorry state since the 1980s. Wouldn’t I enjoy something more lucrative like security or databases?
But I liked the project of AI. AI, to me, was an exploration of our own intelligence—how we thought, solved problems, and perceived the world—to better understand how we could get a machine to do the same. It was a kind of reverse-engineering of the human mind. By building minds ourselves, we could come to understand our own.
Lisp and AI have been linked since very early in both fields’ existence. John McCarthy, the inventor of Lisp, also coined and defined the term artificial intelligence, and for decades Lisp was the language most closely associated with the study of AI. Lisp has also been a generator of programming language ideas that seem normal now but were considered weird or costly at the time. Here’s a partial list (with a short Clojure sketch of the last two after it):
Garbage collection
Structured conditionals
First-class and higher-order functions
Lexical scoping
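To make those last two concrete, here’s a tiny, purely illustrative Clojure sketch; the names apply-twice and make-adder are mine, invented for this example:

```clojure
;; First-class / higher-order functions: a function is a value you can
;; pass to, and return from, other functions.
(defn apply-twice [f x]
  (f (f x)))

(apply-twice inc 5)   ;=> 7

;; Lexical scoping: the returned function closes over `n` from the
;; enclosing scope, long after make-adder has returned.
(defn make-adder [n]
  (fn [x] (+ x n)))

(def add-ten (make-adder 10))
(add-ten 32)          ;=> 42
```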
While it’s clear that Lisp has influenced programming languages (even while being ridiculed for the very features those languages later borrowed), what is less clear is how much AI has influenced programming practice. Until recently, it was kind of a joke that AI had been around since the 1950s but hadn’t produced any real results. The other side of the joke is that once something works, it’s just considered programming and not artificial intelligence.
Artificial intelligence has produced so many riches in the programming world. Here is a partial list (with a small sketch of one of them after it):
Compilers (which used to be called “automatic programming”)
Tree search
Hash tables
Constraint satisfaction
Rules engines
Priority queues
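As a taste of one item on that list, here’s a minimal breadth-first tree search in Clojure. It’s only a sketch: the tree representation (a map from node to children) and the names bfs and goal? are assumptions made for illustration, not anyone’s library.

```clojure
;; Minimal breadth-first tree search. `children` is a map from node to
;; child nodes; `goal?` is a predicate; returns nodes in visit order,
;; stopping once a goal node is reached.
(defn bfs [children goal? start]
  (loop [frontier (conj clojure.lang.PersistentQueue/EMPTY start)
         visited  []]
    (if-let [node (peek frontier)]
      (if (goal? node)
        (conj visited node)
        (recur (into (pop frontier) (children node))
               (conj visited node)))
      visited)))

(def tree {:a [:b :c], :b [:d], :c [:e :f]})

(bfs tree #{:e} :a)   ;=> [:a :b :c :d :e]
```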
Let me put it directly: Seeking to understand human thought has been fruitful for software engineering. My interest in Lisp is interlinked with my interest in AI. AI has always been synonymous with “powerful programming techniques.” And I have always seen myself as part of that thread, however small my contribution might be.
The link between AI, Lisp, and programming was so strong 30 years ago that Peter Norvig started the preface of Paradigms of Artificial Intelligence Programming with these words:
This book is concerned with three related topics: the field of artificial intelligence, or AI; the skill of computer programming; and the programming language Common Lisp.
In 2008, when I graduated, Google could barely categorize images. Identifying a cat in a photo was still considered a hard problem. Many attempts had been made, but none came close to human-level accuracy. In 2014, I heard about a breakthrough called “deep learning”. It used internet-scale data and the massive parallelism of GPUs to train huge neural networks on millions of images, breaking accuracy records. It was working. And it was completely uninteresting to me.
Okay, not really completely uninteresting. It tickled my interest in building new things. I could see how being able to identify cats (or other objects) reliably could be useful. But I saw in it nothing of the project of understanding ourselves. Instead, it was much like what happened at Intel.
Nobody really likes the Intel architecture. It’s not that great. But once Intel got a slight lead in market share, it could ride Moore’s Law. Instead of looking for a better architecture, it invested in scaling the transistor down and scaling the number of transistors up. Even the worst architecture gets faster that way, and Intel cemented its lead by investing in better manufacturing processes. Its dominance wound up lasting decades. But computer architecture has languished relative to the growth in demand for computing.
The same effect is at play in neural networks: Instead of investing in understanding how thought works, just throw more processing and more training at bigger networks. With enough money to fund the project, your existing architectures, scaled up, will do better.
These are oversimplifications. There were undoubtedly many minor and some major architectural breakthroughs that helped Intel keep pace with Moore’s Law. Likewise, there have been similar architectural breakthroughs in neural networks, including convolutions and transformers. But the neural network strategy is dominated by scale—more training data, more neurons, more FLOPS.
My whole point is that my research into the history of the field of AI has somewhat inoculated me against the current hype. I don’t think AI will “replace all humans”. And I don’t think AGI (artificial general intelligence) is defined well enough to be a real goal. So where does that leave us? How is AI going to transform programming? Where will all of this end up?
Artificial intelligence has always been a significant part of the leading edge of programming. And its promise has always been far ahead of its ability. In the next few issues, I want to explore what this current hype wave of AI means for us. I don’t like where I see AI going. But I also want to apply some optimism to it because I think a lot of the consequences are inevitable. The world is being changed, and we will have to live in that new world.
Really excited for this series!
Is it not a bit of a stretch to claim that compilers come from the AI field? After all, FORTRAN predates the definition of LISP by a couple of years, and LISP did not get a complete compiler until 1962. In the beginning, people were hand-translating LISP code into FORTRAN to be able to run the programs (as somebody related, I think at the 50th anniversary conference in 2009).