Alan Kay and his group at Xerox PARC invented the future we are living in now: the GUI, desktop publishing, networking, high-level programming, and more. How did he do it? He’s been giving the method away for years, in bits and pieces. I’ve watched enough Alan Kay talks to stitch it together. And here it is!
Here’s a summary:
Draw a technology curve 30 years into the future.
Pick a point about ten years in the future.
Buy and fake the tech you will have in ten years.
Plan a five-year research horizon.
Experiment your way to that point.
Along the way, do solid engineering, build your own tools, and have taste.
Draw a technology curve 30 years into the future
Take any curve you’re interested in. It might be computing power.
So far out that whatever you think about out there need have no visible way of getting there. It's just something that would be cool to have and/or important to have. So one of the things I thought of was, in 1968, that it was inevitable that we were going to have laptop and tablet computers.
How did he know?
The big whammy for me came during a tour to U of Illinois where I saw a 1" square lump of glass and neon gas in which individual spots would light up on command—it was the first flat-panel display. I spent the rest of the conference calculating just when the silicon of the FLEX machine could be put on the back of the display. According to Gordon Moore's "Law", the answer seemed to be sometime in the late seventies or early eighties.
Another summary of the same incident:
Earlier that year I’d seen Donald Bitzer’s flat-screen display prototype (a 1”x1” square of 16x16 pixels), which had brought forth thoughts of putting the FLEX Machine’s transistors on the back of a notebook-sized display to make a “notebook computer”.
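To make the arithmetic concrete, here is a rough sketch of the kind of back-of-the-envelope calculation Kay describes: project a density-doubling curve forward until the machine you want fits behind the display. Every specific number below (transistor counts, chips, the 18-month doubling period) is an illustrative assumption, not an actual FLEX figure.

```python
# A rough extrapolation in the spirit of Kay's 1968 calculation.
# Every number here is an illustrative assumption, not an actual FLEX figure.

def years_until_feasible(transistors_needed, transistors_per_chip,
                         chips_available, doubling_period_years=1.5):
    """Years until the needed transistors fit in the available chips,
    assuming density doubles every doubling period (Moore's Law)."""
    years = 0.0
    per_chip = transistors_per_chip
    while transistors_needed / per_chip > chips_available:
        per_chip *= 2
        years += doubling_period_years
    return years

# Hypothetical: a desk-sized machine needing ~100,000 transistors, chips of the
# day holding ~100, and room for ~4 chips on the back of a notebook-sized display.
print(1968 + years_until_feasible(100_000, 100, 4))
# -> 1980.0, i.e. "the late seventies or early eighties"
```

The exact numbers don’t matter; the point is that an exponential curve lets you name a year, not just a direction.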
Buy and fake the tech you will have in 10 years
Because of technological improvement curves such as Moore’s Law, which describe how the cost per transistor falls over time, you can do the math backwards: multiply the cost by 2 for every 18 months you go back in time. Everything will be bigger and more expensive, but it will be powerful enough.
You just pay whatever the Moore's Law thing is going to save you later on. You run Moore's Law in reverse so you double the amount you're going to pay for every 18 months you're going to take this thing into the past and be able to use it. And if you do that, you can start planning a little machine that's going to be able to do that, and that machine was the Alto.
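Here is a minimal sketch of that “Moore’s Law in reverse” arithmetic, using the 18-month doubling period from the quote; the dollar figures are purely illustrative assumptions, not the Alto’s actual budget.

```python
# Running Moore's Law in reverse: the premium for building, today, the hardware
# you expect to be cheap N years from now. Dollar figures are illustrative only.

def cost_multiplier(years_ahead, doubling_period_years=1.5):
    """Each doubling period pulled back into the present roughly doubles
    the cost of the same capability."""
    return 2 ** (years_ahead / doubling_period_years)

assumed_future_cost = 2_000   # what the machine might cost a decade from now
years_ahead = 10
today_cost = assumed_future_cost * cost_multiplier(years_ahead)
print(f"~{cost_multiplier(years_ahead):.0f}x premium -> "
      f"about ${today_cost:,.0f} per machine today")
```

The premium looks steep per machine, but it buys the ability to run, years early, the experiments everyone else has to wait for.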
If you can’t buy it yet, fake it. Kay couldn’t buy a notebook-sized flat-panel display, but he could fake it with a CRT designed to be the size and shape of a sheet of paper.
Plan a five-year research horizon
He considers five years to be the psychologically right amount of time to look ahead.
If you take a three year thing and only fund it for two years, it takes ten years. And, if you fund it for three years, it's gonna take five years. But if you fund it for five years, you can usually do it in three. And the reason has to do with the kind of engineering you're actually willing to do early, before you start making commitments.
Again:
You want to have a five-year research horizon because you should never try a research process that's less than five years. Psychologically, it's actually important because you're really going to take the first three years of it. But if you set a three-year horizon, the engineers and scientists will do completely different things. If you give them five years, they'll do the things they should do the first couple of years, and they will save a couple of years because of those things they do first. If you don't give it to them, you've started a ten-year project because a two-year or three-year project can't fit exactly in unless you give it some room.
Experiment your way to that point
We just don't actually understand how to design good user interface yet. So the way you get good user interface is by doing experiments with hundreds of people, and that's exactly what we did at Xerox PARC. We didn't put out stuff that hadn't been tested.
We could try out zillions of experiments without doing any optimization. So we could go out and drink a few pitchers of beer at lunch and come back and do a dozen experiments in user interface and never have to write a line of code that was optimization. Because the Alto would just do it. So it was about 50 times as fast as a timesharing terminal back then.
And doing experiments without having to optimize was huge for us. Not just in the user interface, which is critical, where you're doing hundreds and hundreds of things and trying them out, but also just in programming languages and other areas.
But the other thing that was very, very useful was that if we were willing to optimize something on the Alto, then we were writing code like people were going to write ten years in the future. In other words, we could make an actual application that would be comparable to what was going to happen ten or so years in the future.
In other words, a computer from 10 years in the future is fast enough that your 5-year-out experiments don’t need to be optimized, and if you do optimize, you can reach 10 years out. The point of buying hardware from so far in the future is that it lets you experiment very cheaply.
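That headroom argument can be written down the same way. Assuming performance follows the same doubling curve (an assumption for illustration; Kay’s “50 times as fast” figure compares the Alto to timesharing terminals, not to this formula), the gap between a 10-year-out machine and a 5-year-out target is what lets unoptimized experiments run acceptably:

```python
# Headroom sketch: how much slack unoptimized experiments get when the hardware
# is from ~10 years out but the experiments target only ~5 years out.
# Assumes performance doubles every 18 months; numbers are illustrative.

def speedup(years, doubling_period_years=1.5):
    return 2 ** (years / doubling_period_years)

hardware_horizon = 10   # the horizon you bought/faked hardware from
target_horizon = 5      # the horizon your experiments aim at

headroom = speedup(hardware_horizon) / speedup(target_horizon)
print(f"~{headroom:.0f}x headroom for unoptimized experiments")
# Optimize on the same machine and you are effectively running a production
# application from the full 10-year horizon.
```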
Along the way…
Solid engineering
I remember Butler, in his first few weeks at PARC, arguing as only he could that he was tired of bubble-gummed !@#$%^&* fragile research systems that could barely be demoed by their creators. He called for two general principles: that we should not make anything that was not engineered for 100 users, and we should all have to use our creations as our main computing systems (later called Living Lab). Naturally we fought him for a short while, thinking that the extra engineering would really slow things down, but we finally gave in to his brilliance and will. The scare of 100 users and having to use our own stuff got everyone to put a lot more thought early on before starting to cobble together a demo. The result was almost miraculous. Many of the most important projects got to a stable, usable, and user-testable place a year or more earlier than our optimistic estimates.
While building your software to be robust might slow you down in the short run, in the long run it lets you keep building on top of what you’ve made.
Build your own tools
The Xerox PARC group bootstrapped what we would consider a modern computing workflow: a GUI, debuggers, IDE features, WYSIWYG editors, and printers. Not only were they programming after pitchers of beer, they were writing memos with fancy fonts, printing them at one page per second on laser printers, and distributing them to the rest of the research group.
Imagine if they hadn’t been able to print memos. Would the research have been done as well? Or imagine they hadn’t had good development tools. Could they have written a word processor? The tools they built helped build the other tools. Synergies accrued.
There’s a balance that needs to be struck:
So an interesting thing about computing is there are first order theories and second order theories. Like a first order theory is you should never do your own operating system or your own programming language, right? Because that could be a black hole you'll never get out of. Years later you might have forgotten what you originally did that operating system for.
But the second order theory is if you are able, if you're skilled enough to do your own programming language or your own operating system, you should. Because now you're not working around vendors. Now you're not dealing with anybody's ideas except your own, and so forth.
The big problem with computing is we can't prove things the way we do in math. Almost nothing that can be proven in computing is interesting. Right? So we're forced to the much harder thing of having to implement and then debug what we implement and then we can assess how good an idea is.
Or put another way:
If you're going to be successful at programming, you have to be good at coping. Almost never do you have a chance to do anything from scratch. [It’s] somebody else's machine, somebody else's software you have to go on, somebody else's compiler, somebody else's blah blah. And, you know what? The better a coper you are, the less good a programming designer you are. Because the whole art of designing a programming language is in partly being physically ill in using some programming language that you decide is ugly.
The people I look for, and the rarities at Xerox PARC: PARC was full of people who were awfully good at making things, but also were awfully good at being critical. PARC was a frightening place to visitors, because we argued all the time.
And the arguments were about why this is shit, and why that's shit, and why our own stuff is shit, everything is shit. And how can we get out of this shit?
Have taste
Alan Kay’s team worked on music, painting, animation, and literature during their time there. He wanted to do important work and build on the treasures of civilization. He understood how important it was to surround yourself with inspiration:
It's about as ridiculous as this building. You think about it as a user interface, right? Come in the door, and there's a stairway, and there's no map. And where am I, where can I go? And Jesus, this looks like a dungeon.
And remember, computer-human interface is part of what you're supposed to learn about, and you can't do it in a dump like this. You have to have some sense of design around. And I'm not sure you can do software without having some sense of design around.
And:
I was asked in 1971 to choose the initial books for the Xerox PARC library, and my response was to take the PARC librarian over to the “Whole Earth Truck Store” in Menlo Park and purchase every one of the hundreds of books listed in the Whole Earth Catalog. I did this because the catalog proclaimed itself as “Access To Tools”, and its selection included many of the best books written about a wide variety of systems and ecological thinking on large scales, use of tools, ways to think about the human condition, the place of technologies—high and low—in human life, governance, ways to think about business, and much more. It was the cream of both the culture and the counterculture: a center for helping human beings think deeply about their situation. A great start to the library of a research center planning to change the world!
Conclusion
This is Alan Kay’s telling of a repeatable method. You buy your way into the future so you can run experiments faster. Once the few years you’ve bought actually arrive, you’ve already invented what you need to make use of them. And, what’s more, it’s not that expensive. Companies make billions of dollars. Spending a few million dollars on a small team of brilliant engineers to invent new things shouldn’t affect earnings, but the upside may be unimaginable. I believe this is the ultimate reason Kay has been so vocal about this method: he wants companies to recognize the opportunity.
I must say a word about my skepticism of this method. It’s not that I don’t believe it; it’s that I think it leaves out quite a lot. It is oversimplified. Kay would be the first to acknowledge what a lucky time the 1960s were to be in computer research. There was so much government money directed by competent engineers. That allowed multiple communities to flourish, all working toward computers as media for collaboration. It was something of a golden age. Kay laments the state of scientific funding in computing these days. I just wonder if a small team could do much without the larger ecosystem. And, likewise, with the rise of six-figure programming jobs, universities have become software engineering trade schools. They don’t produce scientists reaching for a romantic future.
This method, as I’ve summarized it here, also leaves out the vision, which Kay says is essential. The vision of the ARPA project was to help groups of people understand vastly more complex problems and work together to solve them. The computer was seen as a new medium with enormous potential for that. When inventing the future, the importance of a bold, romantic vision cannot be overstated. You need to know what you’re working toward; otherwise your work devolves into incremental improvements on the status quo.
I’ll leave you with this quote:
A great vision is not a set of goals. A vision has to be nonspecific enough. It has to be romantic, but it has to be something that can be filled in by the people who hear the vision. And Licklider’s, as I mentioned, was “Computers are destined to become interactive intellectual amplifiers for everyone universally networked worldwide.”
He tied that to a magnet, and he hid it over the horizon, and it attracted hundreds of iron particles from different places. They all pointed to North. They didn't know what North was, and neither did Licklider, but they all got to North.
So the number one principle here is the goodness of the results correlates most strongly with the goodness of the funders.
Two things really connect with me in this:
> Because the whole art of designing a programming language is in partly being physically ill in using some programming language that you decide is ugly.
and
> universities have become software engineering trade schools. They don’t produce scientists reaching for a romantic future.
The former because I've been working with programming language design and compilers since my university days -- and I've always had really strong opinions about programming languages (and have usually been very vocal about them!)... which is why I've ended up using Clojure and why I'm finally happy with a programming language in daily use!
The latter because I've often lamented the changes in how (and even whether) Comp Sci is taught, given its ability to teach people both the important underpinnings and the pure problem-solving aspect of it as a way of thinking.
Really good article -- and fascinating to read the Kay quotes about how things got designed and built!
This is a good analysis that is applicable to other industries besides IT. Think far enough out, but not too far and not too close.