Upcoming talk next week: I will speak about Domain Modeling. The talk is virtual and open to the public. Be there!
People continue to like Grokking Simplicity, and it continues to spread. If you want to help someone learn functional programming, please consider buying a copy or leaving a review.
Why is modeling so powerful?
One of the amazing things about working on something as big and complicated as a book is that it shoots off lots of little ideas. Way more ideas than could ever fit into a book. So if you ever wonder how I’m able to maintain a weekly newsletter and write a book at the same time, it’s because of this amazing benefit. And the converse is true, too. If I’m not writing my book regularly, it’s harder to write my newsletter. I simply have fewer ideas. The lesson, I guess, is to challenge yourself because the spillover will benefit other areas of your life.
One of the ideas that might not make it into the book is an explanation for why modeling is so powerful. I don’t know if I can take credit for this idea. I’m well past the point where I can remember the exact origins of all of the ideas about modeling swirling through my head. I’ve read too many books on this topic and talked to too many interesting people. Suffice it to say that I’m standing on the shoulders of giants.
So what does make modeling so powerful? Models allow us to use lots of the parts of our brains to work on a problem while working around our limitations. It breaks down to four things:
Reducing cognitive load
Using cognitive leverage
Externalization
Correspondence
Reducing cognitive load
We have limited mental capacity for working on problems. Models help us reduce the cognitive load of the problem so we can fit it within our limited capacity. The first way we do this is by abstraction—that is, eliminating unnecessary details. We map the problem into a space with less state. When understanding the motion of the planets, we can look only at their positions over time, eliminating their color and mythological significance, for instance. This makes the problem easier to work with simply because it is smaller. Scientific models tend toward minimalism, keeping only what is essential to the explanation.
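Here’s a minimal sketch of that abstraction step in TypeScript (the types and names are mine, purely for illustration): we keep only the state the orbital question needs and drop everything else.

```ts
type Vec3 = { x: number; y: number; z: number };

// Everything we keep about a planet for this problem:
// where it is and how fast it is moving. Nothing else.
type PlanetState = { position: Vec3; velocity: Vec3 };

// The abstraction step: map a rich real-world description
// down into the much smaller state space the model works in.
// (This input shape is hypothetical, just for the example.)
function abstractPlanet(realWorld: {
  name: string;
  color: string;
  mythology: string;
  position: Vec3;
  velocity: Vec3;
}): PlanetState {
  const { position, velocity } = realWorld; // keep only what's essential
  return { position, velocity };
}
```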
We also make the problems easier to work with by identifying regularities. These regularities mean that the system has fewer rules to keep in mind. For instance, if we can eliminate corner cases and work only with total functions, there are fewer exceptions to remember. Likewise, we often can eliminate whole categories of things to remember by combining unlike things into the same idea. Newton did this with forces: Classical mechanics includes gravity as a force just like a push or a pull. That’s one less mental burden.
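To make the total-function point concrete, here’s a small TypeScript sketch (my own example, not from the book): widening the return type moves the corner case into the signature, so there’s one less exception to remember.

```ts
// Partial: the empty-array case is an exception every caller
// must keep in mind.
function average(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length; // NaN when xs is empty!
}

// Total: every input, including [], has a defined, honest result.
// The corner case now lives in the type, not in your head.
function averageTotal(xs: number[]): number | null {
  if (xs.length === 0) return null;
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

console.log(averageTotal([1, 2, 3])); // 2
console.log(averageTotal([]));        // null, not a surprise NaN
```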
Cognitive leverage
When we look at behavior as complex as the motion of the planets, how might we try to understand it? Well, we have many ways of understanding built into our brains. We try to leverage the parts of our brain that already understand similar systems, mostly by analogy.
For instance, we could try to anthropomorphize the planets and be able to use the part of our brain that easily understands people and their relationships. (That model might not be very successful. More on that later.)
We also break problems down analytically into hierarchies. Hierarchies allow us to view the situation at different levels of detail. By choosing the correct level of detail, we can ignore the others, making the problem smaller while also tapping into our innate ability to understand hierarchies.
We can take the problem domain and turn it into a formal system. A formal system is one in which the meaning has been stripped away, leaving only symbols that stand for an unknown meaning. For instance, formal logic might translate the idea “All humans are mortal” into the symbolic expression “∀ h ∈ Humans, mortal(h)”. This is a kind of abstraction that taps into our linguistic brain, the one we use for understanding complex grammar rules. The formality reveals regularities. For instance, “All dogs bark” can be translated into an expression with the same form as the human mortality expression: “∀ d ∈ Dogs, bark(d)”. Same structure, different symbols.
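As a sketch of that “same structure, different symbols” idea, here’s the shared form in TypeScript over finite domains (the data is invented for illustration):

```ts
// The shape of "∀ x ∈ domain, predicate(x)" — over a finite
// domain, this is just "every element satisfies the predicate".
function forAll<T>(domain: T[], predicate: (x: T) => boolean): boolean {
  return domain.every(predicate);
}

// "All humans are mortal" and "All dogs bark" instantiate the
// same form with different symbols. (Hypothetical data.)
const humans = [{ name: "Socrates", mortal: true }];
const dogs = [{ name: "Rex", barks: true }];

console.log(forAll(humans, (h) => h.mortal)); // true
console.log(forAll(dogs, (d) => d.barks));    // true
```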
A huge breakthrough in science came when we started describing our models using mathematics. That allowed us to tap into all of the work done in the history of math—and it still somehow worked! More on that in the correspondence section.
We can also tap into our sense of aesthetics to think about a model. We judge a situation based on proportion, symmetry, and general elegance.
There are more parts we tap into regularly. Which ones do you use?
Externalization
So, we have this limited working memory for our brain. And we’ve got all of these processing resources available to apply to the problem. But how can we make use of those resources if the problem won’t fit? The answer is externalization.
Externalization taps into our environment to offload the memory into the world. So we write down the formal symbols, or draw a diagram, or build a physical model. These help us in two ways: We get to work on bigger problems. And we get to use those cognitive resources.
When we draw a diagram, we are using the paper as an extension of our working memory. We can draw a much bigger diagram than we could keep in our heads. But it also allows us to use our visual processing system (quite developed in humans) to understand it in a new way. Likewise, we can work through harder formal systems when we write down the symbols on paper. It gives us help both in size and in time. The problems are larger and we can work through the problems more carefully, even taking breaks, because the paper remembers where we were.
One of the keys to properly making use of externalization is to have rich and rapid feedback. Rich means it’s filled with the information you want, without too much noise. And rapid means that as you make changes, you get information about whether you’re making the right changes. This information exchange is vital because you’re establishing an external memory that connects to your brain only through the low-bandwidth channel of your senses.
In recent years, we’ve also been able to externalize some of the mechanical processing of models to computers. When a model is externalized in such a way that a computer can elaborate it, I’m calling it runnable. This obviously helps us work on bigger and more complex models than we could with our brains alone, or even with our brains enhanced with paper and pencil.
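Here’s a toy runnable model in TypeScript (the scenario and numbers are invented): once the model is code, the computer does the mechanical elaboration, stepping the state forward thousands of times without fatigue.

```ts
type State = { position: number; velocity: number };

const GRAVITY = -9.81; // m/s^2, acting downward

// One small time step of a falling object, using Euler integration.
function step(s: State, dt: number): State {
  return {
    position: s.position + s.velocity * dt,
    velocity: s.velocity + GRAVITY * dt,
  };
}

// Let the computer elaborate the model: thousands of tiny steps.
let state: State = { position: 100, velocity: 0 }; // dropped from 100 m
let t = 0;
while (state.position > 0) {
  state = step(state, 0.001);
  t += 0.001;
}
console.log(`hits the ground after ~${t.toFixed(2)} s at ~${state.velocity.toFixed(1)} m/s`);
```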
What are your favorite ways to externalize your model?
Correspondence
All of these things help us understand the world—reducing cognitive load to make problems fit in our brains, applying the cognitive processing resources we have, and expanding our memories with externalization. But these are useless if the model we’re building doesn’t correspond at all to the world. I’ve spent years asking myself how any model can help us understand the world. It’s the same question I attempted to answer in The Wonders of Abstraction.
The key is that the model includes an abstraction (a mapping) into a space that allows for corresponding operations. (Note that this is a different meaning of abstraction from the one above.) What we look for when we abstract is a homomorphism with a right inverse (a kind of lossy isomorphism). That’s just a fancy way to say that after we abstract the real world, we can work on the problem in thought-space, then bring the answer back down to the real world, and we get the same answer as if we had solved it directly in the real world.
For instance, we can know where a cannonball will land (to some error tolerance) by converting the situation to a physics problem. We calculate the position of the landing (find x when y=0). Then we actually shoot the cannon and take a real measurement of where it lands. We can see how far off we are. We’re calculating the accuracy of our model. The higher the accuracy, the higher the correspondence between our model and the real world situation. Science is founded on this kind of measurement of our model.
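Here’s that round trip as a TypeScript sketch (all the numbers are made up for illustration): abstract the shot into launch speed and angle, solve for the landing in model-space, then compare against a hypothetical measurement.

```ts
const g = 9.81; // m/s^2

// Abstraction: the real cannon becomes just a launch speed and angle.
// Solving y(t) = 0 for the ideal, drag-free trajectory gives the
// standard range formula: R = v^2 * sin(2θ) / g.
function predictedRange(speed: number, angleDeg: number): number {
  const theta = (angleDeg * Math.PI) / 180;
  return (speed * speed * Math.sin(2 * theta)) / g;
}

const predicted = predictedRange(50, 45); // model-space answer, ~254.8 m
const measured = 241.0; // a hypothetical real-world measurement

// The gap is a measurement of correspondence: the smaller the
// error, the better the model maps back onto the world.
console.log(`predicted ${predicted.toFixed(1)} m, off by ${(predicted - measured).toFixed(1)} m`);
```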
We want our models to correspond. The ones that correspond more are the ones that give us power. No matter how well the other three factors (cognitive load, leverage, and externalization) are applied, if the model doesn’t correspond, it doesn’t matter.
Cognitive load, leverage, and externalization are design aspects that we can play with to make the model more ergonomic. That’s very useful, but it’s just ergonomics. A broken ergonomic keyboard is useless, no matter how well it fits your hands. Likewise, without correspondence, the model remains useless.
Now, longtime followers of my newsletter might guess where I’m going to take this now. I talk about it so much, it must be on a bingo card somewhere: In software design, we’ve focused so much on the ergonomics of the code. We want good names for functions, small functions, testability, etc. These are all factors that help humans work with the code—and very important because we do have to work with code. But we’ve neglected to talk about the design choices we can make to improve the correspondence of our software to the domain it operates in. I’m trying to change that.
Rock on!
Eric
Simula (https://en.wikipedia.org/wiki/Simula) is credited with inventing OOP (even if Kay invented the term to describe it). The motivation for developing it was in fact "to create a language that made it possible for people to comprehend, describe, and communicate about systems and to analyse existing and proposed systems through computer based models", as I heard a couple of months back. Modelling used to be considered a rather important part of the OO approach, but that seems to have been forgotten. My old PL professor is actually working on a book about this aspect (see https://oopm.org).
https://ceur-ws.org/Vol-3661/10-SHORT-PWinstanley-ISKOUK2023.pdf covers similar explanations for the use of upper ontologies