There’s no good evidence that static typing improves software quality over dynamic typing. Code coverage is a ridiculous metric for evaluating a program’s tests. Design Patterns often make software worse. And we can’t measure software quality.
Yet I believe that a good static type system can give you a great benefit. Testing can improve your software quality. Design Patterns are useful to study. And we should strive to improve software quality.
Skill is the hidden variable that makes the benefits of static typing hard to prove and makes testing metrics useless. All of these practices, and most of programming, require skill. You can’t just “use types”. You have to use them well. Otherwise, you can make things worse. The same goes for testing and software design advice.
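To show what I mean by using types well, here’s a minimal sketch in Haskell (my own hypothetical example; the PaymentStatus type and its values are invented for illustration). The first version “uses types” in name only; the second makes the invalid state impossible to express:

```haskell
-- Merely "using types": a String can hold anything,
-- including a typo the compiler happily accepts.
statusFromDb :: String
statusFromDb = "refnded"  -- oops, but it still compiles

-- Using types well: a sum type makes the invalid state
-- unrepresentable, so the same typo becomes a compile error.
data PaymentStatus = Pending | Paid | Refunded

describe :: PaymentStatus -> String
describe Pending  = "awaiting payment"
describe Paid     = "payment received"
describe Refunded = "payment returned"

main :: IO ()
main = putStrLn (describe Refunded)
```

Both versions type-check in the trivial sense. The skill is in the second one: choosing a representation that recruits the compiler as a reviewer.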
Software design, architecture, testing, and the use of types are all skill-driven tools. Without enough skill, those tools won’t help. They can convey powerful advantages, but we must consider the humans operating them.
I wonder whether, if we could somehow control for skill, we would find a strong effect from static types. For example, imagine we had typed and untyped versions of the same language (to compare apples to apples) and programmers who were good at each (that is, who could use its advanced techniques). Would we see a difference? If we did, perhaps that’s evidence that static typing truly is superior. But if we didn’t, what would we conclude? I would conclude that being skilled with your tools outweighs the static vs dynamic difference.
That’s a very hard study to perform. But as a thought experiment, it opens up some possibilities. Can we link skill to business outcomes? In Accelerate by Nicole Forsgren, Jez Humble, and Gene Kim:
In order to measure organizational performance, survey respondents were asked to rate their organization’s relative performance across several dimensions: profitability, market share, and productivity. This is a scale that has been validated multiple times in prior research (Widener 2007).
I’m not an expert on organizational performance, but I trust the book. Please consider your own level of trust in it. But if organizational performance can be measured, then just as the DORA metrics were found to be linked to it, we might find that skill is linked to it, too.
However, you can’t measure skill so easily. Yes, you can test someone on basic skills, like how to operate JUnit. But that’s like asking someone to identify the controls on a construction crane to test whether they’re good at construction. You could test naming and describing the SOLID principles, even applying them in an example codebase. But whether those are even good principles, linked to software quality, is unclear. Perhaps we must be content with subjective judgment, as we often see in software engineering interviews.
On the other hand, we’re not looking to measure skill directly. We want to find a variable a business can control that positively affects its outcomes. Perhaps the variable we’re looking for is as simple as the number of hours dedicated to learning.
Could we link weekly time spent learning to organizational performance? Accelerate does mention training as important for preventing burnout, which matters. But I can’t find any mention of training being statistically linked to performance. Anecdotally, though, the teams I judged to be more skilled wrote better software. And whenever a team didn’t like a certain practice (like testing), it was because they weren’t good at it.
Would we find that it doesn’t even matter what skills people learn (as long as they’re related to their work)? For example, might we learn that studying the SOLID principles improves your software quality in the long run, even if there’s no evidence that the principles themselves work? Even reading fiction is linked with the ability to solve problems. I can imagine that familiarity with industry practices, even ones you choose not to use, improves your skill overall, a bit like how reading Plato, even if you’re not an Idealist, gets you to think through some important ideas.
Discovering that it didn’t matter what you learned would help managers. They wouldn’t have to know what to teach, just that something was being taught. We could stop debating which philosophy is correct and learn all of them. And we wouldn’t have to fight over which conferences counted toward the training budget. Still, I suspect the quality of the learning material would matter.
Likewise, I know many programmers who are skeptical of learning skills they don’t understand. For instance, one programmer challenged me to find a practical use for monoids, thinking I was into them just because they’re trendy. Monoids are super useful, probably more than monads. But if it doesn’t matter what you learn, throw them both on the backlog. You never know when that random bit of math you learned will solve your problem.
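Since I brought monoids up, here’s a quick sketch of one earning its keep (a hypothetical example of mine; Stats and the latency numbers are invented). Request statistics form a monoid: an associative combine operation plus an identity element, which means you can aggregate them per server, per region, or all at once and get the same answer:

```haskell
import Data.Monoid (Sum (..))
import Data.Semigroup (Max (..))

-- Per-server request stats: counts and totals add,
-- and the slowest request is tracked with Max.
data Stats = Stats
  { reqCount :: Sum Int
  , totalMs  :: Sum Double
  , maxMs    :: Max Double
  }

instance Semigroup Stats where
  Stats c1 t1 m1 <> Stats c2 t2 m2 =
    Stats (c1 <> c2) (t1 <> t2) (m1 <> m2)

instance Monoid Stats where
  -- Identity element, assuming latencies are non-negative.
  mempty = Stats (Sum 0) (Sum 0) (Max 0)

oneRequest :: Double -> Stats
oneRequest ms = Stats (Sum 1) (Sum ms) (Max ms)

main :: IO ()
main = do
  -- Associativity lets these chunks be folded in any grouping,
  -- which is what makes the aggregation parallelize so well.
  let serverA = foldMap oneRequest [12.0, 48.5, 7.2]
      serverB = foldMap oneRequest [33.1, 5.9]
      Stats (Sum c) (Sum t) (Max m) = serverA <> serverB
  putStrLn ("requests: " ++ show c)
  putStrLn ("mean ms:  " ++ show (t / fromIntegral c))
  putStrLn ("max ms:   " ++ show m)
```

That associativity is the whole trick: it’s what lets map-reduce-style systems split the fold across machines without changing the result.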
It would be nice to have a lever to pull that improves our software in a way the business can feel. Is weekly time spent learning linked to organizational performance? I think it’s worth investigating. And if it turns out the learning content doesn’t matter, perhaps we could put some grueling fights to rest. Instead of debating static vs dynamic types, we should learn both.