8 Comments

I think it's far too quick to claim something like "type systems can't find interesting errors." The example Rich Hickey gives about static types is certainly common enough: people fall into the trap of thinking in terms of type signatures like [a] -> [a] because those are the easiest things to express in the type systems people work with on a day-to-day basis. But a sufficiently advanced type system can express a lot more than that.
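For instance, here is a standard length-indexed vector (a textbook example, not one from Hickey's talk) where GHC's GADTs and DataKinds extensions let a signature say strictly more than [a] -> [a]: the map below is guaranteed by its type to preserve the length of its input.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers.
data Nat = Z | S Nat

-- A vector whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The type guarantees the output has exactly the
-- same length as the input -- no test needed.
vmap :: (a -> b) -> Vec n a -> Vec n b
vmap _ VNil         = VNil
vmap f (VCons x xs) = VCons (f x) (vmap f xs)

-- Forget the length to get back an ordinary list.
toList :: Vec n a -> [a]
toList VNil         = []
toList (VCons x xs) = x : toList xs
```

A version of vmap that dropped or duplicated an element simply wouldn't typecheck.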

Conal Elliott's talk "Can Tensor Programming Be Liberated from the Fortran Data Paradigm" (https://youtu.be/oaIMMclGuog?si=1sz-bAFNGj6rEbNY) shows a correct-by-construction, purely functional approach to two classic "array" programming problems: parallel scan and FFT. The fundamental challenge, of course, is picking the right representation for the problem. The type system won't tell you which representation to use, but once you start to denote the problem, it can, used correctly, quickly surface inconsistencies in your representation and tell you how general your solution is. That is how the resulting implementation of FFT is shown to be "portable" across different data structures.
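To give a hedged sketch of the "same signature, different representation" idea: the generic specification of a left scan below works uniformly over any Traversable container (lists, trees, etc.). This is only the specification; Elliott's talk goes much further, deriving efficient parallel scans from the choice of container type.

```haskell
import Data.Traversable (mapAccumL)
import Data.Tuple (swap)

-- A shape-generic exclusive left scan: returns the container of
-- running prefixes plus the total. The same signature covers
-- lists, trees, and any other Traversable; the representation
-- is a separate choice from the specification.
scanA :: (Traversable f, Monoid a) => f a -> (f a, a)
scanA = swap . mapAccumL (\acc x -> (acc <> x, acc)) mempty
```

Swapping the container from a list to a balanced tree changes the work/depth profile without touching this type.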


Conal Elliott is definitely doing great work, and, yes, he's leveraging the type system. He's always an inspiration. It's things like that that keep me wondering!

At the same time, when I watch his talks, I note two things:

1. He's using a bunch of mathematical understanding outside of what the language provides. Sometimes he manages to encode it in the type system, but it definitely wasn't obvious without the math background. Basically, the impressive part to me is not the types. It's the math skills.

2. Having watched a bunch of his talks, I see him doing a lot of the same reasoning I do without a type system. I guess the types are in my head, in the Church-vs-Curry sense (https://ericnormand.me/article/church-vs-curry-types). His slides are full of types, but he never invokes the typechecker. Even without it, we can understand the types and check them ourselves.

I guess my point is: how much of the value of what he's doing is that Haskell is statically typed?


1. Partisans of static types sometimes argue that careful use of a type system is a subset of formally proving your algorithm. If this is true, there may not be as big a distinction between "using a type system" and "math skills" as you imply. It seems reasonable to think that practice with static types may impart and help develop the very math skills required to prove that algorithms like these work; proving them is exactly what a typechecker does when used well.
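A minimal Curry-Howard illustration of that "careful typing is a subset of proving" claim (standard Haskell, nothing specific to Elliott's work): each well-typed definition below is a machine-checked proof of the proposition its type encodes.

```haskell
import Data.Void (Void)

-- Curry-Howard: propositions are types, proofs are programs.
-- "Not A" is encoded as "A implies falsehood".
type Not a = a -> Void

-- A proof that A implies not-not-A; the typechecker verifies it.
doubleNegIntro :: a -> Not (Not a)
doubleNegIntro x k = k x

-- A proof of modus ponens: from A and (A -> B), conclude B.
modusPonens :: a -> (a -> b) -> b
modusPonens x f = f x
```

The proofs are tiny here, but the mechanism is the same one proof assistants scale up.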

2. I'm not sure it matters whether the typechecker runs during the talk. Just because it's not part of the public presentation of results doesn't mean it's not part of the process of reasoning about and verifying that the approach works.


Yeah, totally agree. I remember a moment when I started to get the Haskell type checker on my side. Until then, it felt like an antagonistic relationship. After that, it felt like a helpful guide.

That said, once I learned to get it on my side, it became at once less useful and more useful. It was more useful because I could express more interesting things in it. But it was less useful because I could do the reasoning without it. For example, I can do a lot of the type discipline, higher-order reasoning, and modeling in Clojure where I don't have a type system.

Let me try to express my doubt (and it's just that--not an argument either way). Elliott is obviously at a higher skill level with Haskell than I am. And I'm sure he uses the type checker as part of his workflow because it's free. My doubt is that most of the interesting stuff is happening in his head--or even in the non-type system parts of the language.

One thing people like about Haskell is algebraic data types. Couldn't those exist in a dynamically typed language? In his talks, Elliott often reasons with function type signatures before implementing anything. Without invoking the type checker, they're just text on a slide -- yet we can still reason about them as a type checker would.
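For concreteness, a minimal ADT (my example, not Elliott's). The data itself is just tagged values, which a dynamic language can encode too -- e.g. tagged vectors or maps in Clojure; what the static checker adds on top is exhaustiveness and consistency checking over the closed set of cases.

```haskell
-- An algebraic data type: a closed sum of products.
data Shape = Circle Double | Rect Double Double

-- With -Wincomplete-patterns, GHC warns if a case is missing.
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```

Add a third constructor and the compiler points at every function that needs updating; that's the part a dynamic encoding doesn't give you for free.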

But I could be wrong. It could be that he's able to write more complex models thanks to the type checker. Or he might be able to explore the edges of his abilities at type reasoning. Maybe he's continually refining his own reasoning thanks to the guidance of the type checker. People do that with other theorem proving systems, though often not in the same language as their production code.

Anyway, thanks for the provocative discussion. I always appreciate it.


I think type systems can be very valuable in some domains: especially ones where the data is very constrained, in terms of both domain and shape -- pure algorithms on fixed-domain data are a good example.

But most systems out there are _information_ systems: they process somewhat arbitrary, often dirty data from a variety of input sources and synthesize one or more "clean" outputs (and sometimes also "dirty" outputs for pass-through data). Applying type systems to those is much harder and often constrains the solution in brittle ways. And it is precisely those adaptable, flexible systems that need many small, localized changes that _should not_ affect the rest of the system -- yet type systems often force such changes to ripple outward.
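A tiny sketch of that open-data style, expressed (perhaps ironically) in Haskell; the clean function and field names are hypothetical. Records are open maps, so known fields get normalized while unknown upstream fields pass through untouched -- whereas a closed record type would need changing, and its users recompiling, for each new field.

```haskell
import qualified Data.Map as M

-- An "open" record: arbitrary string fields from upstream.
type Record = M.Map String String

-- Normalize the fields we know about; everything else
-- passes through unchanged.
clean :: Record -> Record
clean = M.adjust trim "email"
  where trim = unwords . words  -- collapse stray whitespace
```

The trade-off cuts both ways, of course: the same openness that makes pass-through free also means nothing checks the field names.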


We can fix programs, but we cannot (completely) fix people. There will always be violations of conventions and practices that are not strictly enforced. If an interface implementation is not checked by a compiler or a static type checker, then someday somebody will mess up the parameter types or their order -- by mistake or out of lack of knowledge. Either way, we pay a maintenance price: either in time spent writing types and fixing compile errors, or in communication and fixing bugs.
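A small sketch of how a checker catches exactly that parameter-order mistake (hypothetical Width/Height wrappers): newtypes cost nothing at runtime but make argument order a compile-time property.

```haskell
-- Zero-cost wrappers that distinguish two Doubles by role.
newtype Width  = Width Double
newtype Height = Height Double

rectArea :: Width -> Height -> Double
rectArea (Width w) (Height h) = w * h

-- rectArea (Height 3) (Width 4)  -- rejected by the compiler
```

Without the wrappers, swapping the two Doubles compiles fine and becomes the bug someone debugs later.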

This comes down to how easy it is to train the people on a team to a shared understanding of the project's conventions and design practices. That, too, is complexity -- it is just moved out of the tooling and into the people.


I'm reminded of Rich Hickey's talk "Simple Made Easy" and his objective definition of complexity. Fixed-length arrays, clever tricks with font sizes and memory allocation, and dressing programs in straitjackets of types for the convenience of static analyzers may be easy in the short term, but they entangle the program's purpose with the incidental complexity of implementation details. Experience shows the incidental complexity becomes stifling as the project grows or matures. To the novice, this stifling feels like good honest hard work. But sometimes the short term is all that matters.

I'm also reminded of my own transitions from machine (assembly) language through a succession of C, Turbo Pascal, C++, Java/C#, Clojure. In each case, the transition seemed to suddenly solve all my problems. Then I began to notice new problems. These "new problems" were not faults of the new language (at least not in comparison with the old) but rather were faults the new language enabled me to encounter by scooting me past all the faults of the old language so smoothly that I tackled more ambitious projects until I approached my limits with the new language.


Have you tried Swift? What’s your take on that language?
