This is intended for software developers who already know a good amount about these topics and who are interested in questioning and rethinking what they think they already know, for the purpose of expanding their minds and improving their craft. It assumes you already know the difference between statically known types and types known only at runtime, what explicit vs. implicit types are, and what test-driven development is. It assumes you are familiar with the idea that there are many opinions and debates about the usefulness of these things, that some people like to divide languages into “scripting languages” and non-scripting languages, and that people have different preferences between them.
The debate over whether it is helpful enough to explicitly declare types (explicit static types), or even to have them be known implicitly at compile time (implicit static types) at all, versus types that are completely unknown and unspecified until runtime, is founded on not deeply questioning what a type is. This becomes especially clear in the context of function and method parameters. Once you begin to reason about what a type is and why the type matters, there is no debate. Stating or immediately indicating a variable’s type has so many advantages, and not having this information makes the code so vague and unspecified, that strong types should be preferred by default. I won’t say they are absolutely superior in every way, for every scenario, but they should be the default option we prefer.
Let’s say a function or method accepts a single parameter. What can that function do with the parameter, and what can be analyzed about it? Well, it depends. But what does it depend on? It depends on the type. Perhaps a method is being called on the parameter. This implies an assumption about the type: you cannot call that method if the type does not have it. You normally can’t, or at least shouldn’t, use mathematical operators on a type that is not numeric in some way. Once any question marks are raised in the mind of the programmer reading or using that code, none of them can be answered until you know something about the type of the parameter. If you want to call that function, you don’t know what is valid to pass until you know what type it expects. If you pass the wrong type, your code is fundamentally wrong.
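Here is a minimal sketch of that point in TypeScript (the names are hypothetical, and `any` stands in for a type that is unknown until runtime):

```typescript
// Without a stated type, nothing is known about `item` at the definition.
// Is it a string? An object with a numeric `total`? The reader has to go
// look at the implementation, the tests, or every call site to find out.
function logReceipt(item: any): void {
  console.log(item.total.toFixed(2)); // silently assumes `total` exists and is numeric
}

// With a stated type, the assumption is declared right where the parameter
// is defined, and the compiler rejects call sites that pass the wrong thing.
interface Receipt {
  total: number;
}

function logTypedReceipt(item: Receipt): void {
  console.log(item.total.toFixed(2));
}

logTypedReceipt({ total: 19.99 }); // fine
// logTypedReceipt("19.99");       // compile-time error: a string is not a Receipt
```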
The fundamental problem with not wanting a compiler (or a similar type-aware linter) to know the correct type and report type problems is a lack of appreciation for the benefits of tight limitations and boundaries. Limitations and boundaries create freedom. Without gravity, we would not be able to walk on the earth. Sure, it “confines” us to the ground, making it difficult to leave, but this confinement creates freedom: the ability to go places. Until a tool knows what type something is, nothing is really known, and nothing can really happen. A compiler cannot properly compile anything without understanding something about the types. The runtime environment won’t know what to run without understanding something about the types.
Many argue that this is all unnecessary if you have automated tests. They say tests verify the behavior, which is what you actually care about, and that the types are tested implicitly along the way, or that as long as the behavior is verified to be correct, the types don’t really matter anyway. While that is all technically true, it is not a cost-benefit analysis. The type is the very core of what anything is in the code. Seeing the type, and automatically verifying it was used in a valid way, gives you something to work with, the same way that gravity gives us something to work with. Seeing the type of a parameter makes the code clearer. It places some limitations on what that thing is right at the point it is defined. It creates consistency in where to find that information. It prevents you from having to immediately look at either the implementation or the tests to begin to know anything about that parameter.
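To make that concrete, here is a hedged sketch (the functions and the test are made up for illustration) of how a behavior test leaves the parameter unspecified at the definition, while a type states the limitation right there:

```typescript
// Behavior test only: the function itself accepts anything, and the
// expectation about what `price` and `rate` are lives in a test file somewhere else.
function applyDiscount(price: any, rate: any) {
  return price - price * rate;
}
// hypothetical test elsewhere: expect(applyDiscount(100, 0.2)).toBe(80);

// Typed version: the limitation is stated where the parameters are declared,
// so a reader (and the compiler) knows what is valid to pass without opening
// the implementation or the test file.
function applyTypedDiscount(price: number, rate: number): number {
  return price - price * rate;
}
```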
How can a mind create (program) intelligently if it doesn’t want to care about the most important thing, or to have any limitations that create a frame of reference from which to interpret things?
We live in a world that worships freedom at the expense of freedom. We only focus on the negatives of limitations without seeing the positives. We don’t want there to be any restrictions on what we can do, and thereby have left ourselves almost unable to do anything of importance, or avoid anything of consequence. The endless possibilities leave us with decision fatigue and low willpower. The benefits of a few fundamental, default limitations are enormous.
Types are the building blocks, the core of the code universe. Of course a program is not correct just because its types are used correctly. But a program is definitely incorrect if the types are used in completely invalid ways.
Types are the limitation that gives you the most bang for your buck, the most benefit for the least cost. I’m not saying other limitations are not worth your time and effort, but types give you a lot for a little, or at least they make a lot possible. For example, when a statically typed variable is used in a way that is completely invalid for that type, the compiler can tell you so. The compiler writer has the option of making this error message very user-friendly and informative. Not all compilers will do this, but the type at least makes it possible. If you instead have to run all your tests, wait for that line of code to be exercised, and then try to interpret a runtime error that may not even point at the misuse of the type, the feedback is neither as rapid nor as helpful as what you would have gotten from a user-friendly compiler error.
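A small, hypothetical illustration of that feedback gap (the error texts shown are typical of TypeScript and Node, not guaranteed word for word):

```typescript
// A function whose parameter type makes the valid inputs explicit.
function totalLength(words: string[]): number {
  return words.reduce((sum, w) => sum + w.length, 0);
}

// With static types, the mistake is rejected before anything runs, with a
// message that points at the exact argument:
//   Argument of type 'number' is not assignable to parameter of type 'string[]'.
// totalLength(42);

// Without static types, the same mistake only surfaces when this line is
// finally exercised, as something like:
//   TypeError: words.reduce is not a function
// which names the symptom, not the misuse.
```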
If many people are convinced that the benefits of writing tests outweigh the costs, how much more so is this true of writing types? A test is a limitation and a description of what is expected. If the test fails, the limitation is hit, and you are not supposed to release until you get it passing. If a type is misused, a limitation is hit, and you can’t, and shouldn’t, compile and run the code until that is fixed. But you shouldn’t have to keep all your limitations in a completely separate place (the test). You want to marry your limitations to the code; the limitations are the code, and types do this beautifully. I’m not saying all we need is types, but they do successfully express limitations right inside the code itself, as part of the code.
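As a hedged sketch of that marriage (the status values and function names are made up for illustration), the same limitation can live in a separate runtime check guarded by a test, or it can be the type itself:

```typescript
// Limitation kept outside the type: the code accepts any string, and a test
// somewhere else is responsible for asserting that invalid values are rejected.
function setStatusUnchecked(status: string): void {
  if (!["draft", "published", "archived"].includes(status)) {
    throw new Error(`invalid status: ${status}`);
  }
  // ...
}

// The same limitation married to the code: the allowed values are the type,
// and an invalid value cannot even compile.
type Status = "draft" | "published" | "archived";

function setStatus(status: Status): void {
  // ...
}

setStatus("published");  // fine
// setStatus("deleted"); // compile-time error: not assignable to type 'Status'
```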
How much farther can we take this? What empowering limitations lie beyond types? What other limitations should we be able to express right in the code, limitations that would make it more expressive and allow even more automatic, compile-time verification? These are the questions we should be asking ourselves. What other questions should we be asking ourselves? What ways have we missed to make code fundamentally easier to write and read accurately?