Loren Segal: Too Lazy to "Type"
gnuu.org

The point of the article is that virtually all real Ruby programs could be transformed, after the fact, into statically typed programs.
The significant point missed is that I can start typing the program without having figured out what the types will be. After the fact you can do the translation, but postponing having to work out not immediately relevant details is huge for developers.
Except for short scripts I generally start programming by figuring out what the data will look like. In a strongly typed language I have a built in language for describing my data. Of course the data structures change as the program evolves but I find it massively helpful to have that description up front.
What's that quote? Something like:
"Show me your algorithm and I will remain puzzled, but show me your data structure and I will be enlightened."
Exactly.
Too often, I think, we're focused on the writing of code rather than the reading of code. Annotations sometimes seem repetitive, but repetition can aid readability.
Type inference might be a very nice compromise. Haskell allows you to annotate with type information, and that may provide more helpful errors, but usually it's optional. But people do it anyway, because type information tells a reader so much about the program.
If we just focus on writing, then clearly the programming language is meant only for unidirectional communication from a human to a computer. But if we focus on reading as well, then it becomes about humans communicating with other humans.
Type inference (Haskell, C#) can be handy. Optional typing (many Schemes/Lisps, e.g. Gambit-C, SBCL or Clojure) is also an option.
I see your point, but dynamic typing is not the only way to get (most of) these benefits, and both of the above show significant performance boosts.
It hasn't gotten much hype lately, but I really think optional typing would be the best of both worlds here. You can write your programs without thinking about types, and then add types for safety/speed.
With a little bit of type propagation I think it would lead to quite a natural style of programming.
C#/CLR hints at this from the other side with the dynamic type, but for syntactic and cultural reasons it won't be the same thing as a dynamic language that lets you add types.
Also, Clojure type hinting is not optional typing at all, since it doesn't enforce types. It's purely for performance reasons, and I really think that's the wrong way to go about it.
I once read a discussion among Lispers about the Qi language. One (actual) Qi user told us he had managed to prove that the number 42 has type String.
This is the price of optional typing: it leaves random holes in your program.
This paper presents an optional type system where type errors can only occur in the untyped sections:
http://homepages.inf.ed.ac.uk/wadler/papers/blame/blame-sche...
Depends which developers you're talking about. For the developers writing the code in question? Sure. For the developers trying to use that code? ...well, you're going to have to tell them about the types, or hope they can read your code-- the latter isn't really a win at all.
The most exciting thing about types is that they can completely prevent you from typing that program.
If you cannot figure out the proper types, you won't write a proper program. You "fail early".
The author makes an interesting point about how we can overuse dynamic programming when statically-typed programming can pretty much get us where we want to go. I've wondered why we haven't seen a big contender to Ruby/Python/PHP/Perl that has good type inferencing, but maybe that's what Mirah hopes to do? As much as I love Ruby's runtime code manipulation, I'm not really certain what the heck it's good for in the areas where it's largely used, and why it can't be replaced by code that's verifiable. If a language like ruby lost its type dynamism tomorrow and had it replaced with a type inferencer and all the programs that needed trivial type coercion just worked, what exactly wouldn't continue to work? What is it that's so huge that requires the dynamic aspect, especially in web design?
This is something that has bothered me about my Ruby usage. A lot of the time, I feel like I'm writing "dynamic" code that really just needs to be pre-processed.
class Cool
  [:ice, :soda, :snow].each do |item|
    define_method(item) do
      # some code
    end
  end
end
There isn't a real reason why that couldn't be pre-processed. I don't know a ton of Java, but the idea of compile-time code generation is appealing. Does anyone know more about this?

Solving this problem in general is a major area of research. Look up "Partial Evaluation."
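To make "pre-processed" concrete: a partial evaluator (or a compile-time macro expander) could run that define_method loop ahead of time and emit the equivalent static definitions. A hand-expanded sketch of what the output might look like:

```ruby
# What the define_method loop could expand to after pre-processing:
# three ordinary, statically visible method definitions.
class Cool
  def ice
    # some code
  end

  def soda
    # some code
  end

  def snow
    # some code
  end
end
```

The behavior is identical, but a static type checker (or a reader) can now see all three methods without executing any code.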
Yes! I second this. Partial evaluation is especially compelling stuff for improving efficiency. I'm glad other people on HN are into it.
That code is only run once per application lifetime. Without violating DRY, I don't know what you mean by 'pre-processed'. Do you care whether it runs at parse time instead of at class-definition time?
Pre-processed here likely means what you think it means. DRY is not really a concept meant to be applied to method definitions (IMO), but rather method bodies-- of course there are many ways to abuse the interpretation of DRY as well. As far as whether you should care about compile vs load time-- in Ruby you don't need to, but if the language had static typing, it would make a difference.
I think the reason is merely that compile-time metaprogramming hasn't been adopted in any mainstream programming language (yet).
Boo can do cool things like that though:
http://bamboo.github.com/2010/07/11/boo-meta-programming-fac...
> Another fairly interesting paper titled "Evaluating the dynamic behaviour of Python applications", shows that programs in dynamic languages with these runtime modification behaviours often stop modifying their behaviour after a certain amount of "load time".
I find that deeply reassuring, since that's the core idea behind Magpie, the language I'm working on. It's a dynamic language with a static type-checker. The idea is that it runs dynamically at load time so you can imperatively and dynamically build your types and then after that, it statically checks the results, then invokes main() and (presumably) runs in a more or less static fashion after that.
If these dynamic languages are usually used in a mostly-static manner, just without the typing and some added load-time flexibility, doesn't that just mean they operate at a higher level of abstraction? Isn't that a good thing?
Your definition of good may or may not include "do these higher level abstractions preclude, in theory and practice, well-known and cheap optimizations?"
Depends if you care about "code fast" or "develop fast".
It would be cool if a language could be both, by using this research to create a language that recognises the difference between load time and run time. The dynamic parts would be used to load the program, after which dynamic changes would be turned off, allowing function dispatch or whatever it is that is slow in a dynamic language to be sped up.
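As a toy sketch of that load-time/run-time split in today's Ruby (the Plugin class here is invented for illustration), you can approximate "turning off" dynamic changes by freezing a class once loading is done:

```ruby
class Plugin; end

# "Load time": build the interface dynamically.
[:start, :stop].each do |name|
  Plugin.send(:define_method, name) { "#{name} ok" }
end

# End of load time: freeze the class so further modification is rejected.
Plugin.freeze

puts Plugin.new.start   # the dynamically defined method still works

begin
  Plugin.send(:define_method, :restart) { }
rescue => e
  puts "rejected: #{e.class}"   # a frozen class refuses new methods
end
```

A real implementation would of course go further, using the post-freeze guarantee to compile fast, static method dispatch; freeze only stops the mutation, it doesn't speed anything up.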
Basically, a (very!) enhanced preprocessor? That would definitely be useful.
For some definitions, yes, they are operating at a higher level of abstraction. However, abstraction isn't by itself a good thing.
As a heavy user of statically typed languages who is currently learning JS and Scala, can anyone with more experience with dynamically typed languages confirm or deny the point of the article? Scala, with its type inference, seems like it might be a happy medium between the two. Thoughts, anyone?
Type inference is nice and goes a long way, but it's still fundamentally different from being able to modify types imperatively at load time. For example, in Ruby, it's trivial to swap out a method with a version that does some logging, but only on Tuesdays. Scala can't really do that because it presumes all types are locked down before any code (such as code to tell if today is Tuesday) has executed.
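In Ruby that kind of load-time swap is only a few lines; the class and method names below are invented for illustration:

```ruby
require "date"

class Greeter
  def greet(name)
    "Hello, #{name}!"
  end
end

# Load-time decision: only wrap #greet with logging on Tuesdays.
if Date.today.tuesday?
  class Greeter
    alias_method :greet_without_logging, :greet

    def greet(name)
      warn "greet(#{name.inspect})"   # log to stderr, then delegate
      greet_without_logging(name)
    end
  end
end

puts Greeter.new.greet("world")
```

The point is that the `if` runs as ordinary code while the program is loading, and the class that results depends on its outcome, which is exactly what a lock-types-down-first language can't express directly.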
One of the points in the article is that this kind of behaviour modification is not done that often. Yea, it happens... but not as much as you'd think.
I don't think you have that right. I read it as that kind of behavior modification isn't done that often after load time. During load time, all sorts of shenanigans are going on (monkey patching, etc.)
Well, you're right about that distinction-- but from your comment "only on Tuesdays" I understood you were talking about behaviour modification some time after "load time". Also keep in mind the first paper cited in the article (profile-guided inference) actually points out that in most cases, "load time" can be inferred nearly statically-- as in, you would not need to run a test suite, you would only need to "load" the base code. This is a feasible task, in most cases.
For what it's worth, the main points of the article are fairly similar to our reasoning behind the Gosu language's design: type inference helps alleviate a lot (not all, but a lot) of the pain of having to write type annotations in languages like Java, and the addition of the ability to do some kinds of metaprogramming at load time (but not at runtime) gives you a lot of the benefits of the sorts of metaprogramming you traditionally do in a dynamic language like Ruby (again, not all, but a lot of the typical use cases).
The problem with java isn't static typing, checked exceptions, XML configuration, type erasure, lack of lambdas, lack of closures, EJB 1.0, 2.0, 3.0, J2EE, JSP, servlets, servlet containers, etc. The problem is ALL of those problems combined. It tries to be everything to everyone, and turns out to be just a pain in the ass to everyone.
Every year they redo the entire infrastructure trying to build something that doesn't suck. I gave up in 2003 and never looked back. Every time I see the code for a java web project, or talk to java devs about the issues they face, I want to throw up in my mouth a little. If the language is so great, why is half the code written in XML? If I wanted to write code in XML, I'd use XSL.
I don't use ruby because dynamic coding / dynamic typing is a panacea, I use ruby because most of the infrastructure is designed to do something useful out of the box with no configuration. If I want it to do something more, I configure.
I'd use F#/ASP.NET MVC/nhaml in a heartbeat over ruby/RoR if all the gems and things that make life easy were readily available. I really don't think ruby is that great of a language, but the gems, rails, etc are awesome enough that I'll put up with the things I don't like.
I'd prefer that people in the java mindset who think I'm too lazy to type keep thinking that and keep their mindset away from any API/gem/module/library I use. Please for the love of god keep thinking the reason I don't use java is that I'm too lazy to type.
JSRs, JCPs, reference implementations, TCKs, JDKs, J2EE, J2SE, J2ME, etc, they are wonderful, I'm so jealous. Unfortunately, I'm just too lazy to type to experience all the wonderful benefits of such technology, so you'll have to keep all that wonderful technology all to yourselves and away from my crappy dynamically programmed, eval'd, dynamically typed, slow & bloated code.
I wonder what the performance benefit of adding optional type hints to ruby or python would be.
Read the Starkiller dissertation (for Python): http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90....
The compiler gives enormous optimizations for certain (admittedly extremely specialized) numerically intensive programs / calculations. Of course, if the compiler was developed further (it might be, actually), you'd probably see optimizations for larger sets of programs.
'Too lazy' makes it sound negative. The fact that we can mine types from production runs can mean less work for us and a more fluid design style.
For what it's worth, I think run-time data mining has the potential to transform the industry.