A Bit of Heresy: Functional Languages are Overrated

benrady.com

49 points by kyleburton 16 years ago · 63 comments

devonrt 16 years ago

A really incoherent article that's wrong in a lot of ways. The title is misleading because the author's main beef is with the idea that functional languages are the "solution" to the concurrency "problem." He admits that he's tried to learn both Erlang and Haskell and given up because the languages are too hard, too complex, and "absolutely full of academic shitheaddery", and points to the reference manuals of Haskell and OCaml as proof (edit to add: Erlang is the opposite of "academic shitheaddery". Totally born and bred for business!). I might come off as an asshole for saying this, but if attempting to learn and then giving up on Erlang and Haskell is your only experience with functional languages, then you're not in a position to comment on them. Keep digging into Haskell until you run into a Monad Transformer stack a mile long, or spend some time with Clojure.

The author then conflates Actor based concurrency with functional programming in general. Let me lay my bias on the table right now: I'm sick to death of hearing about Actors. Erlang put them on the map and I think Scala made them popular, but Scala missed the point. Erlang's actors are only one piece of its distributed-ness; there's a lot more to Erlang that lets it scale as well as it does: a VM tuned for a large number of green threads, a transparent network stack, the supervisor model etc. Scala has none of these. Not only that, but actors themselves only solve one tiny problem in the world of concurrent (really, for actors, distributed) computing. They do nothing to address shared state. Finally, neither of these languages is really, truly functional.

If the author had titled the post "Actors are Overrated" I might have agreed with it.

  • Scriptor 16 years ago

    Completely agree. The author mentions that he tried to learn Erlang and Haskell (we don't know how hard he tried), points to the technical specs of Haskell and O'Caml, and then takes a side swipe at Scala.

    It looks like at the beginning of this year he had decided to learn Haskell as a way to explore functional programming: http://www.benrady.com/2010/01/language-of-the-year-2010.htm....

    I simply don't think being unable to learn a language qualifies you to bash it. It'd be more fitting for him to post about the first point in whatever Haskell resource he is using where he became confused.

  • nostrademons 16 years ago

    I think a more accurate title may've been "Things I don't understand are overrated."

jerf 16 years ago

The error here is confusing the language with the paradigm. He all but concedes that functional programming is a good idea and you can do it in current languages. True. When you do that, you have a functional program. The "functionalness" or "objectness" or "logicness" of a program is a characteristic of the program, not the implementation language. When you do object-oriented C, it isn't a "hack", it really is object-oriented C. You just don't have language support; you're doing objects by hand, with the corresponding disadvantages (syntactic pain) and advantages (no privileged concept of objects means you can do your own thing, which can be useful; want 'prototypes'? just do it! arguably easier in C than C++). When you do functional Javascript, it really is functional programming, you just don't have the same language support. When you refuse to mutate values and treat a language's values as immutable, even though the language permits you to mutate them, you have immutable values; the essence of immutable values is that you can depend on them not being mutated, not that the language makes it syntactically impossible to express mutation. Language support has obvious advantages if you want to use an immutable style, but it is not strictly necessary.

Proof: It all ends up running on assembler anyhow, which isn't functional or OO or logical or any other paradigm. All of those things are categories of programs, not intrinsic attributes of the environment.

The author isn't being anywhere near as contrary as I think he'd like....

  • cageface 16 years ago

    The title of his article is that functional languages are overrated and he specifically points out that functional techniques are available in other languages, so I don't think you can say he's confusing the two concepts.

    • CodeMage 16 years ago

      Yet he has a bias and it shows in his comments on Scala:

      And if you're working in a hybrid language, what assurances do you have, really, that your system is really thread safe? Indeed, the risk compensation effect might give you a false sense of security, giving you a system that actually has more concurrency problems than one written in a "dangerous" language.

      The phrase "hybrid language" in the original text is a hyperlink to Scala. By the author's own admission, the languages he has tried are Erlang and Haskell. He hasn't said whether he tried Scala, but the article seems to imply he didn't.

      The problem with what he said is that he seems to be dismissing "hybrid" languages like Scala as equally overrated or worse than pure functional languages like Haskell. Just as Jerf pointed out, it's the paradigm that matters more than the language. A language like Scala is designed to make the FP paradigm easy to use, without being a pure functional language. As far as I know, it's not designed to make your programs "thread safe". I believe the author wouldn't have made his claim about "risk compensation" if he had actually studied Scala and tried it out.

      • cageface 16 years ago

        The problem with what he said is that he seems to be dismissing "hybrid" languages like Scala as equally overrated or worse than pure functional languages like Haskell

        His point is that concurrency is something you still have to consider carefully and design in from the ground up and that no language is a silver bullet for this essentially hard problem. I think he's also probably right to worry that people will think that because they're using Scala and actors, for example, that they have automatically created a correct, performant concurrent application.

        That doesn't mean though that languages like Scala don't give you better tools to deal with these issues than Java.

      • loup-vaillant 16 years ago

        > Just like Jerf pointed out, it's the paradigm that matters more than the language.

    Note, however, that languages are extremely good at making us use their preferred paradigm.

yummyfajitas 16 years ago

"The languages are just too complex, too terse, and absolutely full of academic shitheaddery. I mean, seriously, what the hell."

The author links to the formal specification of Haskell and decides pattern matching is "academic shitheaddery"?

Is the formal specification of Java any better? http://java.sun.com/docs/books/jls/third_edition/html/gramma...

  • telemachos 16 years ago

    I was thinking of HTML 4 specs: clearly a formatting language that many people can use (with a reasonable degree of correctness), but the spec is not light reading. (The specs for HTML are no doubt far more comprehensible than Haskell's, but I would expect the formal specification of any language to be more painful to read than it is to work with the language. In my experience, that's the nature of the formality involved in a spec: in an effort to be maximally precise and detailed, they become less and less readable).

  • knieveltech 16 years ago

    Java (being rife with academic shitheadery itself) may not have been the best choice if you're trying to suggest the author doesn't have a point.

    • yummyfajitas 16 years ago

      I picked Java because a) I know where to find it, b) the author mentions Java positively and c) the C++ standard costs $18.

      Any formal language spec will look ugly and hard to read. That's just the nature of formal specs.

cageface 16 years ago

In this interview: http://www.infoq.com/interviews/armstrong-peyton-jones-erlan...

Simon Peyton-Jones, the father of Haskell himself, dismisses the idea that avoiding mutable state automatically buys you easy concurrency:

But it turned out to be very hard to turn that into actual wall clock speedups on processes, leaving aside all issues of robustness or that kind of stuff, because if you do that style of concurrency you get lots of very tiny fine-grained processes and you get no locality and you get very difficult scheduling problems. The overheads overwhelm the benefits,

There are some useful and novel ideas in the FP languages but they're no silver bullet. Whatever the conventional solution for concurrent programming turns out to be, it will have to be exploited in languages more accessible to the median programmer than Haskell.

  • loup-vaillant 16 years ago

    Beware: you say "concurrent", but SPJ was talking about "parallel". Not exactly the same.

    • cageface 16 years ago

      "Concurrent" is the word he uses repeatedly. What distinction do you make between "parallel" and "concurrent"?

      • loup-vaillant 16 years ago

        "Concurrent" is about observable behaviour. Like with a web server, serving many many requests concurrently. When SPJ is talking about concurrency, he is most likely talking about shared state with software transactional memory.

        "Parallel" is about optimizing an otherwise linear, or atomic program. Like map-reduce, which can be run on a single CPU or on a distributed cluster. When SPJ is talking about parallelism he is most likely talking about nested data parallelism.

        The distinction between the two can be made without ambiguity with the terms "task parallelism" (concurrency), and "data parallelism" (parallelism).

        • cageface 16 years ago

          When SPJ is talking about concurrency, he is most likely talking about shared state with software transactional memory.

          If you read the interview he is talking about the kind of implicit concurrency FP advocates often suggest you get for free with FP.

          I suppose Haskell initially wasn't a concurrent language at all. It was a purely functional language and we had the idea from the beginning that a purely functional language was a good substrate for doing concurrency on. But it turned out to be a lot harder to turn that into reality than I think we really expected because we thought "If you got e1+e2 then you can evaluate e1 at the same time as e2 and everything will be wonderful because they can't affect each other because it's pure."

          They had to add things like STM and explicit concurrency management because you don't get this for free just because you're doing FP.
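
The "`e1+e2`" idea SPJ describes can be sketched with GHC's spark primitives (a toy illustration, not code from the interview; `expensive` is a made-up stand-in). Purity makes it safe to evaluate the two operands in any order or in parallel, but as SPJ says, sparks this fine-grained tend to cost more in overhead than they save:

```haskell
import GHC.Conc (par, pseq)

-- A stand-in for some pure, costly subexpression.
expensive :: Int -> Int
expensive n = sum [1 .. n]

main :: IO ()
main =
  let e1 = expensive 1000000
      e2 = expensive 2000000
  -- `par` sparks e1 for possible parallel evaluation while
  -- `pseq` forces e2 first; purity guarantees the result is
  -- the same either way.
  in  e1 `par` (e2 `pseq` print (e1 + e2))  -- prints 2500001500000
```

Compiled without `-threaded`, the spark is simply ignored and the program runs sequentially, which is exactly why "free" parallelism turned out to need explicit granularity control.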

Zak 16 years ago

I'll speak to the languages I know here.

Haskell is a research language. It has become usable for practical purposes, but at heart it's still a research language. Haskell focuses on isolating side effects over practicality or ease of use. For certain classes of problems, that turns out to be the most practical thing. Want to be sure certain parts of your program don't access /dev/nuclear_missiles? Haskell is your language. Is your app mainly centered around a complex GUI? Maybe you should look elsewhere.

I notice no mentions of Clojure in the article aside from including it in a list of functional languages. Clojure is a practical language designed for getting stuff done. It offers the most comprehensive solution for managing shared state I've seen in any language and does its best to get out of your way.

  • nostrademons 16 years ago

    "Want to be sure certain parts of your program don't access /dev/nuclear_missiles?"

    Actually, if you want to make sure that parts of your program don't access /dev/nuclear_missiles, I think that E is your language. In Haskell, it's as easy as `unsafePerformIO $ readFile "/dev/nuclear_missiles"`, which at least warns you that it's unsafe but doesn't really give any assurances otherwise. And if you're in the IO monad anyway, you don't even need the unsafePerformIO.
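
The escape hatch is easy to demonstrate (a toy sketch; `secret` is a made-up name, and the file access is replaced by a harmless `putStrLn` so it runs anywhere):

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- 'secret' claims to be a pure String, but performs IO the first
-- time it is forced; the type system gives no warning at use sites.
secret :: String
secret = unsafePerformIO (putStrLn "side effect!" >> return "contents")

main :: IO ()
main = putStrLn secret  -- prints "side effect!" then "contents"
```

So the purity guarantee holds only as long as nobody reaches for `unsafePerformIO`, which is the point being made above.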

Robin_Message 16 years ago

If you think shared state is hard to scale and you are arguing for message passing, then what's wrong with Erlang? In fact, you can't conflate Erlang, Haskell, O'Caml and Scala together as FP.

By being immutable by default, FP makes message passing simpler and in some cases forces you not to use shared state, so FP helps you do concurrency that way. Closures are old news and are in almost everything now anyway (I'd count Java anonymous inner classes; they just have a nasty syntax, though it's good for grouping related methods, e.g. mouse event handlers).

Other than immutability by default and closures, what makes a language functional anyway? Because if you take the languages you suggested together, that's all I see them having in common.

  • yummyfajitas 16 years ago

    ...what's wrong with Erlang?

    Strings:

        1> [98, 114, 111, 107, 101, 110] == "broken".
        true
    
    Records are not real records, just compile-time labels placed on top of tuples. And the syntax: seriously, three different line terminators?

    Python and Haskell both make great go-to languages, and you can solve a wide variety of problems in them. Erlang is the language you suffer with when you truly need massive concurrency and distribution.

    • cageface 16 years ago

      Haskell seems to have too many rough edges to make it nice for real-world work. The inconsistent error handling (errors vs maybe vs either), weak record syntax, and ruthlessly strict stance on mutability don't sound like things I want to wrestle with on a large scale.

      Scala, on the other hand, seems to strike more reasonable compromises on these points and has all the java stuff to draw on when you need it.

      • kwantam 16 years ago

        > inconsistent error handling

        I don't understand how offering several choices is synonymous with inconsistent. If you really really want "consistency", establish a policy for your code and stick with it. All three have distinct uses, and each is better for certain things than for others.

        Anyhow, there's nothing particularly special about the Maybe or Either monads; you should be able to easily implement either of those yourself, and calling them a feature of the language---beyond the fact that they happen to be in the standard library---is somewhat specious.

        • cageface 16 years ago

          Error handling is one thing you want to have handled consistently across all code, including third party libraries. I may be able to enforce a convention in my own code but the typical app uses a dozen or more libraries.

          http://stackoverflow.com/questions/3077866/large-scale-desig...

          • Robin_Message 16 years ago

            Consistent error handling is needed if you are returning 0 for success and -1 for failure. Or was it 1 for success and 0 for failure? But there's nothing wrong with a type signature that precisely describes what you can get back. The only argument I'd have with Either is that it's not obvious which side is the error (although Left is the convention and baked into the monad), though in practice you'll be returning "Either ParseError ParseTree", which makes it rather obvious. Also, do-notation over the Maybe monad is just perfect for simple error handling, but sometimes you need to return more than None. Happily, you can switch to Either without changing a great deal. But you can't mix them easily, and that is a suckful thing about monads indeed.

            • cageface 16 years ago

              So what do I do if I want to write a function that takes another function as a parameter, and some of the possible functions use Maybe and some use Either? What would this code look like?

              • yummyfajitas 16 years ago

                f :: (a -> m b) -> ... -> m Result

                Maybe and Either are both monads, and by convention Left errCode is the failure case of the Either monad. This will work exactly as you think it should. It will also work with, e.g., IO actions that might fail.
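
A minimal sketch of that monad-polymorphic signature (the names `applyTwice` and the halving helpers are made up for illustration):

```haskell
-- Works for any monadic "may fail" function, regardless of
-- whether the failure type is Maybe or Either:
applyTwice :: Monad m => (a -> m a) -> a -> m a
applyTwice f x = f x >>= f

halveMaybe :: Int -> Maybe Int
halveMaybe n = if even n then Just (n `div` 2) else Nothing

halveEither :: Int -> Either String Int
halveEither n = if even n then Right (n `div` 2) else Left "odd number"

main :: IO ()
main = do
  print (applyTwice halveMaybe 8)   -- Just 2
  print (applyTwice halveEither 8)  -- Right 2
  print (applyTwice halveMaybe 6)   -- Nothing (6 -> 3, then 3 is odd)
```

The caller picks the error-carrying monad; `applyTwice` never needs to know which one it got.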

              • naradaellis 16 years ago

                If I understand you correctly, such a possibility can never arise in Haskell's type system. A possible signature would be (a -> Maybe b) -> a -> Maybe b, and this won't accept anything that returns an Either.

    • Robin_Message 16 years ago

      Okay, when I first saw that I thought it looked bad, like PHP or something. But then I realised strings are just [char], and that's not so bad. Compiling away the labels sounds like a good optimisation, especially given the wire format and evolvability. Three line terminators? Ugh, but no worse than "public static void".
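
Haskell makes the same representational choice, for what it's worth: String is just a synonym for [Char], so list operations apply directly (a quick illustration):

```haskell
main :: IO ()
main = do
  -- A string literal and the equivalent list of characters
  -- are the same value:
  print ("broken" == ['b','r','o','k','e','n'])  -- True
  -- Any list function works on strings:
  print (map succ "broken")                      -- "csplfo"
```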

      If Erlang is suffering, why has no one written a better front-end compiler with saner syntax or a static type system?

  • kenjackson 16 years ago

    The ironic thing is that in the HPC space, where they've done large scale message passing for decades -- the holy grail has always been large scale shared memory!

    Now admittedly the programming models in HPC were ugly (MPI), but nevertheless the lack of shared state and the use of message passing certainly didn't make it easy to write high performance parallel apps.

    I think the problem is fundamentally hard. And when a problem is fundamentally hard the solution to it often is "that other thing we haven't really tried yet". Until you've really tried it.

    • nostrademons 16 years ago

      A lot could just be that the requirements are very different between HPC and typical message-passing business apps. In HPC, you want to squeeze every ounce of performance out of the cluster, by definition. If you could get rid of the message-passing overhead, that would be a huge speedup.

      For most business processes, you don't care about performance all that much, you just want things to be easy to change. Message-passing works fairly well for that: it's easy to understand, composable, and lets you swap out one component for another as long as the interfaces are compatible.

      BTW, message-passing isn't exactly untried. It's the basis for the service-oriented architectures that underlie Google, Amazon, Facebook, and many other large businesses. It works very well for that problem domain.

      • kenjackson 16 years ago

        The message passing issue with HPC codes wasn't perf overhead of the messages. Rather it was that developing message passing applications was very complicated. But message passing apps, when written correctly gave very good performance.

        With that said, you're right. If you don't care about performance, or if you have very large grains of computation, then message passing is relatively easy (although so is just about any model with those requirements). The question is what happens when you actually do care about performance and your grains aren't large enough that communicating half-way across the planet is acceptable. When I'm trying to get 60FPS in my physics engine, I probably don't want to use a web service interface.

    • gruseom 16 years ago

      That's very interesting. Can you elaborate? I assume your point is that the HPC people were forced into message-passing by the lack of large-scale shared memory, but would have preferred the latter because they found it easier to program for? And that this somewhat contradicts our fashionable ideas about how to write concurrent programs?

      Also, is their situation analogous to multicore today? i.e. does many GB of RAM shared by many cores count as "large-scale shared memory" (or does it not, e.g. because of cache effects)?

      • kenjackson 16 years ago

        Pretty much what you describe is my point. That is why the common paradigm in HPC was that you used OpenMP on a node (shared memory model) and MPI across nodes. It's not difficult to do MPI on a node, but nobody wanted to do that. And there were several attempts to put OpenMP on clusters. The most recent one I know of being Intel (http://cache-www.intel.com/cd/00/00/28/58/285865_285865.pdf).

        I think it is still a little analogous. The main takeaway is that message passing doesn't make things easy. I've spent many days debugging message passing applications. You often trade in one type of problem for another. If anyone is interested in more detail, I can go into an example or two.

        • gruseom 16 years ago

          Yes, please go into an example or two.

          • kenjackson 16 years ago

            So here's a typical problem that I'd have in an HPC application. I'd have some space, represented by some 3D structure (maybe an array, or even an object-based particle system). I need to do some computation over this space -- often using some type of stencil, so in order to compute the value at <x,y,z> I need values of coordinates some distance from x,y,z.

            The part that often ends up being tricky is the fact that I need to send data from processor A to processor B. And I want to send as little data as possible. So one of the first sources of bugs is that when I do my gather-scatter I make a mistake mapping a value to a coordinate. In shared memory you never have to do this mapping back and forth, so it's not an issue.

            The next issue is related to the fact that I don't want to ever block waiting for data. There are a variety of models for handling this. I can do a non-blocking receive and do some work while waiting for the data to arrive. This is often another source of bugs, as people will do work that depends on the new data but chug along without it; they add the new data when it arrives, and alas, their computation is already hosed.

            And the last common error in this case is handing the data off to the wrong object (or processor) or being confused as to which data you're receiving at any given point in time.

            Now all of these can be handled by simply being careful, and using some good programming practices. But they are just simple, if not grossly naive, examples of issues you have with traditional message passing that don't exist in shared memory.

            • gruseom 16 years ago

              Interesting; thanks.

              What you're describing sounds to me like the complexity of ferrying data around and scheduling computations is being offloaded to the app programmer. Presumably the intent behind things like OpenMP on clusters is to take care of all that behind the scenes and let the user pretend that it's all shared and program accordingly. Is that correct? If so, how far would you say such distributed infrastructure has gotten to date? Is it usable for real work, or do people end up having to learn so many limitations and workarounds that they're no better off than programming against the lower-level model in the first place?

              Another question: even when there is shared memory you still have to coordinate the various processes that are operating concurrently on it so they don't clobber each other, and that, as everyone knows, is complicated too. So there is a tradeoff here. It sounds like your point is that given a choice, the HPC community would rather program against shared memory using traditional concurrency mechanisms (threads, locks, etc.) than deal with the complexities of the alternatives. Am I reading you correctly? If so, that's a pretty major point which suggests that the general-purpose programming community may be gearing up for a wild goose chase.

pavelludiq 16 years ago

I'm all for heresy. Even though I don't agree with the author, his heresy is useful; we must avoid programming religions. It at least helped me understand a cultural problem programmers have at the moment with FP. The downfall of OOP was that it was misunderstood by all the imperative programmers. Even though it was adopted, it was misused. The downfall of FP will be that it is misunderstood by all the OO programmers; if it gets adopted, it will get misused.

I never found FP hard to learn or use. I know now that it was because when I got introduced to it, I only had about a year of imperative programming experience and no OO experience at all. I was a rather fresh mind. My advice to all OO programmers willing to learn FP is to approach it with a fresh mind; it may save you a lot of headaches. Imagine you know nothing about programming. You may be surprised how close to the truth that is for some of us (including me).

dusklight 16 years ago

Why is our time being wasted with this article?

The author himself says that he is too dumb to understand functional languages. He knows so little that he keeps carping on about concurrency, when that is nowhere near any of the central reasons why functional is valuable. Better support for concurrency is an accident, a side-effect. What makes functional important is the ability to create living abstractions, and understanding how functional allows you to do that and why it is important makes you a better programmer in any language.

There is no heresy here, only ignorance. The author is basically saying functional languages are "overrated" because he is unable to understand them. There are definitely valid arguments to be made against the functional paradigm, but he has made none of them.

jcromartie 16 years ago

> I think the downfall of some of these languages is that they get wrapped up in being "functional" rather than focusing on solving a particular class of problem. They abandon convention and common sense for complex type systems and functional purity.

He's obviously not really looked at Clojure, then. The design philosophy of Clojure can be summarized as "functional programming is great; people also need to get stuff done."

kyleburtonOP 16 years ago

I don't think FP is all about concurrency - it's about more than that. In my experience it reduces the possible bugs in code (type inference / checking, referential transparency) - these aspects make concurrency easier to achieve, but that's not all there is to FP.

  • nirav 16 years ago

    Unfortunately though, FP is being sold as a panacea for concurrency problems, at least in the Java world.

    I think that FP is much more than that; it actually helps me solve problems in a very different and elegant way. I don't know how you classify this benefit, but for me it was analogous to choosing between recursion and a loop; while both can solve problems, recursion seems to be a much more intuitive and elegant way. Same for FP vs imperative approaches.

    • vamsee 16 years ago

      I'd agree with that, and solving problems in FP always made me think harder to get an elegant solution. It also means, though, that you're uncomfortably close to math. I think to fully exploit/understand FP, the programmer also needs to be good at understanding the mathematical models/formulas behind FP. Otherwise, I think he cannot truly exploit the potential of the tools he's using. I haven't spent enough time doing FP to justify that opinion fully, but whatever FP I did made me wish I had taken my math classes more seriously.

    • shasta 16 years ago

      I think it's more that shared mutable everything can't possibly scale in parallel, rather than that FP is a panacea.

dman 16 years ago

I would have loved this if the blog post contained more technical details and some code snippets about the tradeoffs involved. Functional languages cover a broad spectrum. There are often multiple concepts being explored in a language; then there is the issue of the cultural heritage of the respective communities, and the fact that FP languages have traditionally had multiple implementations with different tradeoffs. As an example: Lisp and Scheme are functional without being immutable. Scheme favors a small spec while Lisp includes a large number of built-in primitives. Clojure introduces persistent data structures, interesting new concepts like agents, and integration with the JVM. I know too little about Haskell and ML to make any informed comments about them, but they appear to espouse static typing, unlike Lisp/Scheme. So if you have a particular beef with a functional language, look further and you will find one that doesn't share those traits.

dkarl 16 years ago

The languages are just too complex, too terse, and absolutely full of academic shitheaddery. I mean, seriously, what the hell.

(Where "seriously" and "hell" are links to the Haskell and O'Caml language documentation.)

God oh god I wish every language I used could be specified as precisely as the Haskell example. I didn't bother figuring out what the notation meant, but if I used Haskell, I could afford the time to understand it. Seriously, Perl and C++ have equivalent complexities; you just aren't expected to understand them. Experienced programmers steer clear of unfamiliar constructs, which works well enough, but it would be so much nicer to actually understand stuff.

jeb 16 years ago

The future of programming is when both my mum and my little brother can write small code that does some particular task for them. Neither of them will ever understand functional programming. That's basically the answer to what paradigm will win.

  • singular 16 years ago

    The problem with this is that once you get beyond really, really trivial code for whichever landscape you happen to be working with (the landscape is defined by the design of the language: whatever it's designed to do easily), things get hard and you end up doing something difficult anyway. We label this task 'programming'.

    No silver bullet. Getting computers to run code correctly (for whatever version of 'correct' you subscribe to) is hard, there is simply no way round this.

    It's almost like saying 'the future of engineering is when both my mum and my little brother can build cars. Neither of them will ever understand mechanics.'

    When you think of it that way, the idea that programming is for everyone seems a little silly.

  • loup-vaillant 16 years ago

    > Neither of them will ever understand functional programming.

    What makes you think so? What fundamental complexity in the paradigm is out of their reach? Do you really think that the concepts of imperative and OO programming are simpler? Why?

  • CodeMage 16 years ago

    Do you honestly believe that? The history of programming is littered with failed attempts to do that. For example, does "Microsoft Access" ring any bells?

    The future of programming is definitely not to turn users into programmers.

    • paulgb 16 years ago

      It's true that none of those attempts have created a language that everyone can use, but many of them have lowered the barriers. You might not use Access, but a lot of people do. And a lot of people who can't grok the complexity of C++ can put together simple GUI apps in Visual Basic to accomplish simple tasks.

      I'm not suggesting that everyone will ever be able to implement quicksort or write a parser, but there is no fundamental reason that high-level programming will always be inaccessible to the masses.

    • loup-vaillant 16 years ago

      Oh yes it is. Users should be programmers. All of them. Just so they can write those little scripts. It's a question of independence.

      You probably wanted to say that the future is not to dumb down programming to the level of my "mom". I agree with that.

      • CodeMage 16 years ago

        Maybe I have a weird family, but my mom does not want to write any scripts. She just wants the damn thing to work.

        For her and many other people I know, it's a bit like driving a car. If you want to drive a car, you have to know certain basic rules and that's it. The guys who do maintenance and repairs are the ones who know what happens under the hood and you take your car to them whenever necessary.

        I also know lots of people who know what happens under the hood and love to tinker with their cars. I'm not one of them myself. I do that with computers, but not with cars. I don't see why computers should be a special case where everyone has to know how to tinker with the "stuff under the hood".

        • loup-vaillant 16 years ago

          First, what we want and what's good for us are two different sets of things. Especially when you don't have total information.

          Second, programming does not always mean tinkering with stuff "under the hood". Advanced spreadsheets are front-end and programming at the same time. Even this:

            $ cat * | grep "groceries" | sort
          
          is a program (though a rather trivial one). "Real" programs will still be professional and hobbyist stuff. But scripting can be everyone's business.

          Third, computers are fundamentally different from any other device: they are general purpose. They can process any information you care to feed them, in any way you care to program them to.

          Finally, when our moms want to do something the computer can't presently do, but could, they have 3 alternatives: give up, acquire a program that does the job, or program it themselves. For many, many jobs, acquiring a program is the only viable solution. But for small, specialized tasks, the existence of a dedicated program is less likely. So if our moms want the damn thing to "just work", they have no choice but to program it.

          Knowing how to use computers (and the internet) is becoming as important as knowing how to read. Because computers are general purpose machines, knowing how to program them is an important part of knowing how to use them. It's a big investment, but so is reading.

  • jcromartie 16 years ago

    This never works out well. Maybe you have some ideas for what will let average people finally "get it"?

    • loup-vaillant 16 years ago

      Teaching them in kindergarten. No kidding. We may stop at the basics, but every child of 12 should know how to make simple toy programs in a toy language, so they can feel that computers aren't magic.

      • jcromartie 16 years ago

        I did Logo in 3rd grade, and I can say it was a good thing for me. You might be on to something there. I have always felt that the first thing any computer education should entail is drilling the following into people's heads: computers are machines that only do exactly what they are told to.

      • paulgb 16 years ago

        I'm three years into a CS degree, and have studied computers at every level from the logic gates to operating systems to high-level languages. I'm still not convinced that computers aren't magic.

        • loup-vaillant 16 years ago

          Well, they are magic, in a sense: you scribble weird formulas on them, and that makes them do stuff. In every other way, they are not. Computers obey understandable rules. You don't need arcane knowledge to grasp the basics. It's not dangerous to know a little of the thing without dedicating your life to it.

          Way less scary than actual magic.

          • paulgb 16 years ago

            I agree fully. I think what's magical is that even though I understand computers at all these different levels, I'll never be able to hold all the parts in my head at once, so at any given time I'm left believing that some part of the system involves magic.

            In another sense, I think there's "magic" in the sense of "wonder". Even knowing how some piece of technology works, I find myself constantly in amazement and wonder that it does work.
