What killed Haskell, could kill Rust, too
(gist.github.com)

> As one might have guessed, this is not an essay. It's a transcript of the following talk by R. Martin with some substitutions made (SmallTalk -> Haskell, Ruby -> Rust, and others). You are free to make any conclusions from this.
No wonder it felt like the author was living in a different universe.
Title is trolling
> It's a transcript of the following talk by R. Martin with some substitutions made (SmallTalk -> Haskell, Ruby -> Rust, and others).
That talk is https://www.youtube.com/watch?v=YX3iRjKj7C0.
Another thread on this article: https://news.ycombinator.com/item?id=24449927
The reason why Haskell still doesn't really break through in the corporate world is not a failure of arrogance, but a marketing failure in the strictest sense of the word 'marketing'—which means prioritizing the right features to target the right market.
A large proportion of the talk linked below is based on the book “Crossing the Chasm".
The book was originally written for startups trying to break into the mainstream, and this talk adapts it to programming languages (or tools more generally), particularly regarding Haskell.
Gabriel Gonzalez – How to market Haskell to a mainstream programmer: https://www.youtube.com/watch?v=fNpsgTIpODA
Gabriel also suggests that in order to "cross the chasm" you have to offer a best-in-class experience in some area, and suggests that Haskellers focus the language on building interpreters, which it is particularly well suited for.
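To give a flavour of why (this is my own minimal sketch, not from the talk): a toy expression interpreter in Haskell is only a handful of lines, because algebraic data types and pattern matching map directly onto the shape of a language.

    -- A toy expression language and its evaluator.
    data Expr
      = Lit Int
      | Add Expr Expr
      | Mul Expr Expr
      deriving Show

    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    main :: IO ()
    main = print (eval (Add (Lit 1) (Mul (Lit 2) (Lit 3))))  -- prints 7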
So, based on the above, I don't think the OP's emphasis on the emotional analysis of Haskell programmers is anything more than the OP's just-so story, although the notion of parochialism may have some crossover with the concept of marketing.
It seems strange to consider Haskell as dead when its motto was "Avoid success at all costs". Haskell wasn't popular in enterprise settings because it didn't want to sacrifice being a good language for being a popular one. Haskell was/is very successful if you judge it on its own terms. Now, that goal can be debated; it could be argued that an unpopular language is never that good because it is unpopular, but that's a different argument.
> it didn't want to sacrifice being a good language for being a popular one
I think this is the key. The motto that you mentioned is often clarified/refined as something like "avoid 'success at all costs'".
Exactly. Haskell has explicitly been run and planned around not being successful because real production usage and success will require the maintainers to limit breaking changes, for example, which defeats its purpose as a language for experimentation and research.
This. It's the next ALGOL. Let the next dozen popular languages cherry pick the good parts from it and don't worry about people using it directly.
It’s bittersweet to vaguely know a good language I’ve never been able to use professionally.
I would say tooling is the main reason Haskell is not adopted much, even in personal projects. The package manager used to be pretty bad, and poor editor support is still the biggest obstacle for newcomers. Combine that with the fact that students are taught a procedural-style language in school and you have a small percentage of newcomers looking at it. Rust has an advantage here since it is a procedural-ish language.
But these problems require money and expertise and marketing from people who already are in the field, so a bit of chicken-egg problem.
Arrogance is the last thing I would say about the Haskell community, and I have interacted with them on Reddit and IRC. So not sure about that.
The thing about ignoring enterprise needs is that the enterprises need to invest in the things they want. Those that do contribute are using haskell in enterprises that fit their needs. Otherwise it is primarily a research driven language.
It's not very surprising that Go/Java is doing very well, looking at the investment done by the companies behind it.
I think Haskell tooling is pretty good now. VSCode with the Haskell plugins is simple to setup. I used to use Emacs for Haskell, but the VSCode support is now so good that I just take the easy route.
Stack and stackage made package management much easier for me. I am an enthusiastic, but not great Haskell programmer so I like running the ‘hlint’ linter program for hints on improving my code.
My heart is really with Common Lisp, but Haskell is also a pleasure to use and the tooling seems much better to me than five years ago.
I think Haskell versioning is highly bifurcated.
You mean like SemVer vs the default whose name I can't remember?
I mean like installing and managing multiple versions of the compiler toolchain is painful.
I mean, it's all pretty well automated by stack, the only painful part is that it can take a long time and use a lot of storage.
Sure it's pretty well automated, but not a great experience.
Is there something about ghcup that doesn't do the job for you? You ask it to install a particular version of GHC and Cabal for you. It does it. Multiple versions can exist together. Job done.
the default is pvp [1] and I think most packages use that and not SemVer? or, at least, I can't remember off the top of my head of specifically pinning a package with SemVer.
I pick Haskell back up every few years for a new project or two, and it seems like I have to play whack a mole to figure out which editors actually work and can do something more interesting than syntax highlighting. It’s not uncommon for an editor to just stop working, which is usually when I start asking myself why I didn’t just go ahead and use Rails instead.
Is it so hard to find Leksah?
I’m far from an expert Haskell programmer, but I have written it for money and had the code run on big fleets.
Haskell is really cool, but IMHO is held back from practical adoption by two things, one technical, one cultural:
- Lazy by default maps weirdly onto von Neumann machines. Laziness is cool but doesn’t pull it’s weight as the default. Performance in time and space is even harder to reason about than it already is on modern architectures, and debugging is...different.
- GHC extensions are like crack for the kind of people who overdo it with C++ templates. Pragmatic, legible Haskell in the RWH tradition isn’t safe from the type-theory zealots even in a production system. Cool research goes on in GHC, but it’s hard to keep everyone’s hand out of that cookie jar in prod.
I've briefly looked into the implementation details of lazy languages and yeah, while that Spineless, Tagless G-Machine is a pretty neat thing, it results in rather unorthodox code representation.
And yes, reasoning about memory consumption and having each library give you 2 versions (strict and lazy) of their data structures is tiresome. Why can't strictness/laziness be parameterized away?
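To make the "two versions of every structure" point concrete, here's a minimal sketch of my own (assuming only the standard containers package): the same map API is shipped twice, and the only difference is whether stored values are forced or left as thunks.

    import qualified Data.Map.Lazy   as ML
    import qualified Data.Map.Strict as MS

    -- Both builds compute the same histogram; the lazy one leaves a chain
    -- of (+1) thunks in each value, the strict one forces them as it goes.
    -- Same API, different space behaviour.
    histLazy :: [Char] -> ML.Map Char Int
    histLazy = foldr (\c m -> ML.insertWith (+) c 1 m) ML.empty

    histStrict :: [Char] -> MS.Map Char Int
    histStrict = foldr (\c m -> MS.insertWith (+) c 1 m) MS.empty

    main :: IO ()
    main = do
      print (histLazy   "mississippi")
      print (histStrict "mississippi")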
Whole opinion piece is one long mixed metaphor. The programming styles of Haskell and Rust couldn't be more different. Fails to even mention the generational differences in language design that have occurred since the 1990s. Haskell was inspired by Miranda, Rust has its influence from C++ and Standard ML languages.
So what killed Haskell is the parochialism, the inability to address the needs of the Enterprise.
I don’t think so. What killed Haskell in mainstream programming is its weird syntax. Rust looks like C so it passed that first sanity check. People hate that something so aesthetically basic could define the success of programming languages, but it seems pretty clear at this point. Languages have to basically look like C in order to be popular.
What killed Haskell is its obsession with monad-as-burrito tutorials that had nothing to do with getting anything useful done, as well as considering writing a PDF in Latex as an acceptable substitute for a blog post or documentation.
Nothing in Haskell was sufficiently interesting to justify overcoming the conceptual barriers and different culture.
The issue with monad tutorials is that they start at the top - trying to impart a generalizable understanding of the concept (which, short of category-theoretic explanations, requires hopelessly leaky analogies) instead of focusing on the purpose and usage of specific monads.[1][2] Abstract concepts must be first made concrete in order to understand them; in the case of monads, it's best to just look at the type signature for specific monads' "bind" (>>=) functions, as well as examples of usage, while actually using them in (permissively typed) code, rather than trying to connect burrito analogies to real life.
[1]:http://dev.stephendiehl.com/hask/#eightfold-path-to-monad-sa...
[2]:Regular expression tutorials, by contrast, virtually never attempt to explain regular languages or automata theory, which is why nobody complains about having to learn formal language theory in order to use `sed`.
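For what it's worth, here is the kind of concrete starting point being described above: (>>=) written out by hand for two specific monads, with small usages (the helper names below are made up for the example).

    -- (>>=) specialized to two concrete monads, no analogies needed.
    bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
    bindMaybe Nothing  _ = Nothing
    bindMaybe (Just x) f = f x

    bindList :: [a] -> (a -> [b]) -> [b]
    bindList xs f = concatMap f xs

    -- A step that can fail: halving is only defined for even numbers.
    half :: Int -> Maybe Int
    half n
      | even n    = Just (n `div` 2)
      | otherwise = Nothing

    main :: IO ()
    main = do
      -- Maybe: chain steps, short-circuiting on the first Nothing.
      print (Just 100 `bindMaybe` half `bindMaybe` half)                  -- Just 25
      print (Just 100 `bindMaybe` half `bindMaybe` half `bindMaybe` half) -- Nothing
      -- List: apply a function to every element and concatenate results.
      print ([1, 2, 3] `bindList` \x -> [x, x * 10])                      -- [1,10,2,20,3,30]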
A surprisingly digestible (and concrete) explanation of monads (Ch 2): http://www.cse.chalmers.se/~rjmh/Papers/arrows.pdf
This
They always go for the mathematical way of explaining stuff (this is not a compliment). Programmer English please, not Alien Math. (And I'm saying that as someone who likes Alien Math in general. I just don't see the point of using it when programming.)
Ok cool, your programming language now has two worlds, the functional world and the imperative world and they have two different syntaxes and you have to pass your world as an extra parameter and etc etc
And I personally find the syntactic sugar built on top of it confuses more than helps (ok, some of it is just Haskell being Haskell, but it confuses)
Speaking of alien math. I always thought Haskell looked like the countdown sequence in Predator when the Predator sets the self-destruct sequence. It was very off-putting and scary when I first saw it.
Here is another reason it failed... I have a CS degree and I barely understand what was said here. Just being honest... maybe I just need more coffee.
Purely anecdotally, I've found haskell-centric spaces to be pretty hostile, which definitely did not help getting me interested.
Interesting. Would you mind expanding your anecdote, explaining which spaces and what you found hostile? Knowing that would help me understand how to improve the community. Thanks.
/r/programming, /r/haskell, this site, interactions with real people at Haskell meetups and normal programming meetups.
I'm very sorry for your bad experiences.
If you are comfortable giving some examples of that bad behavior and what it looked like I'd be happy to watch out for it and call anyone on that bad behavior so others don't have to experience it.
I just noticed this comment. I appreciate your and tome's obvious concern for the community you're a part of, and I hope very much that you both continue to enjoy it and thrive. I mean you no ill-will and wish you best of luck.
> Rust looks like C
I program in C for a living, and no, Rust doesn't look like C. In fact, one of the things that keeps me away from learning Rust is that it is so complicated in terms of syntax.
Today I was reading this HN entry (https://news.ycombinator.com/item?id=24404628) and there is a little Rust code snippet towards the end, and I was thinking "this is so fck'd up, I'll never grasp Rust".
> Languages have to basically look like C
I wish they did.
Rust has more syntax than other languages, but it's a cleverly designed language: every bit of syntax has one unique, clear meaning. Every bit of Rust syntax is intentional, and the official Rust documentation helps explain the reason behind it, with the reason usually being "the programmer is forced to make a choice that is usually implicit in other languages, and we want the code to explicitly show the choice the programmer made".
I love the intention and the motivation behind the reasons they choose to define new syntax, but in what syntax regards, I tend to prefer less, not more. To me, things aren't clearer by adding more text or context.
"...perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away..." - Antoine de Saint Exupéry
You're overthinking it. Rust's syntax is not that complicated, even if it looks a bit scary before you read the book. For C programmers the real hard part is in semantics of references, which C programmers mistake for general-purpose pointers, and gloss over ownership.
Rust has an intentional well thought-out approach to what it makes explicit[1], which is useful in a language that is focused on control, correctness, and performance. It's not trying to make source code beautiful, it's trying to balance usability of the language with avoidance of surprises caused by implicit compiler magic.
[1]: https://boats.gitlab.io/blog/post/2017-12-27-things-explicit...
> Rust is ... so complicated in terms of syntax.
> there is a little Rust code snippet towards the end, and I was thinking "this is so fck'd up, I'll never grasp Rust"
After you mentioned it, I had to try to read it, and, not being a Rust programmer, I'm surprised that there was a need to "clone" the "references" to the single value which would be accessed by each spawned thread using the "Arc" "reference counter."
A question for those who really know Rust:
As it is a single value, in C or C++ I could do the atomic access without any "cloning" and doing "reference counting" with something like "Arc" and it would still be safe. What am I missing? Was it necessary in Rust or was it an example of a demonstration that did more than actually needed?
For me, it's always more than just syntax that is to understand.
In C++ terms, Arc is a shared_ptr. clone bumps the reference count up. You only need to do that if you want another owner. If you don't, you can take a regular old reference to the contents of the arc.
The core issue here is that it's not clear (from the snippet, but also to the compiler due to the type signatures) when the threads join. If they're joined in the same scope, they could use "scoped threads" instead, which would remove the need for the arc alltogether.
> it's not clear (from the snippet, but also to the compiler due to the type signatures) when the threads join.
Thanks. Assuming the join indeed immediately follows the snippet, what were the needed changes in the type signatures in this case? I’d like to know the “good” and “idiomatic” example (which avoids doing more than needed) for that example.
I kept the variable names similar to hopefully make it easy to see the transformation. Here, instead of using thread::spawn, we use a scoped threadpool. Effectively, rather than saying "take this closure and run it on a thread" like thread::spawn does, this uses "scoped.execute", which is like spawn, but ties the lifetime to the variable "scoped". This lets the compiler understand that all of these threads will be joined inside the given scope, and so it's able to grok the lifetimes. That variable is zero-sized and so will compile away to nothing.

    use scoped_threadpool::Pool;
    use std::sync::Mutex;

    const N: u32 = 3;

    fn main() {
        let mut pool = Pool::new(N);
        let data_mutex = Mutex::new(vec![0, 1, 2, 3, 4]);
        let res_mutex = Mutex::new(0);

        pool.scoped(|scoped| {
            for _ in 0..N {
                scoped.execute(|| {
                    let mut data = data_mutex.lock().unwrap();
                    let result = data.iter().fold(0, |acc, x| acc + x * 2);
                    data.push(result);
                    *res_mutex.lock().unwrap() += result;
                });
            }
        });

        println!("{:?}", res_mutex);
    }

(You still need the mutexes because multiple threads are writing to the same variables at the same time; imagine if we didn't push the result onto the vector; we could drop the mutex around it entirely, which would simplify things even further. See the example here, where because we are accessing disjoint parts of the vector, we can use no mutexes at all: https://crates.io/crates/scoped_threadpool)
Thanks!
> You still need the mutexes because multiple threads are writing to the same variables at the same time
I understand that for the "data" for which the push exists. Do we however need it for res_mutex when the value that we want to update is a single (I guess integer) variable for which atomic add could be performed? Is there something like atomic add?
Oh, duh, yes. One line added, two lines changed; I've left them at their indent levels and left the variable name the same:
    use std::sync::atomic::{AtomicU32, Ordering};

    let res_mutex = AtomicU32::new(0);

    res_mutex.fetch_add(result, Ordering::SeqCst);
I think the snippet is part of the example code for Mutex. Here: https://doc.rust-lang.org/std/sync/struct.Mutex.html
After "It is sometimes necessary to manually drop the mutex..."
Ah ha! I should have recognized it, given that I'm pretty sure I wrote that, haha.
Incidentally, this is one reason why I've been advocating for a return of scoped threads to libstd; I can't change this example to be the "good" one, because we don't refer to external packages and we don't have scoped threads built-in.
Clojure does not look like C and requires some pretty tough mind-bending, especially if you have been programming for 30+ years like I did when I started. Still, more people use it than Haskell. But it has an enterprise-friendly environment (all Java or JS libraries, or both) and a super-friendly community.
Come to think of it you could take the arguments made here and replace Haskell with Perl, and then also replace Rust with Haskell and it would still make more sense.
I worked with a perl backend for years. The frame work was very good but perl was horrible. I hate the syntax and really hate having to reference and dereference. It is not that hard to do but it's just so annoying.
Perl was mainstream and exists in many corners. Haskell was never mainstream.
except that 15 years ago, Perl was everywhere
I disagree that syntax killed Haskell, but I agree that the Haskell syntax is particularly bad.
First, it’s the first language I’ve used in any capacity where I can’t mentally map the syntax. The orders of precedence are way too complicated, and the end result is that I end up blindly throwing $’s at the code until eventually it works. I have no idea why it has to be this hard.
Second, indentation. Seriously, indentation. Haskell is one of those accursed languages, along with Python, where indentation is apparently just utterly broken. I have yet to see A Haskell editor that can offer much more than the ability to cycle through the possible indentation levels given the context. It’s maddeningly bad, and often most editors can’t clean it up after the fact either.
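On the first point, for reference: ($) is nothing magical, just function application at the lowest possible precedence, so every use of it only ever replaces a pair of parentheses. A minimal sketch:

    -- ($) is ordinary function application with the lowest precedence,
    -- so these two definitions are exactly equivalent:
    withParens :: Int
    withParens = length (filter even (map (* 2) [1 .. 10]))

    withDollars :: Int
    withDollars = length $ filter even $ map (* 2) [1 .. 10]

    main :: IO ()
    main = print (withParens, withDollars)  -- (10,10)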
Does Python look like C? It doesn't to me, but w/e
This is exactly what happened to me: I did not take a further look at Haskell just because its syntax seemed so weird.
That's like saying you don't want to learn things you don't already know.
With that kind of attitude, you're not going to learn very much. You should take the opposite approach: be more interested the weirder the thing is.
Being interesting is not enough when you are picking a language for your next major project.
I've said as much there in the comments already, but this gist doesn't have many useful insights.
The problem is that highly valuing "first-principles thinking" is wrongfully attributed to arrogance and seen as "ignoring truths from the 'dirty' mainstream".
After scanning through the linked gist, it looks like the author was trying to discuss your objections in good faith but you kept on trying to provoke him.
That kind of behavior is borderline toxic, and I hope it's not representative of the Haskell community.
I was trying to reason with the author in good faith as well. Part of that means questioning large claims with no evidence.
If you're going to accuse me of being toxic, please be explicit with quotes and what specifically was toxic rather than unconstructive allegations.
Well, you argued back-and-forth on the gist for a few hours, wrote a comment on HN disparaging the gist, and then begin arguing with another random, unaffiliated commenter about your behavior in the gist. That's far from constructive and, I think, rather close to toxic.
Let me rephrase what you wrote: "Someone had a long conversation on the Internet about a topic that good hackers would find interesting, came to a conclusion, and then shared that conclusion with others."
That's not just not toxic, but is the whole purpose of this website.
Are you saying it's toxic that I argued with multiple people in multiple places to verify whether their claims were true or false?
>Some of you might remember the Reddit discussions in the mid 2000s. A bunch of people were there. And they were talking about cool math things there. In those talks, they often were snickering about other languages like Go.
Golang was publicly announced at the end of 2009.
My personal experience with Haskell hackers went like this:
-I used to hang out at lambda-the-ultimate, in which haskell was considered a godsend and c++ a toy language. At the time, I was heavy into writing simulation software, which meant c++ all the way, so I tried to understand haskell and its advantages over c++, but my attempts ended up being mocked. I stopped visiting that site.
-I tried to open some debates about Haskell on reddit, where someone told me they will answer my haskell questions only if I told them a specific mathematical definition on some properties of functions. I didn't know the answer; I was a programmer after all, not a mathematician. Said person left me out in the cold.
-again on reddit, I was trying to debate the usefulness of haskell regarding simulation software, which relies heavily on updating variables, but I never got straight answers. To this day, I still do not know if haskell code can actually update variables in place or if it only simulates updating of variables. Someone suggested using lens, and although I understand the abstraction, it still has not answered my question of whether haskell can update variables in place.
Why I was so interested in this aspect, i.e. in place updating? I wrote simulation software and after that game software for a living, and it matters to me because I want to be able to reason about the performance of my program. I don't want to have data be duplicated behind my back.
To cut the long story short, my haskell questions go unanswered to this day, I was very disappointed, and since I have great experience in imperative programming languages, I usually write programs that are correct as soon as they are compiled, which is what the haskell advantage is supposed to be. So I don't see any benefit from haskell, and I won't recommend it to my company or my colleagues.
Am I wrong regarding haskell? perhaps, but I am not interested any more in 'trying out' languages of dubious gains, nor am I interested in dealing with juvenile behaviors. I tried Rust because it had a serious advantage over C++ (namely, object lifetimes), and I will recommend Rust, but my patience for and interest in haskell is virtually non-existent at this point.
I personally haven't had the same experience as you (I personally find the Haskell community quite pleasant, especially in the functional-programming slack server), but I'll try and answer (probably in an incomplete way) your question about updates.
Haskell can, in fact, update variables. Mostly through 2 mechanisms:
- ST (a computation containing local, mutable state, that cannot escape its scope) [1]
- IORef (mutable, thread-safe variables that only work in IO). [2]
The other (and usually more common) way of doing "mutable" state in Haskell is State, which technically doesn't update the variable in-place, but simply modifies the variable and passes a copy to the rest of the computation (although, as far as I'm aware, a lot of the time this step gets optimized away).
That said, if your main field of expertise is simulation programs where performance and space efficiency are very important, then Haskell is probably not a great fit (cause not only is it based on a GC, it's also lazy, which can sometimes mess with the performance of your code, not speed-wise but memory usage-wise). Hopefully this could be mitigated in some way in the coming years when Linear types become viable for efficient resource usage (Linear types ideally could grant us some sort of Rust-like resource management)
[1] https://hackage.haskell.org/package/base-4.14.0.0/docs/Contr...
[2] https://hackage.haskell.org/package/base-4.14.0.0/docs/Data-...
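A minimal sketch of both mechanisms side by side (only base is assumed; the function names are mine):

    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, modifySTRef', readSTRef)
    import Data.IORef (newIORef, modifyIORef', readIORef)

    -- ST: local mutation that cannot leak, so the function is pure from outside.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      ref <- newSTRef 0
      mapM_ (\x -> modifySTRef' ref (+ x)) xs
      readSTRef ref

    -- IORef: an ordinary mutable cell, usable only inside IO.
    main :: IO ()
    main = do
      counter <- newIORef (0 :: Int)
      mapM_ (\_ -> modifyIORef' counter (+ 1)) [1 .. 10 :: Int]
      n <- readIORef counter
      print (n, sumST [1 .. 100])  -- (10,5050)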
As one who wrote a couple hardware simulators in Haskell and otherwise, I have to disagree with you.
The very immutability of Haskell values means one has to use transactional logic - given current situation, compute situation at next step. C++ and many other languages tend to update values in-place and that brought me a lot of bugs to debug and, subsequently, made me use Haskell.
Laziness plays a critical role in free composition of parts of the system.
Let me present you some examples.
Single clock domain computing hardware can be thought of as computations that compute values for every rising edge of a clock. It can be described as a function from an infinite list of inputs to an infinite list of outputs. The adder, for example, is just like this:

    adder :: Num a => [a] -> [a] -> [a]
    adder = zipWith (+)

The register is a thing that produces values remembered from the previous clock cycle. The very first value comes from reset:

    register :: a -> [a] -> [a]
    register = (:)

The Mealy machine allows you to apply a function that transforms input and internal state into output and next internal state. Mealy machines can be used for description of all kinds of things, from register files upwards.

    mealy :: ((input, state) -> (output, state)) -> state -> [input] -> [output]
    mealy f resetState inputs = outputs
      where
        -- here comes shortcircuiting that relies on laziness:
        outputsAndStates = map f (zip inputs (register resetState states))
        outputs = map fst outputsAndStates
        states  = map snd outputsAndStates

That's it!

Using regular map and other list functions and these two additions, one can simulate single clock domain hardware, which amounts to almost anything that computes on silicon - within bounds of approximation; basically, one needs to add delays for slower hardware somehow.
The trick with shortcircuiting above allows one to freely compose hardware simulation from different parts. You just put blocks there and they start to work. The function in Mealy machine is pure and total and can be tested (or verified) thoroughly in standard simple way.
From what I read here:
https://stackoverflow.com/questions/57489844/how-does-readio...
IORef does not allow a value to mutate, it allows a pointer to a value to mutate.
Am I correct?
Does the ST Monad work in the same way? or does it truly allow values to be updated in place?
IORef is a pointer to a value, and that pointer can be changed. IORef holds a pointer to a (boxed) value because most Haskell values are lazy.
Please look at unboxed vectors for another example: https://hackage.haskell.org/package/vector-0.12.1.2/docs/Dat...
You can create an unboxed mutable vector and read and update its elements. Vectors are stored as Structure Of Arrays (Vector (a,b) is transformed into (Vector a, Vector b)) and are very efficient in transformations.
Next to them you can find Storable vectors which allow you to store and update any values that have Storable class instance defined. They are for cases when you need Array Of Structures.
Continuing the Vector example, the ST monad allows you to create a computation that uses mutation internally and looks pure from the outside - do new/read/write and then return a frozen array. Apply the ST monad runner and you get a pure frozen array. The IO monad allows you to pass that array between computations of different kinds.
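A minimal sketch of that pattern, assuming the vector package (the function name is mine): mutate an unboxed vector in place inside ST, freeze it, and hand back a pure result.

    import Control.Monad.ST (runST)
    import qualified Data.Vector.Unboxed as V
    import qualified Data.Vector.Unboxed.Mutable as MV

    -- In-place writes into an unboxed mutable vector, then freeze;
    -- callers only ever see the immutable Vector.
    squares :: Int -> V.Vector Int
    squares n = runST $ do
      mv <- MV.new n
      mapM_ (\i -> MV.write mv i (i * i)) [0 .. n - 1]
      V.freeze mv

    main :: IO ()
    main = print (squares 10)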
Sorry to hear about your experience. I hang out on Haskell Reddit a lot. If you link me to the particular Reddit threads in question that will help me understand the situation better and I'll try to prevent anything similar happening again.
To address your more specific question, yes, Haskell can mutate data in place. See, for example, https://hackage.haskell.org/package/vector-0.12.0.1/docs/Dat.... On the other hand, unless you've got another compelling reason to use Haskell one wouldn't tend to say that Haskell stands out as a language to implement that kind of thing in.
[EDIT: Having looked at your history of commenting about Haskell on Reddit I'm not surprised that people took offence.]
My seemingly offensive comments came after I received seemingly offensive comments from others. It's very rare that I start discussions by offensive comments.
I start discussions by disagreement though. If that is deemed offensive, what can I say.
For posterity can you link to the questions that went unanswered on r/Haskell? I'd like to do an analysis.
> I have great experience in imperative programming languages, I usually write programs that are correct as soon as they are compiled, which is what the haskell advantage is supposed to be.
Same here. It's simply not worth switching. And also Haskell is probably not going to provide much more productivity in lots of areas.
I'd argue if you can manage to write 10,000 line imperative programs that are correct as soon as they are compiled:
1) that's amazing, some of us aren't smart enough to do it without functional programming
2) I bet you could write 2-3x as much functional code and maintain the correctness. Reason being understanding doesn't require keeping track of mutable states in your head.
No one writes 10000 line programs, either imperative, or functional, without testing and running smaller parts of it. At least no one that I know of.
I have dealt with ML while doing my MSc in software engineering, and wrote my dissertation on it.
I didn't see any improvement regarding bugs and state.
What I have discovered in functional languages is that while in fp you are not required to keep track of mutable states, you are required to keep track of values passed in parameters.
In my humble opinion, those two things are equal in difficulty and consequences.
I had almost the same number of bugs in my ML application that I would have in the imperative program, but they manifested in different ways.
I don't think FP requires the programmer to keep less things in their head, provided that the imperative program follows some good principles, that is.
> In my humble opinion, those two things are equal in difficulty and consequences.
I'm not so sure, I'll give this idea some thought though.
> I had almost the same number of bugs in my ML application that I would have in the imperative program, but they manifested in different ways.
Can you talk about some of the different ways the same bug manifested?
I didn't say 'the same bug manifested in different ways', I said 'same number of bugs but manifested in different ways'.
I.e. the defect ratio was similar, bugs in my FP program were of different type than in the imperative ones.
What is killing Haskell, according to this discussion:
* marketing failure
* lazy by default
* GHC extensions
* its weird syntax
* its obsession with monad-as-burrito tutorials
* pretty hostile haskell-centric spaces
* not backed by a large company
* bad university classes
* tooling
* average developers are not “smart” enough to learn it
* no payback for learning Haskell skills
We actually had a functional programming class in university, where the focus was on Haskell. The main impression that seemed to permeate the class was "Haskell is hard". There was a lot of grumbling. Nowhere near as much as about Prolog in Logical Programming though. They were both thrown into the same pile of "must get through this class" though.
I'm not really sure whether it was because of the way it was taught or whether Haskell is inherently more difficult. I do think that a class like that might've hurt the language's adoption among the students though. (Or maybe my impression is simply biased by my own experience.)
Rust will be fine, as long as people continue using it to produce high-quality tools like ripgrep (rg) and xsv
I think for a language to gain traction today in the mainstream, it needs to be backed by a large company. If the language is web-technologies oriented, it's all the more likely. The reason is that lots of developers nowadays are working on web-related stuff. And when it's backed by a big company like Google or Microsoft, kids will at least learn it to try and land a job at one of those companies. Also, these big companies will support many useful APIs in that language, which will attract mainstream developers as they can start building useful stuff quickly.
I wanted to learn a functional programming language and did a lot of research about which language to start with. Initially I looked at Haskell but ended up going with F# because the tooling was a lot better and the documentation from Microsoft was really good. Also, I read quite a few posts by those who worked fulltime, paid F# jobs. I loved the functional style. I think it's a shame that it's not mainstream. But then when I tried to do something non trivial, like network programming, I had to use OOP C# style types in F#. That was just fugly and made the paradigm look weak because OOP constructs could do it more easily than functional constructs.
Other than the elegance of the functional programming paradigm, it didn't seem to offer much in terms of jobs because I can't really see large numbers of people suddenly picking up functional programming (which isn't easy) and building massive applications specially with the whole cloud based SaaS stuff that's happening right now.
I'm learning F# now for similar reasons. I'm guessing that the industry is moving in a more functional direction. I want to learn a functional language to keep up. I also like the claims that you can lower runtime bugs with good functional style.
I think comparing Rust to Haskell for their "similar" success is kind of dumb. Everybody wanted to program in Haskell, yet nobody built a proper package management and/or compiler version management solution for Haskell until very late.
If Haskell was indeed one of those "everybody wants to do it" languages, then making it easy to approach from the infrastructure perspective would be the first thing to do.
My theory, based on a lot of experience, is that average developers are not “smart” enough to learn it, beyond simple tutorials. By smart I don’t mean IQ, but rather drive, intellectual curiosity and desire for a challenge, that I think is required for learning something so different from mainstream languages.
What is lacking is payback.
Having developed Haskell skills, what can one then do with them? Who in the world wants to have any Haskell code written badly enough to offer to pay to have it done?
If Rust dies, it will die just exactly the way almost every language did: its adoption rate was two or more orders of magnitude too slow, and the world moved on.
This is not inevitable: there are many other ways languages have died. Ada had a formal spec, billions of dollars in development contracts backing it, many industrial-grade compilers, thousands employed coding in it. It died because, ultimately, it wasn't enough better than C.
PL/I died. Algol died. All the various Pascals died. Ruby is in sharp decline. Death is the natural course for languages. Overcoming death requires a near-miracle.
COBOL, Fortran, C, C++, Python, Java, and Javascript managed it. It is too early to tell about Go or Rust, but it is not looking good for Rust, just based on the numbers. Rust's originators hoped to displace C, but very few move from C. Most who might have moved on from C did before there was a Rust, and the rest like it for its flaws, the way rock climbers like cliffs.
Rust will take few from C++ because Rust is less expressive, by design. The gap widens with each Standard release.
The languages Rust can practically steal users from do not suffer from the memory-safety problems Rust is promoted as solving. They have other problems it could help with. Go and Java are pathologically weak languages, and Python is pathologically slow and un-parallel.
There are things Rust adherents could do to increase its adoption rate, but they seem, by all indications, supremely and aggressively uninterested in even trying any of them. HN buzz, which Rust fans have run up to stratospheric heights, is very far from enough to sustain a language. So, Rust's prospects are dimming even as its apparent popularity peaks.
I see Rust as being a very influential language in bringing affine types into mainstream languages, which eventually made other language communities look at how to improve their approaches to resource management.
Examples being Swift, Ada/Spark, C++ lifetime analyzers, Chapel, ParaSail.
Then there are the Haskell, OCaml, Pony, Verona, Nim, and D efforts on how to combine the best of both worlds: GC (in whatever form) plus affine types when needed.
But taking the world by storm and actually replacing C++ across Fortune 500s, especially given the contribution of many of them to ISO C++ and industry certifications - I see that as a multiple-decade effort, and even then, as we can see from C++'s failure to take over C in certain domains, it is going to be a very steep uphill battle.
So in the end, we might just happen to get the usual mainstream languages with improved capabilities for managing resources, just like Haskell happened to bring LINQ to the .NET world, followed by influencing Java streams and C++ ranges.
> Having developed Haskell skills, what can one then do with them? Who in the world wants to have any Haskell code written badly enough to offer to pay to have it done?
Are there any languages you can immediately answer these questions for? Are there any popular languages you can't answer these questions for?
I have some ideas, but I want to nail things down a bit more before attempting an answer if you don't mind.
About once a week I get a note from another recruiter offering, for C++ coding, north of a half $mil per annum from a certain hedge fund with a reputation for high turnover.
And of course we all get our monthly calls from Google recruitment contractors to come code Google's special subset of C++. Google doesn't offer a half $mil to everyone, but remarkably many do get it. I expect a few even get it for coding Haskell, although one may doubt that is what the req they were hired on called for.
I feel like Rust is doing pretty well for itself, in light of the VAST inertia that C and C++ have. A bunch of big names in tech are adopting Rust: (obviously) Mozilla, Microsoft, Cloudflare, Dropbox, IIRC.
That's not bad for a language that has "barely" hit 1.0 five or so years ago (next to C's 40-something years).
Rust is doing extremely well for a new language.
If it can increase its adoption rate by two orders of magnitude, it just might survive and grow. But it will need changes to get that.
The fact that all big OS vendors are doing something with it it is quite positive, however they also have their own managed languages, alternatives to Rust like safety projects and have a seat at ISO C++ table.
So I guess it boils down to how much space are they planning to give to Rust on their own SDKs, IDE and OS infrastructure.
Ultimately, it boils down to absolute numbers.
You need a large enough population of skilled programmers that you can reasonably count on finding enough that are ready to move on and good enough for your project, and enough ongoing projects to keep them all busy. The number of companies that have little projects doesn't figure, nor the size of the companies. A list of big companies using it is actually the least informative, because all it takes is one person using it, out of the many thousands there, to say "the company" is using it.
It would be surprising if there were two hundred paid Rust jobs already, and astonishing if there were a thousand. It needs to get to a hundred times that to have a chance to survive, and in only a few years. Ada got there and died anyway.
Decent programmers really don't have that much issue learning new stuff. That's not to say I haven't met a good number of people who have only ever programmed e.g., Java, and poorly at that. But I wouldn't even hire them to work on a Java codebase.
I'd say that the amount of Rust work is the metric I agree with. I think Graham's "Python paradox" is true and correct. If you post a job position that includes working on Rust, you'll have people falling over themselves to apply. What you'd really need, IMO, is managers to get on board. It's a chicken and egg problem. If it isn't Java, PHP, Python, C++, then it's "risky".
We do use Rust for a few projects where I work. Entirely because I had enough social capital and reputation with my boss to push for it.
I don't do Rust full time. But what counts as a "paid Rust job"? I get paid. And I do Rust for my company. Does that count?
And I don't understand why you're asserting that it has to shoot up by orders of magnitude to survive. I feel like maybe you're being biased by something, but I'm not sure what. Look at Python. It existed in the early 90's IIRC, but it totally exploded around 2005-ish (again, IIRC). Haskell and OCaml exist. They seem to actually be picking up a bit of steam if you go by social media such as HN. Ada isn't dead.
Ada isn't dead yet; in fact NVidia chose Ada over Rust for their automated vehicles project.
It is also one of the few languages that has managed to keep a room at FOSDEM since I can remember.
It might be dead for FOSS hype projects, but in real-life production code that actually affects people's lives, it still has more deployments per year than most Rust projects.
> drive, intellectual curiosity and desire for a challenge
literally none of these is related to intelligence.
And even if they did, to assert that not wanting to learn Haskell has anything to do with "drive, intellectual curiosity and desire for a challenge" is ridiculous. There are a million reasons a person that exhibits all three of those qualities could opt not to.
> literally none of these is related to intelligence.
GP didn't mention the word "intelligence"
smart adjective, smart·er, smart·est.
- having or showing quick intelligence or ready mental capability
They said "smart", I said "intelligence".
We could arguably account "drive, intellectual curiosity and desire for a challenge" as particular mental capabilities.
Every time Haskell is discussed, lots of valid critiques show up as to why it's not more successful, and yet every single time, without fail, the conversation devolves heavily into "People aren't smart enough." The person I was responding to literally said that Haskell wouldn't catch on because most developers aren't smart enough, and then defined "smart" as those three things.
I am arguing a series of things. First, that being "smart" does not always mean those three things. Second, that being "smart" does not mean that you'd care to learn Haskell. Third, to suggest a tool isn't catching on because the members of the programming community aren't "smart" enough is so masturbatory it's actually insane. God forbid Haskell isn't catching on because of all the valid critiques that show up in every one of these threads and then gets dismissed under this same "Haskell smart" rhetoric.
Haskell will not catch on because the community thinks it is too smart to have to actually accommodate the programming community. Simple as that.
> The person I was responding to literally said that Haskell wouldn't catch on because most developers aren't smart enough, and then defined "smart" as those three things.
The person used "smart" in scare quotes, and defined that usage of smart as being those things. That seems like an explicit mark that they are not talking about all of the usual definition of the word.
> Second, that being "smart" does not mean that you'd care to learn Haskell.
That's irrelevant. If X is necessary for Y, lack of X is a good explanation for lack of Y even when X is not sufficient for Y.
> God forbid Haskell isn't catching on because of all the valid critiques that show up in every one of these threads and then gets dismissed under this same "Haskell smart" rhetoric.
If you're looking in from the outside, you may not be in a good place to distinguish between "all of these valid critiques" and "invalid complaints that arise because of misunderstanding, dated info, or outright FUD". There are absolutely valid critiques of Haskell. Most of my problems with it are things that are even more present in languages that have caught on, though, so they cannot stand alone as an explanation.
All of that said, "people aren't smart enough" isn't a claim I'd make, even with the reduced scope. I just don't think your argument is well formed.
> If you're looking in from the outside,
It's weird, then, that the person that wrote the article that you and I are both commenting on has written a well regarded book using Haskell and their assertion is exactly the same as mine, no? It's also weird that the subject of this article is meant to be Rust, and yet here we are debating the idea that "Devs just aren't smart enough to understand haskell".
But, as always, I wouldn't expect a conversation about Haskell to really go anywhere. You can't comment unless you've drank the kool aid, and if you've drank the kool aid you're required to spout the same rhetoric.
> their assertion is exactly the same as mine
... their assertion was that the problem was the exactly the same unspecified pile of "valid critiques that show up in every one of these threads"?
> I wouldn't expect a conversation about Haskell to really go anywhere.
Many conversations I've had about Haskell - pro and con - have gone really interesting places. But as your experience has differed, and you appear to think that's predictive, I'll take this opportunity to remove that common variable by wrapping up discussion here.
Snark aside, I wish you well.
From me: Haskell will not catch on because the community thinks it is too smart to have to actually accommodate the programming community. Simple as that.
From the article: There was an arrogance in the Haskell community. Not the evil kind, but the kind that told them that they were somehow better. That the tools they were using were somehow better. That the things they were doing were somehow better. There was the arrogance of those people who believed that victory was inevitable. This was not the slapping your face “you, stupid fool golang programmers” kind of arrogance, although there was plenty of that, too. Instead, it was a kind of arrogance of power. Because the Haskell people were writing a pretty powerful code, they did have a tiger by the tail. It was a powerful compiler, it was a powerful language, and they knew they could work miracles. And yet, that wasn’t enough.

Something insidious, something subtle happened. It caused their separation, they set aside the rest of the industry. The people outside the community who were writing everyday programs began to look at the corner of the eye where the Haskell people were doing: “Emm… Haskell people don’t seem to like us very much, I don’t think we’re gonna like them”.

Some of you might remember the Reddit discussions in the mid 2000s. A bunch of people were there. And they were talking about cool math things there. In those talks, they often were snickering about other languages like Go. It wasn’t anything significant, it wasn’t anything evil, they were just snickering: “He-he-he, mainstream people, ha!”. But I was a mainstream golang guy at that time! I didn’t like that. And I’ve been dealing with language wars in the next couple of years. And I said to them at that time “Do we really want to have language wars on Reddit?”. And the interesting thing about it was not about what they were snickering about, because they probably had a right to do that. What was interesting about is my reaction. My reaction was defensive. My reaction was “Well, you guys, go ahead and do your Haskell thing, but I’m the one who gets real work done.” That’s the interesting division that got set up at the time. And it was fairly pervasive.

There was an attitude among the Haskell community, and again, it’s not an evil attitude, not one that was born out of ill will. But there was an attitude that said “You know, our tools are so good, our language is so good, we don’t need to follow the rules. We can do something else. We don’t have to talk to other people. We don’t have to do the other kinds of programs.” Haskell people didn’t want to do the regular kinds of programs. They didn’t want to have to deal with the corporate database. They didn’t want to have to deal with the horrible schema that had evolved twenty years. It was just distasteful. And they found ways instead to do things like using category theory, and dependent types. They’ve built a wall around themselves, and they’ve chosen to live in a technological bubble. Isolated from the evils of the outside world.
> I'll take this opportunity to remove that common variable by wrapping up discussion here.
This conversation has continued to be the status quo.
> Snark aside, I wish you well.
Best of luck!
I don't think the author of the article has exactly the same experience as you. I think the article comes out of frustration at having had his favourite effect system, "hierarchical free monads", viewed unfavourably in recent community discussion.
> Haskell will not catch on because the community thinks it is too smart to have to actually accommodate the programming community.
Like, you don't literally mean there are Haskellers saying "we're too smart to write beginner-friendly documentation haha", right?
Jokes aside, can you give an example of this?
To a larger sense (the Haskell community being kind of exhausting and toxic), there are tons of examples. As cliche as it sounds, Reddit is an integral part of programming communities in 2020 and /r/haskell is a pretty toxic wasteland. The responses to this very article on /r/haskell are a great place to find what I'm talking about at a general level.
Specifically, though, this comment is a pretty good example of what this article (and I, now), am talking about:
> Certain problems, like working with databases in the principled Haskell way, are still open questions (e.g. see effect systems). But to call a mere difference in approach "arrogant" is extremely arrogant in itself
Which points to a problem very specific to haskell, brought up in the article, which is "How do I actually get things done?" Which, according to that comment (supposedly in support of Haskell) even points out that something as obvious and boring as "using a database" isn't clearly defined in Haskell. Most programmers want to use a programming language to solve a problem. The haskeller's argument, I guess, is that Haskell tries to do that while also applying very strict constraints on how problems are solved. Great, right? Except that those constraints are so strict that even problems that aren't significant or meaningful are difficult/not well defined (like using a database).
So if the answer to "How do I get things done?" isn't "Like this" but instead "Haskell doesn't work that way", most programmers will consider this a nonstarter.
> Which, according to that comment (supposedly in support of Haskell) even points out that something as obvious and boring as "using a database" isn't clearly defined in Haskell.
They are using a very high standard of clearly defined. Haskell has production ready ways of accessing databases and working with them today.
> problems that aren't significant or meaningful are difficult/not well defined (like using a database).
If you feel database work isn't meaningful, Haskell provides "write plain parameterized SQL, get back results" type libraries too.
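For what it's worth, that style looks roughly like this with the postgresql-simple package (the connection string, table, and column names here are made up):

    {-# LANGUAGE OverloadedStrings #-}

    import Database.PostgreSQL.Simple

    main :: IO ()
    main = do
      conn <- connectPostgreSQL "host=localhost dbname=example"
      -- Plain parameterized SQL in, a list of ordinary tuples out.
      rows <- query conn
                    "SELECT name, age FROM users WHERE age > ?"
                    (Only (30 :: Int))
      mapM_ (\(name, age) -> putStrLn (name ++ " is " ++ show (age :: Int)))
            (rows :: [(String, Int)])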
I disagree that database access and the realm of ORMs are simple, which is what they are talking about. You can tell by how they say "working with databases".
Other languages disagree about the best ways to work with databases. I'm sure you've seen the endless raw SQL vs ORM debates.
> ... /r/haskell is a pretty toxic wasteland. The responses to this very article on /r/haskell are a great place to find what I'm talking about at a general level.
If anyone wants to read through that thread to see exactly how toxic /r/haskell is they can find it here:
https://old.reddit.com/r/haskell/comments/io3c11/essay_found...
> The author in question may be a bit of a cracked pot.
> I genuinely tried to read this and take it seriously but this is just the most cringe-worthy thing I've read in ages. I'm sorry.
> As a composition of prose, this little essay is just stylistically terrible and reallllly hard to read.
> A really dumb essay.
Just from the one post.
Do you think any frustration is warranted for all of that articles false claims?
I disagree with the cracked pot comment, that goes too far.
It is pretty cringe-worthy, is it toxic to point that out?
Learning Haskell is like learning Latin, sure you'll gain deeper insights into other languages and language itself, but you aren't going to be able to do much by just speaking Latin.
That's just untrue.
Saying it is very inconsiderate of the efforts of everyone writing real world Haskell software.
Facebook uses Haskell, Github uses Haskell, etc... the metaphor doesn't hold and means this is just FUD.
On top of that, you're actually talking to a working professional Haskell programmer whose company depends on it.
I agree, it's hyperbole stemming from the fact that there are 5 times as many job openings for roles that touch Rust as there are for Haskell.
What do you use Haskell for?
I use haskell for all of my general purpose programming. Webdev/rest apis at work mostly.
Sometimes background services.
I'm not who you're responding to, but you've responded to my posts on this topic. I think, what they're trying to illustrate, is that Haskell is still an incredibly niche language. Sure, yes, there are jobs at a handful of big names, and then a handful of not big names and companies that found their success using it. But if you were trying to maximize your employability or open up new doorways in your career, you definitely wouldn't pick Haskell as the language to do that with.
Separating "I like and use Haskell" from "Learning Haskell will amplify your career and employability" is, I think, what they're going for, and I would generally agree with. "The exception that proves the rule" is a thing, after all.
Just want to let you know that I didn't downvote you, and was genuinely interested in what you use Haskell for, so thanks for the reply. Have you had trouble hiring developers that know Haskell?
> drive, intellectual curiosity and desire for a challenge
Reading a book on abstract algebra is going to give you more concepts of Type Theory than learning Haskell.
So you may say that learning Haskell is intellectually useful, yet there are more challenging purely theoretic concepts which are more useful than Haskell.
I would go as far as saying that learning Haskell is just a "<smarts> poor man's excuse for not challenging themselves enough in the areas that actually matter".
> Reading a book on abstract algebra is going to give you more concepts of Type Theory than learning Haskell.
What aspects of a book in abstract algebra introduce you to concepts of type theory, would you say?
The most basic ones: that objects have certain properties and those properties define how they can interact with other objects. How those properties get preserved under interactions and how one should think about it.
This is what no one wants to admit about functional programming
Here's a starting difference between the two:
Rust code uses "unsafe" and doesn't apologize about not being pure. Rust tries to minimize "unsafe" and encapsulate it, but acknowledges that it must exist.
What's the Rust equivalent of "A monad is just a monoid in the category of endofunctors, what's the problem?" I can't really think of one.
Maybe there's something in lifetimes (which can be pretty messy). Maybe: "Quit using doubly linked lists or the borrow checker will beat you senseless."
In addition, there are lots of alternatives to Haskell that are almost as good. That isn't true of Rust. If you need systems programming, you have C, C++, maybe Ada. Rust is trying to drive a fairly difficult wedge into that area, but if it succeeds, the alternatives are scarce.
I think you are onto something. I'm a small brained primate.
When I read "A monad is just a monoid in the category of endofunctors, what's the problem?" I feel that someone thinks they are being clever by being confusing and unhelpful.
But when I read "Quit using doubly linked lists or the borrow checker will beat you senseless." I feel like someone's trying to be helpful.
It's confusing because that's the mathematical definition of a monad, which you need to know category theory to fully understand. In Haskell, a monad is just a way to run some effects sequentially; it's actually really basic stuff that every programmer already has an intuition for.
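To make that concrete, here's a tiny sketch with the Maybe monad (parseAge is a made-up example, not a library function): each step may fail, and the do-block simply stops at the first failure, which is the same "bail out early" pattern every programmer already writes imperatively.

    import Text.Read (readMaybe)

    -- Each step can produce Nothing; the do-block sequences the steps and
    -- short-circuits at the first failure, like a chain of early returns.
    parseAge :: String -> Maybe Int
    parseAge s = do
      n <- readMaybe s            -- Nothing if the string isn't a number
      if n >= 0 && n < 150
        then Just n               -- pass the value along
        else Nothing              -- out of range: stop here

    main :: IO ()
    main = do
      print (parseAge "42")       -- Just 42
      print (parseAge "potato")   -- Nothing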
What's the evidence Haskell has died? It seems to me that a fad has passed. The language is doing well. But the pretentious hipsters have moved on to their next obsession.
Rust? Well, talk about hip and groovy blah blah. About as great as C++. Full of complicated features, all pretty much focused on nitpicking, not expressivity. Could become hipster stuff for small projects. Enterprise? It's missing the point. There's no Machine Learning, no UI tools, no communications abstraction, no nothing. Just storage allocation without garbage collection.
Haskell will remain solid, but used by its aficionados. Rust will die as a niche language, and C/C++, sadly, will continue to dominate the low level coding, for better or for worse. Python and JavaScript will fill the UI centric space of Enterprise, with Python largely owning data centric stuff, and JS owning phone/tablet UX.
> Rust? Well, talk about hip and groovy blah blah. About as great as C++.
I hope Rust winds up as "great" as C++ in terms of popularity.
I don't need "web/crud enterprise" (there are a zillion languages for that) with its infinite tower of abstractions, but I do need "embedded enterprise". And I'm really desperate for a language better than C for that.
> And I'm really desperate for a language better than C for that.
I got introduced to it with Turbo C++ 1.0 for MS-DOS in 1993 and have never cared for C since, other than when not given any other alternative to choose from.
"Full of complicated features, all pretty much focused on nitpicking, not expressivity."
What are a few of those features?
"A monad is just a monoid in the category of endofunctors, what's the problem?" is a joke about how the Haskell community often forgets not everyone has a PhD in category theory. It's not a real thing Haskell programmers tell people, in my experience.
I learned Haskell pre-Categorization, and modern Haskell tutorials make me sad. Haskell-without-mumbo-jumbo is a great expository language for learning about all sorts of sophisticated programming problem solving techniques, without having to deal with all the noise of Algol-derived languages.
I was taught “monads” as: “OK, you wanted mutability? Here is how we program imperatively in Haskell”. The whole thing just hung together.
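As a rough sketch of what that teaching looks like (assuming the mtl package for Control.Monad.State; tick is just an illustrative name), "mutable" code in Haskell reads almost like the imperative version:

    import Control.Monad.State (State, get, put, execState)

    -- A "mutable" counter: get reads the current value, put overwrites it.
    -- No actual mutation happens; the State monad threads the value through.
    tick :: State Int ()
    tick = do
      n <- get
      put (n + 1)

    main :: IO ()
    main = print (execState (tick >> tick >> tick) 0)  -- prints: 3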
> In addition, there are lots of alternatives to Haskell that are almost as good. That isn't true of Rust. If you need systems programming, you have C, C++, maybe Ada. Rust is trying to drive a fairly difficult wedge into that area, but if it succeeds, the alternatives are scarce.
In terms of purely technical ability to catch a certain set of bugs at compile time, Rust as a language is one or two generations behind ATS, which supports refinement types and theorem proving whilst staying on the industrial side of the "(industrial) C <-> (academic) Coq" spectrum.
Haskell has unsafe methods in the standard library.
hackage.haskell.org/package/base-4.14.0.0/docs/System-IO-Unsafe.html
It also has a Safe Haskell compilation mode that disallows these functions: https://downloads.haskell.org/~ghc/latest/docs/html/users_gu...
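A minimal sketch of both halves, assuming GHC (notReallyPure is a made-up name): unsafePerformIO is the escape hatch, and turning on the Safe pragma is what makes GHC reject the unsafe import.

    -- {-# LANGUAGE Safe #-}   -- uncomment this line and GHC refuses to compile
    --                            the module, because System.IO.Unsafe is not a
    --                            Safe-Haskell-approved import
    import System.IO.Unsafe (unsafePerformIO)

    -- Looks like a plain Int to the type system, but sneaks in a side effect.
    notReallyPure :: Int
    notReallyPure = unsafePerformIO (putStrLn "side effect!" >> return 42)

    main :: IO ()
    main = print notReallyPure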
> In Go, I can do amazing things with monads.
Can you tho?
I've assumed that Rust is going to be killed by a "-enforce-no-mutate-single-owner" or similar flag to gcc (or some linter) that restricts C++ code to some subset/style that nails the majority of what Rust brings to the table. IMHO, like many other language paradigms, once you understand the core tenets of Rust you could write "Rust code" in many other languages. The difference being that those other languages don't _yet_ have flags/etc. to enforce a Rust-like style.
I wouldn't bet on it. Firstly, Rust has value as a "C++ without the cruft" language even without safety as a USP, and secondly, Rust had to do a huge amount of design work to get the language to work in a way such that the kind of static analysis it does is reasonably feasible. C++ would need to adopt more paradigms and then create a subset of itself for this to work (there are such designs in the works, like 'C++ Core', but they have already had to give up on achieving the same level of safety as Rust gives).
Actually they do, it just happens to be WIP on clang and MSVC++.
CppCon 2019: “Lifetime analysis for everyone”
Though Haskell is nice in many ways, I would choose OCaml for a commercial project over Haskell any day. OCaml is ruthlessly practical and the culture around it is one of eminent pragmatism.
OCaml is designed to get stuff done, and the tooling and ecosystem strongly reflect that.
If something was going to kill Haskell, it would be OCaml, not JavaScript or some other language.
I would feel very comfortable running a business on OCaml; Haskell, not so much.
> Though Haskell is nice in many ways, I would choose OCaml for a commercial project over Haskell any day.
As a counter-point I'd say the opposite: I choose Haskell :)
> OCaml is ruthlessly practical and the culture around it is one of eminent pragmatism.
Is this a plus? "Practical" and "pragmatic" seem to be universally considered positives, but when does "pragmatism" just start to mean "unprincipled"? There's much talk about when being too principled gets in the way of getting things done, but I'm wondering what your position on the opposite is here.
What are your thoughts on 1) real-world thinking, 2) first-principles thinking, and 3) the balance of each that is ideal in software engineering?
> OCaml is designed to get stuff done, and the tooling and ecosystem strongly reflect that.
Haskell was designed to get stuff done too, just maybe not quite the same way. If you ever had an inkling of "maybe people in industry are outright dismissing research too fast" you might like it.
>> If something was going to kill Haskell, it would be OCaml, not JavaScript or some other language.
> I would feel very comfortable running a business on OCaml, Haskell, not so much.
I'd feel comfortable with either. I agree it'd have to be something at least as strongly typed as OCaml to kill Haskell as well.
I think I came off as too pro-OCaml. I personally think OCaml has a ton of problems and, like you suggest, is unprincipled. Features are added with very little thought, and OCaml code can get very ugly trying to replicate the kind of power Haskell has built in.
But, all in all, I like OCaml more than Haskell. Laziness by default is a mistake, and GHC's ever-growing list of language extensions is kind of stupid. Moreover, Haskell's monad-based effect system is a lot less useful than people think it is. In my mind, the main benefit of tracking effects is to enable aggressive compiler optimization, not increased safety.
Haskell's effect system imposes a heavy burden on the developer, with little benefit.
OCaml imposes almost no burden in doing whatever the hell you want, but the language is messy and full of decades of hacks and eyesores.
I'm one of the rare souls that actually likes OCaml's OO system and I actually think that the language would be a lot better if the standard library and other code leveraged it. As it stands, functors aren't expressive enough for many use cases and function as shitty classes.
Here's what I want from a functional language:
Something that looks like OCaml, but gets rid of the functors, cleans up the syntax, supports HKTs, supports function polymorphism, supports easy-to-write macros (ppx is so fucking shitty), and has Nim-like OO call syntax: every function that takes a type as its first argument can be called as a method using the dot syntax.
Also, have the language compile to a native executable, and make the executable and runtime really, really small. OCaml gets a lot about performance right, but the 20 MB executables (no tree shaking) are unacceptable.
> OCaml is ruthlessly practical and the culture around it is one of eminent pragmatism.
Also, in case OCaml turns out to be a better fit for any future projects, I have a question:
What is the biggest advantage that comes, in part or primarily, from its ruthless pragmatism?
I would say a rock-solid runtime with a focus on consistent performance, a nice C ABI, and low memory usage. Haskell suffers in non-trivial use cases and is very hard to profile. Laziness has never worked correctly and in my view is a big mistake. Laziness is a dead end, despite all the elegant code it allows you to write.
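The standard illustration of that laziness complaint, if anyone wants one (exact behaviour depends on GHC version and optimisation flags, so treat it as a sketch): the lazy foldl builds millions of unevaluated thunks before anything is forced, while the strict foldl' runs in constant space.

    import Data.List (foldl')

    -- foldl delays every (+) as a thunk, so the accumulator grows into a
    -- huge chain before it is ever evaluated; compiled without optimisation
    -- this has historically meant space leaks or stack overflows.
    leaky :: Integer
    leaky = foldl (+) 0 [1 .. 10 ^ 7]

    -- foldl' forces the accumulator at each step and stays in constant space.
    strict :: Integer
    strict = foldl' (+) 0 [1 .. 10 ^ 7]

    main :: IO ()
    main = print strict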
Also, the toolchain is miles better than Haskell's. I'd even say that the build/package management and editor tooling is some of the best of any language.
What does this have to do with Rust? I mean, Rust shows absolutely no sign of the problems that Haskell had. It's very practical, it learns from everywhere it can find a good idea, the tooling is great, the marketing is great, and the community is nice and inclusive.
I don't feel like I can take it seriously when it talks about Haskell devs on Reddit snickering about Go "in the mid 2000s".
Of all the Haskell programmers I've met and interacted with, both IRL and online, not one came off as "arrogant" (there is a programming language whose hardcore fans tend to be a bit more, uh, eccentric, in my experience, but I'm not going to name it at this time).
The college professor who was the "biggest" Haskell proponent I've ever met had a reputation among the department best described in units of micro-Dijkstras, not undeserved, mind you.
does it rhyme with crisp?
No, it rhymes with crust.
perhaps