My favorite Rust function
blog.jabid.in

Reminds me of Coq's definition of `False`:
`Inductive False := .`
i.e., `False` has no constructors and hence is an empty type.
Anyway, this means that for any type `A`, you can construct a function of type `False -> A` because you just do this:
`fun (x : False) => match x with end.`
Since `False` has no constructors, a match statement on a value of type `False` has no cases to match on, and you're done. (Coq's type system requires a case for each constructor of the type of the thing being matched on.) This is why, if you assume something that is false, you can prove anything. :)
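Rust has the same trick: an empty enum is uninhabited, and a match on it needs no arms. A small sketch (the `Never` and `absurd` names here are just illustrative; Rust's built-in never type plays a similar role):

```rust
// An empty enum is uninhabited: no constructors, so no values can ever exist.
enum Never {}

// From an (impossible) value of `Never` we can "produce" any type at all;
// the match needs zero arms because `Never` has zero constructors.
fn absurd<A>(x: Never) -> A {
    match x {}
}

fn main() {
    // `absurd` can never actually be called, since no `Never` value exists;
    // it merely type-checks. Uninhabited types are also zero-sized:
    assert_eq!(std::mem::size_of::<Never>(), 0);
    let _f: fn(Never) -> i32 = absurd::<i32>;
}
```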
It's defined that way to mirror how falsehood works in propositional logic: from a false premise, anything follows.
I get the feeling this doesn't really get into the meat of what "drop" is. It seems you can't really explain why you "love" a function without discussing its purpose. Maybe I'm wrong, I'm only really an outsider looking in when it comes to rust, but it does fascinate me as far as its goals. I would go so far as to say that it will be important for systems programmers to know in the not too distant future (if it's not already).
Isn't it really only there in case someone needs to "hook into" the drop functionality before the variable is dropped? Please correct me if I'm wrong.
EDIT: Minor editing to clarify meaning.
The post isn't talking about the drop method of the Drop trait, which is used to hook into the drop functionality.
It's talking instead about std::mem::drop, a standard library function that drops a value before it would ordinarily go out of scope.
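Its entire body is in fact empty; moving the argument into the function is what does the work. A sketch reimplementing it (`my_drop` and `freed_after_drop` are names invented here), using an Rc/Weak pair to observe that the value really is freed early:

```rust
use std::rc::Rc;

// A re-implementation of std::mem::drop: taking `_x` by value moves it in,
// and it is dropped when the (empty) function returns.
fn my_drop<T>(_x: T) {}

fn freed_after_drop() -> bool {
    let strong = Rc::new(42);
    let weak = Rc::downgrade(&strong);
    my_drop(strong); // the only strong reference moves in and dies here
    weak.upgrade().is_none() // true: the allocation is already gone
}

fn main() {
    assert!(freed_after_drop());
}
```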
Yes; to do anything interesting, you need to implement the Drop trait, which is what makes the behavior here observable.
An example of the implementation is all that's missing from the blog post.
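A minimal sketch of what that might look like (the `Resource` type and its drop log are invented for illustration):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A toy resource that records its name in a shared log when dropped.
struct Resource {
    name: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Resource {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    let a = Resource { name: "a", log: Rc::clone(&log) };
    let b = Resource { name: "b", log: Rc::clone(&log) };
    std::mem::drop(a); // "a" is logged here, before the end of scope
    log.borrow_mut().push("checkpoint");
    std::mem::drop(b); // then "b"
    Rc::try_unwrap(log).ok().unwrap().into_inner()
}

fn main() {
    assert_eq!(drop_order(), vec!["a", "checkpoint", "b"]);
}
```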
I wouldn't say std::mem::drop acts like free at all, it's the equivalent of a destructor in C++. Mostly useful when you're dealing with manually allocated memory, FFI, implementing an RAII pattern, etc.
One cool thing about Drop (and some other cool stuff, like MaybeUninit) is that it makes doing things like allocating/freeing in place just like any other Rust code. There may be some unsafe involved, but the syntax is consistent. Whereas in C++ seeing placement new and manually called destructors can raise eyebrows.
I haven't used rust, so can you explain this to me:
If I do the rust equivalent of:
def add1(x):
    return x + 1

x = 1
y = add1(x)
z = add1(x)

then will x have been deallocated by the first call to add1, and will the second call to add1 fail? [You can ignore the fact that I'm using numbers and substitute an object if that makes more sense in the context of allocating / deallocating memory in rust.]
Interestingly, the type of `x` actually does matter here in Rust! For most types, yes, passing something by value into a function will cause the memory to be "moved", which means that reusing `x` will be a compiler error. That being said, you can also either pass a shared reference (i.e. `&x`), which will allow you to access the data in Rust (provided you don't move anything out from it or mutate it, which would cause a compiler error) or a mutable reference (i.e. `&mut x`), which will allow you to access or mutate the data in `x` but not take ownership of it (unless it's replaced with something else of the same type).
However, a few types, including integers, but also things like booleans and chars, implement a trait (which for the purposes of this discussion is like an interface, if you're not familiar with traits) called Copy that means that they should be implicitly copied rather than moved. This means that in the specific example you gave above, there would not be any error, since `x` would be copied implicitly. You can also implement Copy on your own types, but this is generally only supposed to be done on things that are relatively small due to the performance overhead of large copies. Instead, for larger types, you can implement Clone, which gives a `.clone` method that lets you explicitly copy the type while still having moves rather than copies be the default. Notably, the Copy trait can only be implemented on types that already implement Clone, so anything that is implicitly copied can also be explicitly copied as well.
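A short sketch of the difference (`add1` and `consume` are hypothetical functions):

```rust
fn add1(x: i32) -> i32 {
    x + 1
}

fn consume(s: String) -> usize {
    s.len() // `s` is dropped when this function returns
}

fn main() {
    // i32 implements Copy: `x` is implicitly copied on each call.
    let x = 1;
    let y = add1(x);
    let z = add1(x); // fine, `x` was copied, not moved
    assert_eq!((y, z), (2, 2));

    // String does not implement Copy: passing it by value moves it.
    let s = String::from("hi");
    assert_eq!(consume(s), 2);
    // consume(s); // error[E0382]: use of moved value: `s`
}
```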
How does this implicit Copy trait interact with the Drop function? Doesn't that mean that the implicit copy would be dropped rather than the actual value for those types?
The compiler prevents you from implementing both Copy and Drop (i.e. a destructor). So dropping any copy type is a no-op.
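A sketch (the `Point` type is hypothetical; uncommenting the Drop impl makes the program fail to compile, roughly with "the trait `Copy` may not be implemented for this type; the type has a destructor"):

```rust
#[derive(Clone, Copy)]
struct Point {
    x: i32,
    y: i32,
}

// Uncommenting this makes the Copy derive above a compile error,
// since a type may implement Copy or Drop, but never both:
// impl Drop for Point {
//     fn drop(&mut self) {}
// }

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p; // copied, not moved
    assert_eq!(p.x + q.y, 3); // both remain usable
}
```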
Lesson learned from C++'s "rule of five"? Where if you implement a destructor you must also carefully implement a copy constructor so that the two copies of the object don't accidentally refer to each other's members in any way. Something that is much harder than it sounds, leading to the now more recommended "rule of zero": just don't.
Rust doesn't have copy constructors per se (it really doesn't have ctors in the C++ sense). Copy is a "marker trait": its operation cannot be overridden and is always (semantically) a simple memcpy. The truth is that Copy is not a thing at runtime; it's only a compile-time restriction (when not present).
Clone would be what comes closest to copy constructors, and it has to be explicitly invoked.
Rust doesn't have copy constructors. By hard rule, any normal (non-Pin) Rust struct can be safely moved by doing a memcpy of its storage to a different, correctly aligned memory location. Structs that are Pin just can't be moved at all. Implementing Copy is just a marker that means that after doing the memcpy, the original can still be safely used.
This rule really helps both the compiler and the programmer to make moving and copying things effortless. It does prevent things like a struct holding internal pointers to its own state. This is not bad on x86, because you can replace internal pointers with internal offsets, and the instruction set contains a fast reg+reg addressing mode, but it can cost an extra instruction on many other CPU architectures. IMHO the rule is well worth it, although it is an example of a situation where Rust chooses to give away a bit of performance for sanity.
And that is precisely the reason for the restriction.
Yes.
Is there any kind of compile-time check available for this (e.g. BIG COMPILER WARNING when you pass-by-value something that lacks Copy)? Seems like a lot of unsettlingly Python-esque freedom ("read the docs and don't screw up") for a language like Rust.
No, passing a "moved" structure by value is perfectly safe, fine and normal. All it means is that you won't be able to reuse it afterwards unless the function you're calling returns it.
There is no "read the docs and don't screw up" because the program will not compile at all if you do it wrong, and the compiler will explain why.
It's very useful for data ownership e.g. move file or string, you can forget about its existence (you have to really). It's also extremely useful to encode "static" state machines, especially when they're attached to some sort of unique resource: on state transition, consume the old state (one type) and return the new state (a different type). Not all state machines can be encoded thus and still useful, but when they do it's really nice.
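A sketch of that typestate pattern (the `Closed`/`Open` types are invented for illustration):

```rust
// Each state is its own type; transitions consume the old state by value,
// so a stale state can never be reused.
struct Closed {
    transitions: u32,
}

struct Open {
    transitions: u32,
}

impl Closed {
    fn open(self) -> Open {
        Open { transitions: self.transitions + 1 }
    }
}

impl Open {
    fn close(self) -> Closed {
        Closed { transitions: self.transitions + 1 }
    }
}

fn run() -> u32 {
    let door = Closed { transitions: 0 };
    let door = door.open(); // the Closed value is consumed here
    let door = door.close(); // and the Open value here
    // using one of the shadowed, moved-out bindings would be error[E0382]
    door.transitions
}

fn main() {
    assert_eq!(run(), 2);
}
```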
It is perfectly fine to take by value something that is not Copy, to transfer its ownership.

It is a compiler error to use the value after it was moved, and the compiler error is quite explicit about how to solve it.

For example, you can explicitly clone:

let y = add(x.clone());
let z = add(x);

What would be cases where the add() function needs ownership of the x?
What's the chance in real life that a function like that cannot simply use a reference?

In my learning of Rust, my life became significantly better and easier when I started to use references to borrow as much as possible.
That was just an example. But maybe 'x' is a BigInt type that uses a memory allocation to store its contents. The ownership would be transferred to the return value.
But again, what would be the drawback(s) of using a pointer instead of transferring the ownership?

Not rhetorical, I'm genuinely curious, as I used to struggle with the copy vs move decisions, and a bunch of issues with ownership which all went away when I started using pointers/borrowing everywhere.
The downside to using a mutable reference instead of passing around ownership is just that it would be awkward.
You'd have to write two extra lines of code to set up y and z separately from calling add(), and you'd have to make them mutable which is an extra mental burden.
And an immutable reference is bad here because the function couldn't reuse the input's storage: it would force extra objects to be allocated, wasting time and cache space.
Rust really doesn't do ("read the docs and don't screw up").
It's a very much a "bondage and discipline"-style language in the sense that unless you explicitly use "unsafe", you have to prove to the compiler that everything you do is safe. Moving large things is not unsafe because after you move something to a different scope, the original doesn't exist anymore and can't be used, so if you were to accidentally move something and tried to use it again, the compiler would helpfully tell you that the thing you're trying to refer to isn't there anymore.
There is no warning on every time you pass something without copy because passing things to different scopes is extremely common, normal and desired.
If add1 takes ownership of the argument, yes (and x is not implicitly copyable).
Compare with C++, in particular types with deleted copy operators (e.g. unique_ptr<T>). In order to call a function that takes a unique_ptr by value as argument, you must explicitly move the object into the function:

void foo(unique_ptr<T> x) { ... }
unique_ptr<T> x = ...;
foo(move(x));
foo(move(x));

Linters (e.g. clang-tidy) can be configured to complain about this, but it's completely valid C++ (because move leaves the object in an unspecified but valid state). In Rust, the argument will be automatically moved in the first call, and the second call will generate a compile-time error.

> because move leaves the object in an unspecified but valid state
Also and importantly because move() only indicates that the value may be moved from (it's a fancy cast); that doesn't mean it will be moved from. The called function needs to take an rvalue reference (T&&) for a move to possibly happen. That is, if you had a

void foo(unique_ptr&& x)

it may or may not move. Likewise you can call move() on things which are not move constructible (or pass the result to functions which don't care) with no ill effects (save to the reader).

IIRC what happens here is that since foo() takes a unique_ptr by value, the compiler will insert the construction of a temporary unique_ptr and overload resolution will select the move constructor (since we've provided an rvalue reference, and assuming the wrapped type is move-constructible).
> move leaves the object in an unspecified but valid state
Also note that this is the general semantics, but specific types can define their exact behaviour here. In particular, unique_ptr specifies that a moved-from unique_ptr is "nulled" (equal to nullptr).
Normally yes, if using an object. In this case, the integer types implement the Copy trait, so instead of actually having your first call to add1 take ownership of x, it will just operate on a copy of the value, so your second call will work too.
It depends: if add1 borrows the value then this code is fine. If add1 takes ownership of the parameter then no, this will not work.
In Rust, you can tell whether something is borrowed from the call site, without having to look at the function being called. If a function takes a reference (`&T`), then the caller must pass a reference. There's no implicit by-referencing as in C++'s `&` types.
(I realize you, saagarjha, probably know this already -- this is more a note to other readers.)
What do you mean by "will not work"? Will it not compile? And how do we write that with add1 taking ownership vs. not?
You'll get a compile-time error explaining the situation, yes, including documentation links and possible fixes. The differences between the versions are minimal.
This works on the assumption that whatever "+ 1" really means doesn't itself need ownership of x, e.g. because it mutates it. If it does, then you'll get an error. Or you could do this:

fn add1(x: Thing) -> Thing { x + 1 }
fn add2(x: &Thing) -> Thing { x + 2 }

But Rust has another trick in its pockets:

fn add3(x: &mut Thing) { x.add(3) }

Which, presumably, modifies x instead of returning a copy. It's your job to make sure it makes sense to do that:

let y = add1(x);       // Works; takes ownership.
let z = add2(&y);      // Works, and doesn't take ownership.
let zz = add2(&y);     // So you can do it twice. (Y tho?)
let h = add3(&y);      // Compile-time error!
let hh = add3(&mut y); // Works. You need to specify that you're fine with y being mutated.

Consider the following code:

fn add1(x: Vec<i32>) -> Vec<i32> { // Takes ownership
    return x.iter().map(|&x| x + 1).collect::<Vec<_>>();
}
fn add1_borrow(x: &Vec<i32>) -> Vec<i32> {
    return x.iter().map(|&x| x + 1).collect::<Vec<_>>();
}

The first will take ownership of the thing you pass in, so you can't do something like this:

let x = vec![1, 2, 3];
let y = add1(x);
let z = add1(x); // error[E0382]: use of moved value: `x`

but the following code is OK, because the function just borrows the value instead of owning (and destroying) it:

let x = vec![1, 2, 3];
let y = add1_borrow(&x);
let z = add1_borrow(&x);

This is legal too:

let x = vec![1, 2, 3];
let y = add1_borrow(&x);
let z = add1(x);

but as before you can't use x after this.

(Note, I chose Vec here because Vec does not implement Copy: if I used i32 it would just copy the value in the non-borrowing case, which would make the code fine as no ownership would be transferred.)
It won't compile, because the compiler keeps track of where you moved the value and notices that x now has no value. If you want the function to borrow the argument instead, make it take a reference, &x.
Fail as in it won't compile, not as a runtime error
Wow, elegant. This was probably conceived during the design phase of the language; it seems right.
That same function appears on the second page of Philip Wadler's first Linear Logic paper, but it's called "kill" [1]
But I remember the words "drop" and "dup" being used since the early days of linear logic too. I believe they come from Forth, where they do pretty much the same thing! [2]
[1] http://homepages.inf.ed.ac.uk/wadler/papers/linear/linear.ps
Sometimes Rust programmers write this same function as a closure, which can be known as the 'toilet closure'[1]:
|_| ()
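For instance (a hypothetical snippet), it's a compact way to discard an error value:

```rust
fn main() {
    // |_| () accepts any value and returns unit, here throwing away
    // the ParseIntError so the Result's error type becomes ().
    let ok: Result<i32, ()> = "42".parse::<i32>().map_err(|_| ());
    let err: Result<i32, ()> = "nope".parse::<i32>().map_err(|_| ());
    assert_eq!(ok, Ok(42));
    assert_eq!(err, Err(()));
}
```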
[1] https://twitter.com/myrrlyn/status/1156577337204465664

Do variables go out of scope after last use or when the function exits? I could see the former evolving into the language if it’s not already the default behavior.
In which case there’s only one situation where I could see this useful, and that’s when you are building a large object to replace an old one.
The semantics of

foo = buildGiantObject();

in most languages is that foo exists until reassigned. When the object represents a nontrivial amount of memory, and you don’t have fallback behavior that keeps the old data, then you might see something like

drop(foo);
foo = buildGiantObject();

Most of the rest of the time it’s not worth the hassle.

It used to be at the end of the block, which caused all manner of annoyance. So they spent a lot of effort improving the borrow checker, and now it's 'after last use'.
It's not just a matter of memory use. References and mutable references form a sort of compile-time read-write mutex; you can't take a mutable reference without first dropping all other references. See https://stackoverflow.com/questions/50251487/what-are-non-le... for more.
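A small sketch of what NLL buys (before non-lexical lifetimes, the `push` below was rejected because the borrow in `first` lasted to the end of the block):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // shared borrow of `v` begins here...
    let value = *first; // ...and, under NLL, ends at its last use
    v.push(4); // so taking a mutable borrow here is allowed
    assert_eq!((value, v.len()), (1, 4));
}
```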
NLL didn't affect when objects get dropped. That would be a breaking change. Things still get dropped at end of block (well, approximately... the exact rule is actually quite complex).
This is incorrect. Values still go out of scope and have their destructors run at the same time.
NLL only affects values without destructors.
I think `#[may_dangle]` is an exception to this, and the standard library puts it on many (most?) container types.
It is not an exception; #[may_dangle] does not change the time drop runs. All it does is promise that drop will not access borrowed data, allowing that data to die before drop: https://doc.rust-lang.org/nightly/nomicon/dropck.html
Not sure how Rust mutexes work, but in C++ that wouldn't work. The obvious first example is std::lock_guard, which is implemented by locking in the constructor and unlocking in the destructor. The variable itself never has any "use"; it's just created and held alive as a dummy to denote the locking scope.

Now actually this is a quite nasty object with implicit global side effects which you should avoid in the first place, but for the mutex case I don't know of a better option. Maybe Rust has a better way to handle this?
Rust’s mutex guard has similar semantics, but acts as a smart pointer to the data in the mutex: if you let the guard drop, you no longer have access to the shared data.
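A minimal sketch of that guard behavior:

```rust
use std::sync::Mutex;

fn main() {
    let m = Mutex::new(5);
    {
        // lock() returns a MutexGuard, a smart pointer to the data.
        let mut guard = m.lock().unwrap();
        *guard += 1;
    } // the guard is dropped here, which releases the lock
    assert_eq!(*m.lock().unwrap(), 6);
}
```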
Variables go "out of scope" (in at least one sense) at last use, but are not `Drop`-ed (de-allocated, etc...) until the end of the function. The difference is important because of rust's rule against simultaneous aliasing and mutability. Consider this example:
fn main() {
    let mut a = 1;
    let b = &mut a;
    *b = 2;
    println!("{}", a); // prints "2"
    // *b = 4; // If this line is uncommented, compile time error.
}

Because b is a mutable reference to a, this means that a cannot be accessed directly until b goes out of scope. In this sense, b goes out of scope the last time it's used. _However_, AFAIK, b isn't actually de-allocated until the end of the function.

Of course, it doesn't matter in this trivial case, because b is just some bytes in the current stack frame so there's nothing to actually de-allocate. But if b were a complex type that _also_ had some memory to de-allocate, this wouldn't happen until the end of main(). But in this case, b's scope also lasts until the end of main, which is kind of like adding that last line back in...
This can be seen in the following example, where b has an explicit type:
struct B<'a>(&'a mut i32);

impl<'a> Drop for B<'a> {
    fn drop(&mut self) {
        // We'd still have a mutable reference to a here...
        // If B owned resources and needed to free them, this is where that would happen
    }
}

fn main() {
    let mut a = 1;
    let b = B(&mut a);
    *b.0 = 2;
    std::mem::drop(b); // Comment this line out, get compiler error
    println!("{}", a); // prints "2"
}

In this example, without the std::mem::drop() line, the implementation for Drop (i.e., B's destructor), B::drop, would be implicitly called at the end of the function. But in that case, B::drop() would still have a mutable reference to a, which makes the println call produce a "cannot borrow `a` as immutable because it is also borrowed as mutable" compile time error.

In other words, this "going out of scope at last use" is really about rust's lifetimes system, not memory allocation.
IMHO... this is one of the rough edges in rust's somewhat steep learning curve. Rust's lifetimes rules make the language kind of complicated, though getting memory safety in a systems programming language is worth the trade-off. There's a lot of syntactic sugar that makes things a LOT easier and less verbose in most cases, but the learning curve trade-off for _that_ is that, when you _do_ run into the more complex cases that the compiler can't figure out for you, it's easy to get lost, because there are a few extra puzzle pieces to fit together. Still way better than the foot-gun that is C, though. At least for me... YMMV, obviously.
There’s a number of fun C++ ones similar in spirit: std::move, for example.
std::move's implementation is not quite as elegant, though (this is from the GCC source):

template<typename _Tp>
  constexpr typename std::remove_reference<_Tp>::type&&
  move(_Tp&& __t) noexcept
  { return static_cast<typename std::remove_reference<_Tp>::type&&>(__t); }

It’s a bit ugly because the standard library functions are replete with underscores, and of course it isn’t empty. But I still think it’s quite surprising, as most people would think it actually does some sort of semantic “move”.
> It’s a bit ugly because the standard library functions are replete with underscores, and of course it isn’t empty.
I'll be honest, neither of those are my top issue with that in terms of why I think it's ugly.
The standard C++ way to free resources is the character '}'
Rust has the exact same semantics there. Drop is useful when you need to explicitly notate that a value should end its life early.
void drop(unique_ptr<T>&& x) {}
Would do the same in C++ for values held by unique_ptr: take ownership of the pointer and then free it.
No, it would not free it. It takes a reference to the pointer and does nothing.
unique_ptr disposes of the object it's holding when it goes out of scope, unless it is passed to another unique_ptr or ownership is explicitly released. Presumably klipt would std::move the unique_ptr in, hence the universal reference (which seems unnecessary; just pass it by value).
Am I missing something? Does it not work with std::move?
Yes, you're missing that the type `T&&` is a reference, not a value, and so does not run the object's destructor when it goes out of scope.
Sorry, I edited right after hitting reply to clarify that I'm assuming the intention was to std::move the unique_ptr in, since it was T&& and not T&. Would that still not work?
It would still not work. All std::move does is cast to T&&, enabling overload resolution to pick a function with T&& as its parameter type.
Thanks for explaining!
It would have to be
void drop(std::unique_ptr<T> x) {}
Or, you know, just call someX.reset(); or do someX = nullptr;. The drop call would be way noisier: drop(std::move(someX));
The equivalent to drop in C++ would be
edit: fixed.

template <class T> void drop(T&) = delete; // force callers to move
template <class T> void drop(T&& x) { T drop_me(std::move(x)); }

This is not correct; moves must leave the value in a valid state, because its destructor will still run. The correct version is actually the same as Rust's:

template <class T> void drop(T) {}

(Moving into a local in `drop(T&&)` also works.)

You're correct, sorry (I did have that at one point...). But the equivalent of Rust's pass-by-moving is to pass an r-value reference. Passing a const reference to drop is certainly an error and should be disallowed.
It used to, they got subtler. See https://stackoverflow.com/questions/50251487/what-are-non-le...
NLL does not change those semantics; drop still runs at the end of the scope. It counts as a "use" for the purposes of NLL and thus keeps values with destructors alive.
For some history, it was talked about changing this, but we couldn’t, due to back compatibility and it wasn’t clear it was actually a good idea. This was called “wary drop”.
Would be nice if types could annotate their Drop trait with #[early_drop] if the drop has no visible side effect, so Rust could free the memory earlier.
“Early drop”, thanks autocorrect.
Tell that to the enormous heap object you just forgot you made earlier.
It would work with an object held by unique_ptr though, exactly like this Rust example.
You’re not using std::unique_ptr to manage that?
> Now this might seem like a hack, but it really is not. Most languages would either ask the programmers to explicitly call free() or implicitly call a magic runtime.deallocate() within a complex garbage collector.
The compiler actually implicitly adds drop glue to all dropped variables!
For me, rust is still love & hate, even after 1 year of half-time (most of the free time I have) hacking.
It's a wonderful language but there are still some PITAs. For example you can't initialize some const x: SomeStruct with a function call. Also, zero-cost abstraction is likely the biggest bullshit I've ever heard, there is a lot of cost and there's also a lot of waiting for compiler if you're using cargo packages.
That said, I wouldn't rather use C/C++/Go/Reason/Ocaml/? - that is probably the love part.
BTW: I've recently stopped worrying about unsafe and it got a bit better.
So my message is probably:
- keep your deps shallow, don't be afraid to implement something yourself
- if you get pissed off, try again later (sometimes try it the rust way, sometimes just do it in an entirely different way)
>Also, zero-cost abstraction is likely the biggest bullshit I've ever heard, there is a lot of cost and there's also a lot of waiting for compiler if you're using cargo packages.
"(...) there are two factors that make something a proper zero cost abstraction:
No global costs: A zero cost abstraction ought not to negatively impact the performance of programs that don’t use it. For example, it can’t require every program carry a heavy language runtime to benefit the only programs that use the feature.
Optimal performance: A zero cost abstraction ought to compile to the best implementation of the solution that someone would have written with the lower level primitives. It can’t introduce additional costs that could be avoided without the abstraction."
https://boats.gitlab.io/blog/post/zero-cost-abstractions/
It's not about compile time...
> you can't initialize some const x: SomeStruct with a function call.
You can if it's a `const fn`. The set of features you can use in `const` is small but growing.
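A sketch (the `Config` type and `default_config` function are invented for illustration):

```rust
struct Config {
    retries: u32,
}

// A `const fn` can be evaluated at compile time, so it may
// initialize a `const` (or `static`) item.
const fn default_config() -> Config {
    Config { retries: 3 }
}

const DEFAULT: Config = default_config();

fn main() {
    assert_eq!(DEFAULT.retries, 3);
}
```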
> Also, zero-cost abstraction is likely the biggest bullshit I've ever heard, there is a lot of cost and there's also a lot of waiting for compiler if you're using cargo packages.
While someone else is right that "zero-cost" refers to runtime cost rather than compilation cost, dependencies are the biggest problem.
The program `spotifyd` takes over an hour to compile on my X200 laptop. This is, for reference, about the same amount of time that the Linux kernel and GCC take to compile (actually, I think GCC takes less time...). Most of the compilation time is spent on the 300+ dependencies, which simply wrap (and in some places replicate) C libraries that I already have installed on my system!
I've also heard that Rust compilation is slow, so I put off learning it for a while. I finally dug in deep recently and, to be honest, I don't understand the issue. Rust compilation is quite fast in my experience.
Your comment was the first I've heard of spotifyd, so I downloaded it from github and ran "cargo build". I spent 3 minutes figuring out that I needed to install the "libasound2-dev" package, then I spent less than 2 minutes compiling. I watched the clock to be sure, but after only 2 minutes, Cargo produced a target/debug/spotifyd executable. Maybe you're referring to --release compilation?
I don't have the fastest PC, it's just an 8 year old desktop with a 6 core Intel i7 980x running Ubuntu 18.04. Rust compilation maxes out all the cores, which is really nice.
With the RustEnhanced package added to Sublime Text, editing Rust code is a very interactive experience with practically instant feedback. Running tests is fast too.
I wish I understood why some people are apparently having a very different experience. Maybe it's slow on some operating systems.
> Maybe you're referring to --release compilation?
Indeed.
Ok, well, "cargo build --release" took 3 minutes and 55 seconds, which I would consider reasonably fast for compiling and optimizing 300+ libraries. I guess your laptop has only 2 cores in an older processor, which would explain the difference. Kudos to you for keeping good hardware alive.
Sometimes it takes 100x more without any apparent reason - it's probably because of memory or something. What takes a minute to compile on my MBA sometimes never finishes on a raspi3 (swap is off).
I currently compile on raspi4 and scp the results there to check.
An X200 is also an ancient machine. Do you value your own time so little? Clearly you are annoyed by the compile times?
You can get a $200 CPU that will blast through a kernel compile in 3 minutes.
> An X200 is also an ancient machine. Do you value your own time so little? Clearly you are annoyed by the compile times?
I value my security first and foremost. My X200 has zero binary blobs running on the machine, it also has several other properties that I appreciate.
> You can get a $200 CPU that will blast through a kernel compile in 3 minutes.
That assumes that I even have the money to afford that. Not all of us live in silicon valley.
I hear you. I'm really hoping that Swift improves over the next few years. It seems to be in a great sweet spot with many of the modern language features of Rust (optional / result types, parametric enums, try, generics, static compilation, etc). But it also has the ergonomics of a language like Go that's explicitly designed for writing practical code and just getting work done.
Swift is still missing decent async / await support, generators and promises. Some of this stuff can be written by library authors, but doing so fragments the ecosystem. It's also still harder than it should be to write & run Swift code on non-Mac platforms. And it's also not as fast as it could be. I've heard some reports of Swift programs spending 60% of their time incrementing and decrementing reference counts. Apparently optimizations are coming. I can't wait - it's my favorite of the current crop of new languages. I think it strikes a nice balance between being fancy and being easy to use. But it really needs some more love before it's quite ready for me to use it as a daily workhorse for http servers and games.
Another PITA is the lack of safe static variables.
Rust does have safe static variables. You just need a) to use interior mutability (`static mut` is a very strange feature that should probably never have existed) and b) to ensure that its type is `Sync` (since multiple threads can obtain references to it).
For example, use an atomic integer: https://play.rust-lang.org/?version=stable&mode=debug&editio...
You can also use types built on std::sync::Once and UnsafeCell, like once_cell::Lazy<std::sync::RwLock<T>>. This will get even easier as we get `const fn` constructors for locks in the future.
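A small sketch with an atomic (the `COUNTER` static and `next_id` function are invented names):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A safe static: AtomicU64 is Sync and provides interior mutability,
// so neither `static mut` nor `unsafe` is needed.
static COUNTER: AtomicU64 = AtomicU64::new(0);

fn next_id() -> u64 {
    COUNTER.fetch_add(1, Ordering::Relaxed)
}

fn main() {
    let a = next_id();
    let b = next_id();
    assert_eq!(b, a + 1);
}
```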
Interesting, but this seems so unergonomic that I will still be virtually forced to not use static variables and thus lose their expressive power.
> Also, zero-cost abstraction is likely the biggest bullshit I've ever heard, there is a lot of cost and there's also a lot of waiting for compiler if you're using cargo packages.
Zero cost refers to runtime cost, not compilation cost. Zero cost abstraction is not bullshit.
Yes, I know, and all those crates trying to wrap unsafe C libs are really not free - I've tried.
Maybe just dig into some sources and see what macros are expanded to to see the overhead.
let x = String::from("abc");
std::mem::drop(&x);
std::mem::drop(&x);
std::mem::drop(&x);
std::mem::drop(&x);

FWIW, that won't compile because std::mem::drop requires ownership of the object being passed; your code is trying to pass a reference instead.
This code compiles just fine. It doesn't do anything since immutable references are Copy, and so dropping them doesn't do anything.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
Oh right, I forgot about the interaction with generics in this case. Semantically, I suppose it's creating and dropping a reference 4 times.
This should compile, &T is Copy and so it will not really do anything, but it will still compile.
On my phone so I can’t triple check, but there’s no bound on T, you can pass in any type.
can someone elaborate on this passage:
The beauty of programming language design is not building the most complex edifice like Scala or making the language unacceptably crippled like Go - but giving the programmer the ability to represent complex ideas elegantly and safely. Rust really shines in that regard.
i'm fairly ignorant on the various differences but my general feeling was that Go is quite useful?
The usual quip because Go has no Templates/Generics, so you have to sacrifice type safety all the time by casting to and from Interface{}.
One of the philosophies behind Go is to keep the language extra simple.
See "less is exponentially more".
The same way some electric bikes are restricted to a given speed to keep their user safe. Some people call it "crippled", while some other call it "simple and safe to use".
Keeping a language simple just punts complexity from the language to the code written in that language. These are all the same problems that C has, and Go, for some reason, chose to copy this poor philosophy. It's like the Go designers decided most programmers are too dumb to understand complex languages.
Gotta admit, that really is a cute example. However, I was a bit surprised when the author described Go as "unacceptably crippled." What is he referring to?
Go is simple to the point where it annoys a lot of programmers, especially programmers who like to do fancy stuff with their programming language (the kind of person that's attracted to Rust, for instance).
Current top comment on /r/rust agrees with you:
> I guess I shouldn't be promoting language-bashing, but +1 for calling Go an unacceptably crippled edifice :)
https://old.reddit.com/r/rust/comments/dh4rcz/my_favorite_ru...
I’m mostly working with Rust, and I really like it, and I can even understand why WRITING Golang can be frustrating at times, but it should be clear to everyone that Golang is the best language to READ.
> it should be clear to everyone that Golang is the best language to READ
I often see Go functions with a large number of lines- many of which are noisy boilerplate error handling. Combine that with terse variable names, type inference and it's straight up tiresome to filter out the important parts. These functions aren't even doing all that much.
There is a healthy middle-ground between excessive abstraction a la Enterprise Java and what Go offers.
I completely disagree, coming from a Java and C# background.
There are nice aspects to reading Go, but any function which does some kind of error-prone operation is so hard to grok it gets tiring - the actual purpose is constantly interrupted by 'if err != nil' blocks.
And gods help you if there is some actual non-trivial error handling going on, as you've generally learned to gloss over and can easily miss that one place that actually does something with an error other than log and return.
Also, while admittedly rare, trying to review code that needs to copy structs around, if they also involve pointers or slices, is just hell.
> I completely disagree, coming from a Java
I'm sorry but I couldn't read past this.
God no. Python is the best language to read! Go is too verbose.
The usual suspects for things missing from go are the following: Generics, sum types, match statements, tuple types, compile-time data-race detection, type-safe concurrent-maps, hygienic macros, immutable types/references, functional constructs such as 'map', 'filter', or monads, marker interfaces, better error handling, type-inference for consts that isn't garbage, etc.
Less common complaints are that it's missing: object oriented features like inheritance, a configurable gc (as java has), the ability to work with OS threads, c-compatible stacks for fast c-interop, ownership semantics, type-inference for arguments (e.g. as haskell does), operator overloading, dependent types, etc.
The list of things in the first set can mostly be summed up as "go has a worse type-system than C++/rust/etc, something much closer to java 1 before generics, or c". Basically, the language is intentionally crippled because it intentionally ignores advances in type-theory that have been shown to allow expressing many things more safely.
For example, sum types and match statements make modifying code much safer. People will write switch/if-else-ladder code to do exactly the same sort of thing even without them, the code will just fail at runtime rather than compile-time when a new variant is added or one is not handled by accident.
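A minimal sketch of that safety property (my own example): the compiler rejects any `match` that doesn't cover every variant, so adding a variant turns silent runtime gaps into compile errors.

```rust
enum Payment {
    Cash,
    Card { last4: u16 },
    // Adding a new variant here, e.g. `Transfer`, makes every
    // non-exhaustive `match` below a compile-time error instead of
    // a runtime surprise.
}

fn describe(p: &Payment) -> String {
    match p {
        Payment::Cash => "cash".to_string(),
        Payment::Card { last4 } => format!("card ending {last4}"),
    }
}

fn main() {
    println!("{}", describe(&Payment::Card { last4: 1234 }));
}
```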
From the first list, adding generics would essentially automatically also give you type-safe concurrent maps and map/filter/zip/etc.
Sum types and match would be a clear win for Golang imo, the rest I’m not so sure.
Go code feels very low-level to me, and very boilerplatey. In Rust I can write code that in almost the same way I write JavaScript (but with added type annotations), but Go makes me deal with all the little details, and makes it hard to abstract things neatly.
Lack of generics is a big part of the issue. But more generally the focus on "simple" code means that more sophisticated abstractions are actively eschewed, and personally I find this makes writing Go code quite frustrating.
Your comment is weird. The one making you deal with the little details should be rust, e.g go is garbage collected. Secondly, javascript + type annotations is typescript.
Disclaimer: I’ve never tried to write Go.
Memory management is not the only type of “little detail.” For instance, Rust provides common collection operations (filter, map, find…) in the standard library. In Go (AFAIK), you need to hand-write a loop for each. IMO, the Rust version takes much less mental bandwidth to write and understand.
> rust provides common collection operations (filter, map, find…)

Those are just standard functional methods available in most languages. Go indeed is noteworthy for its lack of features.
Yes, of course. I just described it as a Rust feature because that was the comparison being made in the thread.
Interesting how developer views can differ.
Someone describes Go as "unacceptably crippled" while Uber engineering has 1500 microservices written in Go, making it their primary language.
Businesses often prefer poor languages with lots of cheap programmers readily available to the opposite, because this lets them easily trade off quality against price. Want it done cheap and fast? Get a freshman intern to write it in PHP or JavaScript; if you later want quality, you can then hire some more expensive seniors to fix it up. If you pick Haskell instead, it may be more reliable upfront, but you also have to pay the full price upfront.
Go was explicitly designed to be a poor language of this sort, as evidenced by the infamous Rob Pike quote:
> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
In other words: a language that makes programmers fungible. And how did they accomplish this? By omitting features and barely going for the lowest-common denominator of the languages listed, disregarding any reasons other languages may have for their more ‘advanced’ features like generics. Hence, crippled.
(Frankly, I object more to describing Scala as ‘the most complex edifice’.)
I'm a big fan of scala and really enjoy the language. That being said, the scala community does sometimes advocate an approach that's pretty over-engineered.
These are orthogonal points. Linux Kernel is all in C and possibly drives the world. Doesn't mean it's safe and amazing.
C was the only sensible choice back then.
Uber on the other hand has many languages available yet picked Go. So they certainly don't think it is "unacceptably crippled".
There is an implicit "for my use-case" following "unacceptably crippled". So, go is not "unacceptably crippled" for the use case of creating a ton of small, simple, easily maintainable services at Uber, but it might be "unacceptably crippled" for the use case of having fun with type systems, or whatever the author was interested in at the time they were considering go as an option.
The paragraph starts with "The beauty of programming language design".
Even the most charitable interpretation would conclude it's speaking in broad terms and not just "for my use case".
Full paragraph:
> The beauty of programming language design is not building the most complex edifice like Scala or making the language unacceptably crippled like Go - but giving the programmer the ability to represent complex ideas elegantly and safely. Rust really shines in that regard.
Many teams just start with what they know, and then that sets the tone. That in no way means the language is better. Amazon uses Java. Does that mean it's better than Go? Popularity paints an incomplete picture at best.
Except that Uber started with JavaScript and Python.
Switching to Go was an educated decision. Not a continuation of what was there.
https://www.quora.com/What-is-the-technology-stack-behind-Ub...
I can speak as someone who is building a microservice application in Go and was actually part of the decision to use Go, rather than Java or C# which we traditionally used.
I also find Go-the-language unacceptably crippled. Unfortunately, Go-the-runtime is simply the best runtime for microservice-based applications, if you want a GC, especially if you want to run the system on limited hardware.
All the other options are either too bloated to comfortably spawn in large numbers (Java, C#, Node, Python, Ruby take too long to start, use too much RAM, and/or give you too-large containers), or are too new and unproven (Nim).
So, we chose to use the inferior language and live with a small hit to productivity (language choice has little impact on productivity after the learning stage anyway) for the superior runtime system.
Popularity ≠ Quality.
Lack of a decent type system?
Probably lack of generics?
I wonder if the author would have a more positive impression of Go 2, which is planned to ship with generics...
Slightly less crippled. One can keep the performance and feature comparisons going for a while.
Go shouldn't be Rust. We already have Rust. Though while I say that, I'd still hate to have to use Go.
> or making the language unacceptably crippled like Go
Gotta say, I lost a lot of respect for the author at this point. It’s not like I don’t love Rust - quite the contrary - but if the only takeaway from Go for you is that it is “unacceptably crippled” then I feel you have missed a lot of insight. Go has been one of my languages of choice for over half a decade now, and for good reason.
> > or making the language unacceptably crippled like Go
> ... if the only takeaway from Go for you is that it is “unacceptably crippled” then I feel you have missed a lot of insight.
Perhaps the author used a poor choice of words and instead could have phrased their intent along the lines of:
Go lacks the semantic density needed to express solutions in both a concise and consistent manner.
Were this the case, it would be hard to disagree as Go does, indeed, lack linguistic capabilities present in other programming languages which enable developers to encode system constraints within the language itself. Some might see this as a benefit, but I do not. YMMV.
A similar philosophy of "keep the language dirt-simple so anyone can code in it" was a driving force behind Java (the language) and JavaScript. What people have discovered is that when a programming language does not assist in expressing intrinsic problem complexity explicitly, it becomes implicitly intertwined within the source itself.
I think it's a syntax problem. ASCII doesn't have enough bracket characters to simply & clearly represent necessary language features. So it's harder for an intelligent human to sort and categorize these aspects.
Pre-generics Java is about the appropriate amount of language complexity for our current lingua franca
> Pre-generics Java is about the appropriate amount of language complexity for our current lingua franca
I am not sure to whom you are referencing as being "our", so I can only say I disagree as there are many developers which are quite adept in languages having more complexity than "pre-generics Java."
Perhaps you meant to reply to a different comment?
It surely isn't mine, otherwise I wouldn't be happily using Java 13 right now.
Also even Google does have more stuff going on Java, C++, Dart, Kotlin and TypeScript than Go, which has more uptake outside Googleplex.
Java is tolerably mediocre now but vintage 1996 Java was awful (and slow). I enjoy Scala, but even I can see why it's more than most of the industry is ready for. If there's a sweet spot, Kotlin seems close to it.
Err, we had 10x the features of "Pre-generics Java" even in 1970s Lisp and Smalltalk...
I will gladly trade a little bit more language complexity if it means the complexity of the code that I write using that language is simpler.
This. When a platform refuses to solve a problem, that problem doesn't go away. The users are forced to roll their own solutions, none of which have the same test coverage and optimization opportunities.
Generics don't make Java that much more complex for your workaday Java coding.
Speak for yourself.
> Go lacks the semantic density needed to express solutions in both a concise and consistent manner.
That is a difficult statement for a Go user to parse.
Is it referring to a lack of language features that results in verbosity or boilerplate?
There's a lack of language features that result in verbosity, boilerplate, less performance and/or less type safety.
It may be an unnecessary dig, but the author may indeed be familiar with Go and still think it's unacceptably crippled for all their use cases. The whole post is just their opinion.
In my anecdotal experience, the type of programmers that evangelize and talk shit about programming languages tend to be on the less informed side of the knowledge spectrum.
I occasionally program Go because it has an amazing amount of libraries - I basically use it for random microservice type things that glue systems together. (Most recently, an HTTP login endpoint that sends and receives XMPP messages to authenticate the user.)
I don't like it. I program Rust the rest of the time. I'm considering learning Perl5 so that at least my type system lets me do vaguely expressive things - I'm utterly fed up of half my code looking like:

    foo, err := something()
    if err != nil {
        return nil, someErr{err}
    }

I got very used to Rust's try!() macro, now the ? operator, and being able to map over Results to convert between error types in a single line that feels much less noisy - I have functions in Go that are a dozen or so lines that would've been 3 in Rust, and tbh I struggle to follow what the function actually does when 3 out of 4 lines are to do with the failure case.

I would consider "unacceptably crippled" to be a reasonable description of Go, even if it's the best tool I have for some jobs.
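As a sketch of what that Rust one-liner error handling looks like (a hypothetical example of mine, using `ParseIntError` for concreteness): each `?` replaces a three-line `if err != nil { ... }` block, converting error types via `From` where needed.

```rust
use std::num::ParseIntError;

// Each `?` propagates the error immediately, so the happy path
// reads straight through with no interleaved failure handling.
fn parse_and_add(a: &str, b: &str) -> Result<i64, ParseIntError> {
    let x: i64 = a.parse()?;
    let y: i64 = b.parse()?;
    Ok(x + y)
}

fn main() {
    assert_eq!(parse_and_add("2", "40"), Ok(42));
    assert!(parse_and_add("2", "forty").is_err());
    println!("ok");
}
```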
"unacceptably crippled" is a pejorative meant to insult, not a descriptive criticism.
> In my anecdotal experience, the type of programmers that evangelize and talk shit about programming languages tend to be on the less informed side of the knowledge spectrum.
Or maybe they have just expanded their "knowledge spectrum" and are experiencing an effusive moment?
Talking shit isn't effusive. It's possible to enjoy a programming language without openly denigrating others you regard as inferior.
It's apparent you have dealt with negative situations such as you describe. Here is a haiku which may be helpful:

Those who can will do,
Those who cannot will seek to,
Hide that fact from you.

HTH

> Talking shit isn't effusive. It's possible to enjoy a programming language without openly denigrating others you regard as inferior.
Both points you make I agree with. I was simply offering a possible explanation for those who exhibit overly passionate opinions when they become enlightened.
When people take an epiphany to the point of denigrating those around them, then they likely have issues beyond what any programming philosophy could possibly encode.
Sure, and my opinion is that this is a very shallow takeaway. My whole comment is just my opinion.
> if the only takeaway from Go for you is that it is “unacceptably crippled” then I feel you have missed a lot of insight.
Um, the whole point of Go was to be "opinionated and acceptably crippled". That's why the creators built it the way they did. Go has more than a few things where "Me, but not for thee" rules--the compiler can do X but you cannot. If you find that is an acceptable tradeoff for the other features of Go, great!
Lots of modern languages, however, are trying to go the other way. They are trying to give the programmer every bit of power that the library/compiler programmers have. This has its own failure modes that you may find unacceptable. That's fine too.
I personally dislike the Rust vs Go discussions. Those two languages in particular really don't have domain overlap and are like comparing apples to screws.
Minor nit: it would've been better if you'd put 'screws and apples'. 'Rusty screws' and 'Go apples!' sound better than when interchanged :P
It is crippled, admittedly by the authors themselves. The error-checking pattern, no generics, etc. are all there to keep it "simple", yet they are things most other programming languages have. Have you used Rust? The borrow checker makes a lot of sense.
I feel like I shouldn’t have to answer this question, but Yes, of course I’ve used Rust. Why do you question this? I said nothing about Rust at all, other than that I like it, and whether Rust is good or not has little to do with Go being “crippled.”
But of course, Rust is not flawless. In fact, up until recently it was kind of annoying, before non-lexical lifetimes became a part of the language. It's also a very large and complex language compared to Go. This is not unilaterally a bad thing, but just as every line of code comes at a cost, so too does every language feature, and if enough language features fail to pay the rent your programming language will end up feeling bloated.
What Go lacks is exactly its strengths. If you criticize C for not having Java-style exceptions, people will look at you funny. There is some divide in the community over error handling but I defend that Go’s verbose and stupid error handling pattern is my favorite part of the language, and changed how I code inside and outside of Go. I also appreciate Go’s simple but effective patterns for composition-based object oriented programming, and for having a decent concurrency model (not that it is without flaws: the limitation of Go’s memory safety is definitely an issue here.)
Everything comes at a cost. The borrow checker brings immense promise for security, but that does not mean other approaches to memory safety or language design are suddenly obsolete. It’s more complicated than that, and I consider failure to understand this to be a sign that someone is not keeping an open mind.
>>What Go lacks is exactly its strengths.
Surely you understand that this is a very self-serving attitude, though?
It's like someone points out that your car is crippled because it has a turning radius of 2 degrees and also has no brakes, and you go, "no but you see, what this car lacks is exactly its strengths! And it's those strengths that I enjoy the most when driving this car!"
“Crippled” has a negative connotation. I’d say it is not crippled but kept self-contained without a multitude of overlapping features.
I work with Rust only these days, it's really an awesome language and I wish everyone working with systems languages would switch to Rust. Yet, I find Golang to be a much clearer language to read (and I read a shit ton of code). I hope they don't add generics, but I wish they would add options, results, sum types in general, redeclaring variables, the ? operator, etc.
Interesting, I've been reading a lot of Golang of late as we move into k8s world, and I find it hard to read, mainly in terms of package layout - types and their related functions are often in very different places.
But it's certainly less semantically dense compared to Rust or Scala etc.
I doubt that any generic syntax will significantly impact Go's readability. The only syntax overhead will be in type definitions, not in type usage, and the latter appears far more often than the former. Couple that with the current need to add extra type assertions at usage sites, and you may be looking at an overall readability improvement.
How would Option<T>, Result<T, E>, and sum types work without generics?
Native support is the answer.
Empty interface perhaps.
Crippled is a good way of describing "use a map[T]struct{} when you need a set of T" in my mind.
Regardless of the merit of his criticism, it clearly derails the whole discussion about this blog post, which was just about a cool feature of Rust.
Go is a fine GC language - in many ways, it's a lot better than the likes of Java! But it's nonetheless way too clunky for many use cases, which is what the OP may have meant by their remark. And the concurrency support comes with a lot of nasty pitfalls, especially compared to Rust or even Pony.
Rust was born as a critique towards C and C++ and putting down other languages is part of the community DNA.
Go is a favorite target, but Python, Ada, Java, C# etc didn't remain unscathed either.
> unacceptably crippled like Go
I just spit mezcal on a stranger.
Were they okay with it once you explained what was so funny?
It was a friend of a friend. The mezcal coming out of my nose was humorous to them.