Turning off Rust's borrow checker completely (2022)
I wonder if you could actually write a compiler that does this for compilation speed. Have one mode where the compiler is super fast, has no error checking, and can compile some malformed programs, and another mode where the compiler does all the checks.
This would be useful for dependencies for example, in most circumstances you can safely assume that dependency code doesn't contain compilation errors, so any passes that check for them are just pure and unnecessary overhead.
I don't know how practical this is, I don't have much experience in compiler design (beyond very small and toy compilers).
> This would be useful for dependencies for example, in most circumstances you can safely assume that dependency code doesn't contain compilation errors, so any passes that check for them are just pure and unnecessary overhead.
It's not an identical case, but TypeScript offers this with `skipLibCheck`. Most people use it. It's generally good--until it's really not good and you eat half a day unwinding a deceptively broken dependency.
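For reference, the flag mentioned is a single compiler option; a minimal `tsconfig.json` sketch:

```json
{
  "compilerOptions": {
    "skipLibCheck": true
  }
}
```

With this set, the compiler skips type-checking all `.d.ts` declaration files, including those shipped by dependencies.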
Sounds like if the code is in JavaScript, it would already be broken anyway
Yes... Rust!
I don't know if you can toggle it from the front-end, but the Polonius borrow checker includes a "location insensitive" mode which is faster but accepts fewer programs.
I think the GP had something else in mind. You're mentioning a mode where the rules are more stringent, allowing for a faster check and consequently faster compile times. The GP is picturing a mode where the checks are skipped altogether, and only the necessary transformations occur, with an assumption that everything is already correct. I'm wary of having something like that outside of nightly, unless cargo publish had more checks than it does today.
It's been a while since I've programmed in Rust, and I was initially surprised that the get() call returned garbage. Could anyone explain why that is? I mostly program in C++ and C#.
After thinking about it, my intuition is that:
- the Vec was passed by value to the function, which in Rust means something different than passing a std::vector by value in C++, because...
- due to the language's move semantics, its reference counter remained 1 while inside the function, but decreased to 0 when returning to main, since in Rust if it isn't a borrow (ref) then it's a "steal" - the callee "steals" the Vec completely from the caller. Or something like that.
- Since the reference counter is 0 when returning to main, the Vec is released, akin to calling its destructor if this was C++.
- SOMETHING changed the value of either the internal pointer to the heap inside the stack-allocated Vec (so now it points to garbage), OR something overwrote the Vec's content with some garbage. I'm not sure what that something is, and why would it do that.
I apologize if my explanation/intuition above is garbage in itself, like I said it's been a while since I programmed in Rust.
Edit: formatting
You're mostly correct: it is moved into the function (pass by value), and then, at the end of the function's scope, the destructor is called automatically (as the compiler inserts a call to `drop()` at the end of the function), which de-allocates the vec, causing the reference obtained in `main()` to become a dangling pointer.
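That drop-at-end-of-scope behavior can be observed without provoking actual UB (a minimal sketch; `Payload` and `consume` are made-up names, using a `Drop` impl to record when the destructor runs):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Tracks whether the value's destructor has run.
static DROPPED: AtomicBool = AtomicBool::new(false);

struct Payload;
impl Drop for Payload {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

// Takes ownership (pass by value): the compiler inserts a drop
// at the end of this function's scope.
fn consume(_p: Payload) {}

fn main() {
    let p = Payload;
    assert!(!DROPPED.load(Ordering::SeqCst));
    consume(p); // ownership moves into `consume`
    // Back in main, the value has already been destroyed:
    assert!(DROPPED.load(Ordering::SeqCst));
}
```

If a reference into `p`'s storage had survived the call, it would now be dangling, which is exactly the scenario the borrow checker normally rejects.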
Thank you!
An alternative, more correct but less interesting, explanation: you really need to look at the disassembly (with your specific build of the Rust compiler with the same flags) to see what's really happening. This is why UB (undefined behavior) is dangerous.
For example, the compiler could decide that the entire allocation is unnecessary and elide it, if it's known that the Vec is very small. The compiler could also decide to pass it by ref. Remember that the compiler merely has to preserve your intent, beyond that it is allowed to do whatever it pleases.
I have limited experience with Rust, and the combination of slow compile times and the borrow checker is irritating. My Rust code was less than a thousand lines and it took 10-20 seconds to compile.
Rust really made me appreciate garbage collection after the number of times I resorted to things like Arc<Box<..>>.
I highly recommend checking out makepad [1] - they have 100k+ lines of Rust code and the compile time is around 10-15 seconds on commodity hardware.
However, they are obsessed with performance. The reason for such speedy compile times is that makepad has almost no dependencies.
IMHO Makepad also has an exceptionally readable Rust code style, which is a very rare thing (maybe that simple Rust style even contributes to the good build times, dunno) - but also: from the pov of a C programmer, 200 kloc in 10..15 seconds is still quite bad ;)
How quickly can you verify the memory safety of that C program in addition to compiling it though? When comparing like to like, C no longer looks so fast.
Rust code with unsafe still compiles slowly. And safe subsets of C still compile quickly.
That is exactly the opposite of my experience.
I used actix web - could that be the reason for the slow compiles?
I am not completely sure, but I think pretty much all frameworks in Rust have complex dependencies and codebases not optimized for fast compile times. You could check out Axum [1], which at first glance seems similar to actix [2]. Both use tokio, which is by itself pretty big, I think.
Folks at makepad poured enormous effort into keeping the dependency graph minimal.
> after the number of times I resorted to things like Arc<Box<..>>.
... What are you doing? It's definitely unusual.
> ... What are you doing? It's definitely unusual.
No it's not, it's extremely common. `Arc<Box<...>>` or `Arc<Mutex<Box<...>>>` or similar is used all the time when you want to share a mutable reference (on the heap, in the case of Box); especially in the case of interior mutability[1]. It's pretty annoying, but I have learned to love the borrow checker (although lifetime rules still confuse me). It really does make my code extremely clear, knowing exactly what parts of every struct is shareable (Arc) & mutable (Mutex).
[1] https://doc.rust-lang.org/book/ch15-05-interior-mutability.h...
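A minimal sketch of the `Arc<Mutex<...>>` shared-mutable-state pattern being described (here without the `Box` layer, since `Arc` itself already heap-allocates; `collect_from_threads` is a made-up name):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` workers that each push one value into shared state.
// Arc gives shared ownership across threads; Mutex gives interior
// mutability, so each worker can mutate through a shared handle.
fn collect_from_threads(n: usize) -> Vec<usize> {
    let shared = Arc::new(Mutex::new(Vec::new()));
    let handles: Vec<_> = (0..n)
        .map(|i| {
            let shared = Arc::clone(&shared);
            thread::spawn(move || shared.lock().unwrap().push(i))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // All other owners are gone, so we can unwrap the Arc and Mutex.
    Arc::try_unwrap(shared).unwrap().into_inner().unwrap()
}

fn main() {
    let mut values = collect_from_threads(4);
    values.sort();
    assert_eq!(values, vec![0, 1, 2, 3]);
}
```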
> No it's not, it's extremely common.
Arc-Mutex yes, Arc-Box no.
It's like "I want a shared, immutable reference to something recursive, or unsized". The heap part makes no sense because it's already allocated there if you use an Arc: https://doc.rust-lang.org/std/sync/struct.Arc.html
> The type Arc<T> provides shared ownership of a value of type T, allocated in the heap.
Moreover, you don't even need the box for a dyn: https://gist.github.com/rust-play/f19567f8ad4cc00e3ef17ae6b3...
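That point can be demonstrated directly (a small sketch with made-up `Greet`/`make_greeter` names): `Arc::new` of a concrete type coerces to `Arc<dyn Trait>` with no intermediate `Box`:

```rust
use std::sync::Arc;

trait Greet {
    fn greet(&self) -> String;
}

struct English;
impl Greet for English {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

// Arc already stores its contents on the heap, so an unsized
// trait object needs no extra Box:
fn make_greeter() -> Arc<dyn Greet> {
    Arc::new(English)
}

fn main() {
    let g = make_greeter();
    assert_eq!(g.greet(), "hello");
}
```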
> Arc-Mutex yes, Arc-Box no.
Yeah, looks like Arc-Box is kind of pointless, but isn't there some thread locality reason why people wrap `Box` in `Arc`? I remember reading something about it a while ago but maybe I'm misremembering.
To play devil's advocate:
`Arc<T>` places the refcounts immediately before `T` in memory. If you are desperate to have `T` and `T`'s refcounts be on different cachelines to reduce false sharing in some contrived scenario, `Arc<Box<T>>` would technically accomplish this. I think a more realistic optimization would be to pad the start of `T` however.
`Box<Box<T>>` and `Arc<Box<T>>` are thin pointers (size ≈ size_of::<usize>()) even when `Box<T>` is a fat pointer (size ≈ 2*size_of::<usize>() because `T` is `str`, `[u8]`, `dyn SomeTrait`, etc.). While various "thin" crates are typically saner alternatives for FFI or memory density (prefixing lengths/vtables in the pointed-at data instead of incurring double-indirection and double-allocation), these double boxes are a quick and dirty way of accomplishing guaranteed thin pointers with only `std` / `alloc`.
I would not call either of these use cases "extremely common" however.
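The thin-pointer claim above is easy to check with `size_of` (a small sketch):

```rust
use std::mem::size_of;
use std::sync::Arc;

fn main() {
    // `Box<str>` is a fat pointer: data pointer + length.
    assert_eq!(size_of::<Box<str>>(), 2 * size_of::<usize>());
    // Boxing the box gives a thin pointer to the fat pointer.
    assert_eq!(size_of::<Box<Box<str>>>(), size_of::<usize>());
    // Arc<Box<str>> is likewise a single thin pointer to the
    // refcounted allocation holding the fat pointer.
    assert_eq!(size_of::<Arc<Box<str>>>(), size_of::<usize>());
}
```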
I don't remember the exact thing I used - that's why I said "I resorted to things like...". Also, I used actix web, which could be the reason for the slow compile times.
At that point a regular GC is probably faster (at least from my experience of doing memory management in C++ with ref-counted smart pointers, which has a 'death-by-a-thousand-cuts' performance profile, e.g. the refcounting overhead is smeared over the whole code base and doesn't show up as obvious hotspots in the profiler).
Compiling clippy certainly doesn't take that long and it's a substantially larger project. What you describe is pretty weird, to be honest.
I found the source code of the macro cited[1][2] way more interesting than the article. It's not that big of a deal to find where compilation error counts are incremented in the compiler and just, you know, not increment them. The macro is pretty cool though (turning bounded into unbounded lifetimes).
[1] https://docs.rs/you-can-build-macros/0.0.14/src/you_can_buil...
[2] https://docs.rs/you-can/latest/src/you_can/lib.rs.html#17-25
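For the curious, the lifetime-unbounding trick can be sketched in a few lines (a hedged illustration of the general technique, not the macro's exact source; `unbound` is a made-up name):

```rust
// Transmuting a reference "unbounds" its lifetime: the caller can
// pick any 'a, so the borrow checker no longer ties the result to
// the referent. Dereferencing it after the referent is gone is
// undefined behavior.
unsafe fn unbound<'a, T>(r: &T) -> &'a T {
    std::mem::transmute(r)
}

fn main() {
    let x = 42;
    // This only happens to be safe because `x` is still alive:
    let r: &'static i32 = unsafe { unbound(&x) };
    assert_eq!(*r, 42);
}
```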