Rust fact vs. fiction: 5 Insights from Google's Rust journey in 2022

opensource.googleblog.com

250 points by rhaen 3 years ago · 232 comments

estebank 3 years ago

Really happy to see these results on Googlers' perception of the quality of the Rust compiler's errors, an area I'm highly passionate about. I'd like to take this as an opportunity to encourage the 9% (and the other 91% as well) to file tickets when we don't meet the bar we've set for ourselves.

  • nmlt 3 years ago

    I haven’t tried debugging with lldb in some time so I don’t know whether it has improved significantly, but couldn’t the 9% also include that?

    • estebank 3 years ago

      It could be the case. There are multiple separate efforts to improve the debugging experience in production (improving monitoring, cheap profiling, improving logging, encoding more info in the DWARF output), but all of those will take some time to reach the same level of quality that, for example, Java has today.

  • larsberg 3 years ago

    Thanks for all of your work (in addition to that of others) - it really shows! We do encourage people to create minimal repro cases and open bugs whenever possible. And, even more ideally, to consider contributing a PR upstream...

  • WaffleIronMaker 3 years ago

    I'm a junior developer learning Rust, and I just want to say thank you for all of the work you've put into making the errors high quality. It makes learning the language a much better experience.

softirq 3 years ago

> Low-level Operating Systems Sr. User Experience Researcher

Wow, I didn't even know this job existed. IMO Rust as a C++ replacement is fine; Rust as a C replacement has more trade-offs than I care to make. C is still far simpler (you can still read K&R in one day and keep most of the language in your head), has faster compile times, and the pain points (cough, macros) are still often pain points in Rust.

I think the biggest thing is that systems programming still requires a language that gets out of the way so you can focus on very technical problem domains where what the hardware is actually doing really matters. Rust is a language designed to get in your way and force you to create type abstractions. Adding too many abstractions can be exceedingly dangerous in an environment where not having a full view of how memory and hardware registers are laid out leads to even worse errors than just buffer overflows. IMO Rust makes this type of programming more difficult, just as C++ does.

  • jenadine 3 years ago

    The "simplicity" of C is not a good thing. The Brainfuck language is even "simpler" and you can read the spec in 2 minutes. But that does not make it easier, because all the complexity is in the usage.

    The abstraction layers that one can build with Rust allow the programmer to actually focus on the business logic instead of trying to get low-level details right.

    • softirq 3 years ago

      I think you missed the part about systems programming. When you are programming real hardware, the focus is entirely on low level details. You need to know that the commands being sent to the device, the device state, and the ownership of resources by the device (which Rust doesn't solve for) are correct.

      The innovation of Rust is the borrow checker, which is primarily of interest to systems programmers. If your primary interest is highly abstracted business logic, there are tools that don't require manual memory management or being pedantic about the different types of strings. You could just use Go, Java, Haskell, Python, etc.

      • pkolaczk 3 years ago

        > there are tools that don't require manual memory management or being pedantic about the different types of strings. You could just use Go, Java, Haskell, Python, etc.

        1. Rust doesn't force you to do manual memory management. Rust memory management is automatic by default and only if you really, really want to, you can do it manually.

        2. Memory is not the only resource. The GCs in the languages you listed only solve the memory management problem; for the other types of resources, their ergonomics are often worse than C's - you have to remember to close the resources manually and you get virtually no help from the compiler (see the sketch after this list).

        3. None of the listed languages address problems related to concurrency, e.g. data races. Ok, Haskell kinda avoids the problem by imposing other restrictions - by not allowing mutation / side effects ;)

        4. Rust offers way better tools for building high level abstractions than Go, Python and Java. It has set a very high bar with algebraic data types, pattern matching, traits/generics and macros.
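
        To make point 2 concrete, here's a minimal sketch of compiler-driven cleanup of a non-memory resource; the file path and message are just illustrative:

            use std::fs::File;
            use std::io::Write;

            fn write_log(path: &str) -> std::io::Result<()> {
                let mut file = File::create(path)?; // acquire the resource
                writeln!(file, "hello")?;           // use it; errors propagate with ?
                Ok(())
            } // `file` is dropped (and the handle closed) here on every path, error or not

        There is no close() to forget and no finally block to write; the compiler inserts the cleanup.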

        • kaba0 3 years ago

          1. Rust does manual memory management. It has some syntactic sugar for it in the form of compiler-enforced RAII, but that is still manual memory management for all practical purposes. A good distinction to make is whether low-level memory/ownership details leak into public APIs. This is trivially true for Rust, while it is not true of managed languages.

          3. Rust only addresses data races, not race conditions in general. All the other race conditions are still on the table. It is a good thing to have, but I think they are the least problematic, and easiest to solve, part of concurrency issues.

          4. All of these have been known for like 3 decades. There are plenty of managed languages with these, ML, OCaml, Haskell, Scala. But I think your claim is subjective at best.

          • pkolaczk 3 years ago

            > It has some syntactic sugar for it in the form of a compiler-enforced RAII, but that is still manual memory management for all practical purposes.

            Then you've got a different definition of "manual" than mine. Manual means that the developer has to insert calls to allocate / deallocate memory and that the developer is responsible for proving the correctness of those calls. Automated means those calls are done by the runtime or by the compiler automatically, and the compiler makes sure they are correct. In the case of Rust, those calls are inserted automatically by the compiler.

            > memory/ownership details leak into public APIs

            The fact that ownership is a part of the public API is a good thing, similarly to how it is a good thing to specify that an argument is an integer and not a string.

            > There are plenty of managed languages with these, ML, OCaml, Haskell, Scala.

            I referred to the ones mentioned in the above comment, which mentioned Java/Go/Python. Haskell/Scala/OCaml/ML are quite niche even compared to Rust these days.

            But even though Haskell / Scala might get close on some type-system features, they don't offer a similar experience to Rust in other areas. Haskell is more restrictive in terms of managing state than the borrow checker, and Scala tooling / compile times have always been horrible.

            > Rust only addresses problems related to data races, not as an example. All the other race conditions are still on the table.

            This is like saying a statically typed language doesn't stop you from putting a string telephone number into a string surname field. Sure it doesn't. But despite that, the value of static types is hard to overestimate.

            In practice, the borrow checking + RAII + Send/Sync rules can be used to make the other types of concurrency problems very unlikely by properly modeling the APIs. Sure, no language can protect from all concurrency problems in general, but at least Rust gives you some good tools. For instance it is trivial to forbid concurrent access to something that shouldn't be accessed concurrently and let the compiler enforce that. Now try enforcing that in your "business oriented language of choice".

            In my experience the majority of concurrency-related problems in real large-scale software development happen when some code not designed to handle concurrency accidentally ends up being executed concurrently, because developers don't realize something is shared and mutated at the same time. Another common type of issue is with communicating concurrent threads of execution, when one sends a message but the receiver is not there on the other end because of a premature exit, e.g. due to an error, leading to a deadlock. Rust protects from those really well.
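
            A minimal sketch of the "forbid concurrent access and let the compiler enforce it" idea; the Counter type is made up, and RefCell just stands in for any state that is not thread-safe:

                use std::cell::RefCell;

                // RefCell is !Sync, so a type containing one cannot be shared across threads.
                struct Counter {
                    value: RefCell<u64>,
                }

                fn main() {
                    let counter = Counter { value: RefCell::new(0) };
                    std::thread::scope(|s| {
                        // error[E0277]: `RefCell<u64>` cannot be shared between threads safely
                        // s.spawn(|| *counter.value.borrow_mut() += 1);
                        s.spawn(|| println!("threads may only touch Sync data"));
                    });
                    *counter.value.borrow_mut() += 1; // single-threaded use is fine
                    println!("{}", *counter.value.borrow());
                }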

            • kaba0 3 years ago

              > The fact that ownership is a part of public API is a good thing

              A library’s next version which switches up some internal representation’s memory handling should ideally not mess up your application, but it also mandates a higher refactor rate when you are only working within your application’s boundaries. These are worthwhile tradeoffs for the niche Rust is targeting, but not for every use case.

              I’m not saying Rust is a bad language, I really like using it for its intended niche of complex applications where absolute control is needed, like a browser engine. But it is not a panacea and I would definitely not choose it for a CRUD webapp.

              • pkolaczk 3 years ago

                > A library's next version which switches up some internal representation's memory handling should ideally not mess up your application

                It doesn't have to because internal memory representation can and should be abstracted out, and Rust gives a plethora of tools to do that.

                Your argument works against static typing in general. The next version changes the address representation from String to Address (in a managed language) and messes up your app. That's the same thing.

                • foldr 3 years ago

                  If version 1 of your library has a function that returns a reference to T, then that constrains your choices of implementation for the underlying data structures containing Ts more than they would be constrained in a garbage-collected language. So unless there is to be a ban on functions returning simple references, Rust is always going to be a little less flexible in this respect. That is fine and expected given the overall design goals of the language, but there's no point pretending that it's not the case.

                  The simplest concrete example, I guess, would be a function that returns &str. There are plenty of Rust APIs out there that have functions with this type signature.

                  • pkolaczk 3 years ago

                    > If version 1 of your library has a function that returns a reference to T, then that constrains your choices of implementation for the underlying data structures containing Ts more than they would be constrained in a garbage-collected language

                    This is IMHO a good thing. If it returns a reference to T, it means it still owns it and allows only temporary usage of it. These are the semantics of ownership and do not have anything to do with memory management. If you wanted to allow sharing for an unspecified lifetime, sure, you can. There is Rc/Arc.

                    I've fixed plenty of bugs in code written in managed languages, where a reference to T was handed out from a library (because there is no other choice - everything is a reference) and then someone stored it for longer than it was valid, leading to a logical equivalent of use-after-free.

                    E.g. get an entity object managed by Hibernate. Pass it up outside of the context of the Hibernate session. It will likely blow up because the object references a session that's now closed. Rust's ownership model would prevent exactly that problem.

                    I find this "flexibility" of managed languages actually a problem in large codebases, similarly to how the flexibility of goto is universally considered bad. It severely hinders maintainability. It allows references to be passed freely, creating implicit, complex, often cyclic reference graphs which are very hard to reason about.

                    In my Rust code 99% of objects don't need shared ownership. But managed languages make shared ownership the default, optimizing for the edge-case.

                    BTW, your statement can be rephrased to: "If version 1 of your library has a function that accepts a reference to T, then that constrains your choices of values of T more than it would be constrained in a dynamically typed language."

                    You may say that you can use Any / Variant / Object in a statically typed language to overcome that limitation. True, and similarly you can use Rc/Arc/Copy types in Rust.

                    This is all the same thing. It just takes static typing to the next level. Not only does it allow expressing constraints on values, it also allows expressing constraints on when they can be used.
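
                    A tiny sketch of the Hibernate-style bug above being rejected at compile time; the Session type is a made-up stand-in, and uncommenting the last line reproduces the error:

                        struct Session {
                            data: String,
                        }

                        fn get_entity(session: &Session) -> &str {
                            &session.data
                        }

                        fn main() {
                            let stale;
                            {
                                let session = Session { data: "row".to_string() };
                                stale = get_entity(&session);
                                println!("{stale}"); // fine: the session is still alive here
                            } // `session` dropped here
                            // println!("{stale}"); // error[E0597]: `session` does not live long enough
                        }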

                    • foldr 3 years ago

                      > This is a good thing.

                      This is dogmatic. It can be a good thing sometimes, in some domains. In other instances it can just be a pain.

                      Let's say I'm using a 'names' library that provides the following utility function:

                          fn first_name<'a>(name: &'a str) -> &'a str
                      
                      My code happily uses this function (which we can assume is not performance critical). At some point, the author of the library notices that there are cultures where the nearest analogue of a 'first name' is not always a contiguous substring of the full name. To fix this bug they must change the function's type. They may choose e.g.:

                          fn first_name<'a>(name: &'a str) -> Cow<'a, str>
                      
                      I'd like to update my version of the library to get the bug fix, but can't do this for free. I have to update the calling code, and possibly even change some of my own internal data representations. This contrasts with pretty much any GCed language. For example, in Go the type of the function would just be

                          func firstName(name string) string
                      
                      and the bug fix would require no change in the API.

                      Now let's relate this example back to what you originally said in response to kaba0:

                      >> [kaba0:] A library's next version which switches up some internal representation's memory handling should ideally not mess up your application

                      > It doesn't have to because internal memory representation can and should be abstracted out, and Rust gives a plethora of tools to do that.

                      The above is a simple example of why this is not true. Any time you write a function that returns a reference, you are limiting the changes you can make to internal representations (both in the library itself and the calling code) without making a breaking API change.

                      Please don't respond by saying "this API was badly designed in the first place!" Most languages don't give you the opportunity to design APIs badly in this particular way. If all APIs were perfectly designed on day one then of course we'd never have to worry about API changes.

                      Again, none of this is to bash Rust. I just think it is important to be realistic about the downsides as well as the upsides of Rust's ownership system.

                      • pkolaczk 3 years ago

                        By writing:

                            fn first_name<'a>(name: &'a str) -> &'a str
                        
                        you promised the caller that the output is built directly from the input slice. That is your choice. I'll show you later that you didn't have to.

                        Similarly by writing this in Go:

                            func firstName(name string) string
                        
                        you promised the function accepts a string.

                        Then you want to change the semantics (the contract) of the function, break your promise and you complain you have to change the signature. You can't do that in any statically typed language.

                        In Go's case, if you suddenly wanted to change the memory representation of name to something other than string, e.g. to a name struct:

                            func firstName(first_name name) name
                        
                        then obviously you have to change the signature. This is the same problem.

                        If you don't want your caller to be affected by lifetimes, just don't specify them in the signature:

                            fn first_name(name: String) -> String
                        
                        this is perfectly fine in Rust.

                        You may say it might be slow because it forces a copy, and forces a particular string implementation. So as I said, Rust gives you tools to abstract out the implementation details:

                            fn first_name(name: impl AsRef<str>) -> impl AsRef<str> {
                               // any one of these bodies is correct:
                               // return "foo";
                               // return String::from("foo");
                               return name;
                            }
                        
                        The flexibility actually goes much further than just relaxing the lifetimes. With this signature I can change the name representation from String to any type that can be exposed as a slice, with no additional runtime penalty like virtual calls, which you'd otherwise need in Go/Java.
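
                        A quick usage sketch of that signature (the splitting logic is just an illustration); note that callers don't change when the representation does:

                            fn first_name(name: impl AsRef<str>) -> impl AsRef<str> {
                                // Return an owned String; a later version could return something else
                                // that implements AsRef<str> without breaking callers.
                                name.as_ref().split_whitespace().next().unwrap_or("").to_string()
                            }

                            fn main() {
                                // &str, String, or any future wrapper type all work unchanged.
                                println!("{}", first_name("Ada Lovelace").as_ref());
                                println!("{}", first_name(String::from("Grace Hopper")).as_ref());
                            }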

                        > I just think it is important to be realistic about the downsides as well as the upsides of Rust's ownership system.

                        So the downside is that it offers you more choices and allows you to express more constraints in the signatures, and you can choose wrong. I guess that's quite ok in a general purpose language that wants to be applicable to many different niches.

                        • foldr 3 years ago

                          As I said in the previous comment:

                          >Please don't respond by saying "this API was badly designed in the first place!" Most languages don't give you the opportunity to design APIs badly in this particular way. If all APIs were perfectly designed on day one then of course we'd never have to worry about API changes.

                          I did edit in that paragraph ~5 minutes after submitting the comment, so apologies if you missed that.

                          >fn first_name(name: impl AsRef<str>) -> impl AsRef<str> {

                          As I said previously, "...unless there is to be a ban on functions returning simple references..." If you're willing to eat all that extra generic ceremony, then yes, you can make Rust APIs that are flexible w.r.t. ownership. However, you can't control the code style of the libraries that you're using.

                          > With this signature I actually can change the name representation from String to any type that can be exposed as a slice, with no additional runtime penalty like virtual calls, which you'd need otherwise in Go/Java.

                          Off topic, but you could do this in both Java and Go via generics.

                          • pkolaczk 3 years ago

                            > Please don't respond by saying "this API was badly designed in the first place!" Most languages don't give you the opportunity to design APIs badly in this particular way. If all APIs were perfectly designed on day one then of course we'd never have to worry about API changes.

                            Ok, I agree. But this is just as much a problem for any language that specifies an API in some way. So it is really a decades-old debate about static typing not letting you change things easily. I mean, there will always be certain cases when the library author has to change the signature because they made it too restrictive. It just happens that Rust allows you to constrain things that might not be constrainable in other languages, so I guess the chances of that happening accidentally are somewhat higher. But generally I like going from more concrete / restricted to more generic when needed rather than the other way round.

                            > Off topic, but you could do this in both Java and Go via generics.

                            I don't think you can achieve the same level of flexibility and efficiency at the same time.

                            Go doesn't have function level generics, so you'd need interfaces and runtime polymorphism as usual (and runtime penalty).

                            And in Java you cannot add a new interface implementation to a builtin type like String, so the idea of switching from String to any other type won't work. There is much more upfront ceremony needed to add such flexibility (defining custom interface + implementations and using them from the beginning).

                            • foldr 3 years ago

                              To my mind, the issue is that a substantial portion of lifetime constraints reflect implementation details and not actual domain-level constraints that the programmer specifically wished to enforce. The same thing can of course happen with static type constraints generally, but I think not usually to the same extent.

                              >Go doesn't have function level generics

                              Not sure what you mean by this. Go functions can take generic parameters that satisfy a given interface (at compile time). E.g.

                                  func firstName[T AsBytes](name T) T
                              
                              (You'd have to define the AsBytes interface yourself.)

                              >in Java you cannot add a new interface implementation to a builtin type

                              Yes ok, fair point.

                              • pkolaczk 3 years ago

                                         func firstName[T AsBytes](name T) T
                                
                                Ok, I stand corrected. Can you define AsBytes for string then?

                                • foldr 3 years ago

                                  > Can you define AsBytes for string then?

                                  Yes. However, because strings in Go are immutable and byte arrays are necessarily mutable, you can’t safely convert strings to byte arrays (or vice versa) without copying, so it would not necessarily be a particularly useful interface.

                                  By the way, this was an interesting discussion. I do see your overall point more than I did at the beginning of it, although I don’t think we completely see eye to eye on the costs/benefits of lifetimes.

                • kaba0 3 years ago

                  > Your argument works against against static typing in general

                  I don’t think this argument works. Ad absurdum, a very strong type system would even specify the implementation itself, making any change breaking — is that good? No, it isn’t, as the useful property of the type system is no longer there. I don’t agree that this usefulness line is behind “ownership annotations” — that’s what you would have to convince me of.

      • davidhyde 3 years ago

        > “and the ownership of resources by the device (which Rust doesn't solve for) are correct.”

        For embedded software, the underlying code basically reads and writes a bunch of registers. Unsafe memory access with side effects. The benefit of using Rust here is that you can easily model these access patterns to make an API that cannot be abused. So the driver reads and writes addresses and the user code operates through the driver, with all the benefits of ownership at hand to avoid race conditions and other foot-guns.

        So, in this way Rust does indeed “solve for” ownership of devices. You can’t have two threads (or interrupt handlers) mutating the same device without satisfying the ownership rules.
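
        A minimal sketch of that idea; the Dac type is made up, and the println stands in for a volatile register write:

            struct Dac { _private: () }

            impl Dac {
                // &mut self means only one context can drive the device at a time.
                fn set_level(&mut self, level: u8) {
                    // A real driver would do a volatile register write here.
                    println!("DAC <- {level}");
                }
            }

            fn main() {
                let mut dac = Dac { _private: () };
                std::thread::scope(|s| {
                    s.spawn(|| dac.set_level(1)); // this closure holds &mut dac...
                    // s.spawn(|| dac.set_level(2)); // ...so a second &mut is a compile error (E0499)
                });
            }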

        • foldr 3 years ago

          >You can’t have two threads (or interrupt handlers) mutating the same device without satisfying the ownership rules.

          The trouble with this is that abstract 'devices' don't necessarily map neatly to the underlying hardware. Configuring peripherals on a typical microcontroller often requires setting flags in a bunch of scattered registers which don't necessarily have neatly separated responsibilities.

          Take PWM as an example. Is there a PWM 'device'? Is there a PWM setting for each port, according to some abstract representation of ports? What about the timer used to generate the PWM output? Does the PWM device own the timer, or does the timer own the PWM device? Any such abstractions cause more problems than they solve. You really just need to think carefully about how you are manipulating the underlying hardware.

          • bombela 3 years ago

            In my experience a decent way to solve this is with two layers of abstraction. I will take any better design ideas!

            The first layer gives you safe access to the hardware registers. For example, it ensures atomic/synchronized access and forbids invalid/reserved values. Naming the flags/bits reduces human mistakes (e.g. reg |= Prescaler::Div8).

            You can still misconfigure the PWM/Timer settings of course.

            The second layer gives you a safe driver interface, giving you all the options to configure a timer for some PWM settings, for example.
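
            A minimal sketch of those two layers; the register layout, field values and the Pwm driver are all made up for illustration:

                // Layer 1: typed, named register access instead of magic numbers.
                enum Prescaler { Div1 = 0b00, Div8 = 0b01 }

                struct TimerReg(u32);

                impl TimerReg {
                    fn set_prescaler(&mut self, p: Prescaler) {
                        // Only valid field values can be written; other bits stay untouched.
                        self.0 = (self.0 & !0b11) | p as u32;
                    }
                }

                // Layer 2: a driver that can only be configured through safe options.
                struct Pwm { reg: TimerReg }

                impl Pwm {
                    fn configure(&mut self, p: Prescaler) {
                        self.reg.set_prescaler(p);
                    }
                }

                fn main() {
                    let mut pwm = Pwm { reg: TimerReg(0) };
                    pwm.configure(Prescaler::Div8);
                    println!("timer register = {:#06b}", pwm.reg.0);
                }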

    • WiSaGaN 3 years ago

      Absolutely. I used to work in a C++ shop writing mission critical software. We would be far more concerned about letting a newcomer contribute to an existing C++ codebase than to an existing Rust codebase. The newcomer would surely have a harder time making a pull request that compiles and passes tests in Rust than in C++, but that is a good thing for maintainers.

    • kaba0 3 years ago

      (At least there is no UB in Brainfuck..)

    • c_crank 3 years ago

      Several successor languages have kept the simple syntax of C while eliminating a large class of warts. Hindsight, and not having to keep to a wide-reaching set of compiler standards, lets a language have simple syntax and not so many problems with UB.

  • whyever 3 years ago

    > you can still read K&R in one day and keep most of the language in your head

    Does this include all 193 cases of undefined behavior?

    • tick_tock_tick 3 years ago

      I've always been taught and memorized 197 - are you forgetting some? Most from those lists are very easy since they are basically the same error played out slightly differently.

    • xscott 3 years ago

      I know you're just trashing C UB because that's fashionable, but if you really think about it there are two popular options, and you can't have both at the same time:

      a) UB enables valuable optimizations and is important to keep (or even add) when performance matters

      b) UB makes the language unusable/insecure to anyone but genius level experts and should be avoided

      Whenever someone (including famous/relevant people like Dennis Ritchie [0], DJ Bernstein [1], or Linus Torvalds [2]) tries to suggest cleaning up, removing, or simply not adding new cases of undefined behavior in C/C++, the optimization experts come running from the other room screaming about how important it is that "signed integer overflow must be undefined" [3] or else things will run a percent more slowly (signed overflow being just one example of UB). Also there are people who suggest adding new UB to Rust [4].

      So really, either Rust is significantly slower than C because Rust doesn't have the UB you're criticizing, or C could be a cleaner language without compromising on speed and the compiler writers and standards committees are wrong. You choose, but both options are considered heresy.

      [0] https://www.lysator.liu.se/c/dmr-on-noalias.html

      [1] https://groups.google.com/g/boring-crypto/c/48qa1kWignU

      [2] https://lkml.org/lkml/2018/6/5/769

      [3] https://youtu.be/yG1OZ69H_-o

      [4] https://www.ralfj.de/blog/2021/11/24/ub-necessary.html

      • chlorion 3 years ago

        Rust actually does many of the same optimizations, and in unsafe Rust this can and does lead to UB.

        An example of this is aliasing mutable references. Rust has been designed to assume that two mutable refs will never alias; if you attempt to do this it is instant UB. My understanding is that even creating the aliasing reference is UB, even if you don't use it.

        Another example would be uninitialized values. In Rust, the compiler assumes that values are always "valid" and initialized. Since you need to allocate memory before writing to it, you need some way to safely have uninit values, and this is what the MaybeUninit wrapper type is for. The wrapper allows you to safely have uninit values, and once you write to them you can tell the compiler that they are initialized, but if you tell the compiler too early by accident, it is UB.
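
        A minimal sketch of that MaybeUninit dance (the buffer size is arbitrary):

            use std::mem::MaybeUninit;

            fn main() {
                // Reserving space without initializing it is safe, because the
                // wrapper prevents reading the value until we claim it's ready.
                let mut buf: MaybeUninit<[u8; 4]> = MaybeUninit::uninit();
                unsafe {
                    // Write the whole value through the raw pointer first...
                    buf.as_mut_ptr().write([1, 2, 3, 4]);
                    // ...then assert initialization. Doing this before the write
                    // above would be the "told the compiler too early" UB.
                    let ready: [u8; 4] = buf.assume_init();
                    assert_eq!(ready, [1, 2, 3, 4]);
                }
            }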

        References also are guaranteed to point to initialized and valid data, and can never be null, though my understanding is that there is some uncertainty about the exact rules of this with regards to uninitialized values and the exact semantics may change in the future.

        (There are also a lot more things that I don't know very much about!)

        All of these things are assumed to never happen, and optimizations are performed based on that assumption.

        The nice part about Rust is that it makes it impossible to represent invalid states using the type system!

        For aliasing, you can only have a single live mutable reference at any time; attempting to create a second one while another is live is a compile-time error!
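
        For example, a tiny sketch of what the compiler rejects:

            fn main() {
                let mut x = 0u32;
                let a = &mut x;
                // let b = &mut x; // error[E0499]: cannot borrow `x` as mutable more than once at a time
                *a += 1;
                println!("{x}");
            }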

        For uninitialized values, you simply can't create uninitialized values at all in safe rust. My understanding is that the only way to create an uninitialized value is with the MaybeUninit type or by using raw pointers.

        So rust still has heaps of UB, but it doesn't allow you to do it by default, so you still get some of the optimizations you'd expect.

        I think there are some cases where rust is missing out on optimizations though, like with signed int overflow for example, and probably more that I don't know about!

        • xscott 3 years ago

          I can see your enthusiasm for Rust, but my comment was more directed to whether criticisms of UB in C can be taken seriously considering suggestions to remove it where possible have almost always been shot down in the name of speed.

          > I think there are some cases where rust is missing out on optimizations though, like with signed int overflow for example

          Before you push for UB in safe Rust, I politely suggest you write your own benchmark for this in C or C++. Try a few combinations of { signed, unsigned } X { 32 bit, 64 bit } X { gcc, clang }, compiled at -O2 or higher, and see what you get for results. Maybe throw in -fwrapv for some of the runs. My own conclusion on my own benchmarks is that UB advocates are mostly wrong.

  • the__alchemist 3 years ago

    I think this is often a function of Rust OSS libs vice the language itself. The embedded Rust community has created and promoted bad APIs. I think it's worth pointing this out and building other APIs, even though this doesn't endear you to the Rust community. I'm worried people will get the wrong idea about embedded Rust ergonomics and attribute these APIs to the language itself.

    • adamnemecek 3 years ago

      Can you talk more about this? I'm very curious.

      • bsder 3 years ago

        Well, for starters, allocators. The standard library is now making adjustments around that, but it's not mainstream; most people don't use them, and they don't get the same level of attention as the happy path.

        However, Rust, the language, has its issues on embedded.

        Rust's ownership model is directly at odds with lots of embedded. "These bits inside a register are owned by the ADC and those bits inside the same register are owned by the DAC" is not a happy thing in Rust.

        Lack of arbitrary sized integers and how they slice.

        Cargo. Quite annoying to deal with cargo and an embedded toolchain. The Rust embedded guys have done really good work if you're on ARM. If you're not, good luck.

        That having been said, if you have to go implement something like Reference Counting in something not Rust, you will weep tears of blood debugging every single time your reference counts go wrong.

        Embedded is engineering. It has tradeoffs. That's life.

      • steveklabnik 3 years ago

        I do not have as strong of feelings as your parent, but:

        1. A lot of the APIs make use of the typestate pattern, which is nice, but also very verbose, and might turn many people off.

        2. The generated API documentation for the lower level crates relies on you knowing the feel for how it generates the various APIs. It can take some time to get used to, especially if you're used to the better documentation of the broader ecosystem.

        3. A bunch of the ecosystem crates assume the "I am running one program in ring0" kind of thing, and not "I have an RTOS" sort of case. See the discussion in https://github.com/rust-embedded/cortex-m/issues/233 for example.

      • kramerger 3 years ago

        Every time I play with embedded Rust I get this feeling that some of the people driving it are more into language esthetics and don't have experience in real-world embedded systems.

        For example, I see inefficient patterns that are common in frontend world but have no place in an embedded system being promoted as "proper" way of doing things.

      • the__alchemist 3 years ago

        The other replies reflect my thoughts, so I don't have much to add. So, these are elaborations on those:

        It appears most of the peripheral support libs (e.g. those that use Embedded-HAL traits) are not designed with practical ends in mind; for all the I2C/SPI etc. devices I've used, I've found it easier to start from scratch with a `setup` fn with datasheet references, then DMA transfers. So, you have these traits designed to abstract over busses etc.; they sound nice in principle, but are (so far) not useful for writing firmware.

        I get a general sense that the OSS libs are designed with a "let's support this popular MCU/IC, and take advantage of Rust's type system and language features!" mindset. A bare minimum is done, it's tested on a dev board, then no further testing or work. There are flaws that show up immediately when designing a device with the lib in question.

        So, at least for the publicly-available things, they're designed in an abstract sense, instead of for a practical case.

  • dannymi 3 years ago

    >Adding too many abstractions can be exceedingly dangerous in an environment where not having a full view of how memory and hardware registers are laid out leads to even worse errors than just buffer overflows.

    svd2rust is pretty good for having safe abstractions for hardware registers. That said, as an example, no, the type system doesn't prevent you from deallocating your DMA buffer while the hardware is using it--I don't think it's reasonable to add that to the type system (and the type system right now doesn't know about DMA).

    • the__alchemist 3 years ago

      Garbage in, garbage out! Svd2rust is a great tool, but the patching process (YAMLs) is currently not user-friendly due to silent failures. The root cause is hardware makers putting out buggy SVDs that need patching.

      I think re DMA buffer lifetimes, the easy approach is static buffers; they never drop.

  • kaba0 3 years ago

    C has beyond-useless “macros”; they should not be compared with Rust’s, which are actually useful.

    • foldr 3 years ago

      C's macros are primitive and unsafe but by no means useless. Here's a somewhat silly example from embedded programming. I wanted to embed the bitmaps of a small set of characters for use on a bitmapped monochrome display. It was easy to define macros CHAR_GRID, _ and X such that e.g.

          const uint8_t zero[] = {
            CHAR_GRID(
              _,X,X,X,_,
              X,_,_,_,X,
              X,_,_,X,X,
              X,_,X,_,X,
              X,X,_,_,X,
              X,_,_,_,X,
              _,X,X,X,_
            )
          };
      
      desugared to a column-major array of 5 bytes.

          /* grid cell values */
          #define _ 0
          #define X 1

          #define CHAR_GRID( \
              c1r1, c2r1, c3r1, c4r1, c5r1, \
              c1r2, c2r2, c3r2, c4r2, c5r2, \
              c1r3, c2r3, c3r3, c4r3, c5r3, \
              c1r4, c2r4, c3r4, c4r4, c5r4, \
              c1r5, c2r5, c3r5, c4r5, c5r5, \
              c1r6, c2r6, c3r6, c4r6, c5r6, \
              c1r7, c2r7, c3r7, c4r7, c5r7) \
            c1r1 | (c1r2 << 1) | (c1r3 << 2) | (c1r4 << 3) | (c1r5 << 4) | (c1r6 << 5) | (c1r7 << 6), \
            c2r1 | (c2r2 << 1) | (c2r3 << 2) | (c2r4 << 3) | (c2r5 << 4) | (c2r6 << 5) | (c2r7 << 6), \
            c3r1 | (c3r2 << 1) | (c3r3 << 2) | (c3r4 << 3) | (c3r5 << 4) | (c3r6 << 5) | (c3r7 << 6), \
            c4r1 | (c4r2 << 1) | (c4r3 << 2) | (c4r4 << 3) | (c4r5 << 4) | (c4r6 << 5) | (c4r7 << 6), \
            c5r1 | (c5r2 << 1) | (c5r3 << 2) | (c5r4 << 3) | (c5r5 << 4) | (c5r6 << 5) | (c5r7 << 6)
    • lenkite 3 years ago

      Something like the Flecs ECS (https://www.flecs.dev/flecs/), which makes Rust's Bevy team jealous, makes extensive use of C's "useless macros".

    • jimbob45 3 years ago

      While that’s true, preprocessors are pretty trivial to write these days if you want macros that the language doesn’t support. Racket excels at this.

      • sanxiyn 3 years ago

        Which preprocessor do you recommend for writing C?

        • bsder 3 years ago

          None.

          Use an actual different language. Ada, Rust, Zig, D, Lisp/Scheme/Racket, Tcl, Forth, etc. ... something other than C.

          Don't preprocess C into a slightly broken other language that you wish it were. Use C as C, or use something else.

        • jimbob45 3 years ago

          https://stackoverflow.com/a/3685576

          There’s one approach. I wouldn’t personally recommend using macros outside of include guards and file inclusion. Still, if you need more functionality, the methods exist.

  • mlindner 3 years ago

    > you can still read K&R in one day and keep most of the language in your head

    People who say this somewhat perplex me. Yes you can get the syntax of the language down in a day, but that does little to stop you from running into your first Bus Error or Segmentation Fault within the first 30 minutes of trying to write any software, not to mention all the hidden errors/exploits you've put in your code that are only a platform switch or a compiler version change away from being found explosively. And you can completely forget trying to write a multithreaded C application, which basically confines you to very slow single-threaded code, completely tanking performance versus even the slowest dynamic language that supports multithreading, erasing any advantage for using C.

    This is not a personal attack, but when I try to come up with an assumed background for people who say this, it usually involves assuming that the person isn't keeping in touch with the "real world" in some way. I have trouble rationalizing it otherwise. Thus I'll usually ask what their background is when they say this, to try to make sense of things.

    The only places C is still the optimal choice are where C is already being used, or on extreme platforms where there aren't good toolchains (various ASICs / rare 8-bit microprocessors). There's zero reason to use it otherwise.

    > the pain points (cough, macros) are still often pain points in Rust.

    Hygienic, syntax-checked macros are an entirely different animal than just string insertion/substitution macros. I don't think this comparison is fair.

  • bfrog 3 years ago

    You can and do use simple pointers in Rust, nothing prevents this.

    The abstractions can be used more like static interfaces you want to reuse, e.g. a byte stream interface, a regmap interface, etc.

    • 59nadir 3 years ago

      I think there have been a few posts pointing out that the ergonomics around using just pointers in Rust are severely lacking in comparison to languages like Zig and Odin.

  • RcouF1uZ4gsC 3 years ago

    > I think the biggest thing is that systems programming still requires a language that gets out of the way so you can focus on very technical problem domains where what the hardware is actually doing really matters.

    Just the fact that Rust doesn’t do implicit integer conversions is by itself a huge win over C, which has promotion rules that can easily trip you up when you are trying to exactly specify bits.
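
    A small sketch of the difference (the values are arbitrary):

        fn main() {
            let a: u8 = 200;
            let b: u32 = 1_000;
            // let sum = a + b; // does not compile: no implicit promotion from u8 to u32
            let sum = u32::from(a) + b; // the widening has to be spelled out
            assert_eq!(sum, 1_200);
        }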

  • pjmlp 3 years ago

    > You can still read K&R in one day and keep most of the language in your head

    Except that isn't what most compilers expose, including the UB semantics.

    One is in for a sea of surprises when trying to write portable C code and using K&R C as language reference.

  • loudmax 3 years ago

    Are tracking memory allocation or variable types not pain points for large complicated programs?

    • softirq 3 years ago

      In my experience as a kernel programmer, tracing allocations isn't the hard part; it's keeping the correct view of hardware state and the different types of mappings at play, be it an MMU or IOMMU device mapping, and register state. Use-after-free bugs and overflows do happen, but more and more tools come out every year that can find these things in C code, some of them even hardware-based. IMO the code quality of the kernel is very high and the defect rate isn't greater than that of projects I've worked on that use garbage collection.

  • ljlolel 3 years ago

    Try Zig as a C replacement.

    • dleslie 3 years ago

      Does Zig run on everything C can, including weird devices?[0]

      If the answer is still No, then it can't replace C.

      0: https://en.wikipedia.org/wiki/Small_Device_C_Compiler

      • kristoff_it 3 years ago

        Zig has for a while now been able to compile to C code, which is part of our bootstrap procedure, so you can compile that output with an appropriate C compiler and target whatever you want.

        We do have plans for adding backends for unconventional targets eventually.

    • softirq 3 years ago

      I'm not really looking for a replacement for C, the ecosystem of system programming is still really C centric, it would take a large shift in the industry for me to justify investing in another language. I've dabbled with Rust only because early support was merged into Linux. IMO most languages are only marginal improvements over the previous generation that don't outweigh fighting against the entrenchment of expertise, documentation, and integration that come with established languages. It's also hard to be a true expert at a low level domain and multiple programming languages unless you're willing to give up all of your free time.

      • moonchrome 3 years ago

        Marginal improvements add up.

        • softirq 3 years ago

          I agree, at some point they do build up to the point that change is inevitable. There are usually a lot of false starts along the way (AI is a good example of this). No matter what it is, there's going to be a lot of fighting against entrenchment, and a lot of us old timers simply have to retire for newer developers, whether they be Rust developers or otherwise, to come in and effect change.

thesuperbigfrog 3 years ago

"The top three challenging areas of Rust for current Google developers were:

  * Macros
  * Ownership and borrowing
  * Async programming
"

Async programming is the area I would like to see the most improvement, especially in the standard library.

So much concurrent and parallel Rust code relies on third-party libraries because the standard library offers primitives that work but lack the "creature comforts" that developers prefer.

It would be really nice if the Rust standard library were to get structured concurrency similar to what Ada has:

https://en.wikibooks.org/wiki/Ada_Style_Guide/Concurrency

https://learn.adacore.com/courses/Ada_For_The_CPP_Java_Devel...

  • no_wizard 3 years ago

    Colored functions are just not the right road to go down. Something akin to Java’s new green threads would be better, or Go-style coroutines.

    The path Rust is going down means async becomes viral, which is something I dislike a lot about JavaScript[0] and other languages I’ve worked in[1].

    I’d love to see Rust avoid this trap.

    [0]: I work in TypeScript in actuality, not sure which to use here. It’s certainly by far the language I’ve used the most in my career now.

    [1]: I remember it infected Python too and it was a pain as well when I did Python development years ago.

    • steveklabnik 3 years ago

      Do you have a way to accomplish this while still staying within the other various constraints that Rust's async is under? I don't believe it is reconcilable.

      For those not aware of the history and looking for background, I laid it out here: https://www.infoq.com/presentations/rust-2019/ and here https://www.infoq.com/presentations/rust-async-await/

      Those style systems are useful and have advantages, but they also have disadvantages. Not every tradeoff is a good call for every system, and that goes both ways in this scenario.

      • no_wizard 3 years ago

        Make it an optional crate like `no_std` is, where you specify it in your Cargo.toml

    • littlestymaar 3 years ago

      I really wish people would stop using this concept of “function colors”, especially in the context of Rust, because the `async fn`/“regular function” split is strictly equivalent to the `try_something()`/`something()` split (the first one being fallible and returning a `Result` in case of failure). `Result`s and `Option`s color the stack in exactly the same way a `Future` does (and `async` is pure syntactic sugar on top of `Future`).

      So, someone may like exceptions and green threads more than `Result` and `async` (and this is a completely valid PoV, even though I personally like the explicitness better), but thinking `async` is somehow special is just a conceptual mistake.

      Edit to give a little more substance to the parallel:

      If you want to call a fallible function inside an infallible one, you MUST handle the result. If you want to use `?` then your function MUST return a Result.

      Symmetrically, if you want to call an async function from a non-async one, you MUST `spawn` the future. If you want to use `await` then your function MUST be async.

      The only practical difference between async functions and functions returning a `Result` is that `Future` is a trait, not a struct like `Result` (and that means that your future may have a lifetime that's not visible in your function definition, which is an endless source of confusion for beginners).
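
      A minimal sketch of the symmetry (assuming the tokio crate as the executor; the function names are made up):

          use std::num::ParseIntError;

          async fn fetch() -> String {
              "42".to_string()
          }

          async fn caller() -> Result<u32, ParseIntError> {
              let s = fetch().await;   // `.await` forces `caller` to be async
              let n: u32 = s.parse()?; // `?` forces `caller` to return a Result
              Ok(n)
          }

          fn main() {
              // From a non-async context the future must be driven explicitly,
              // just as a Result must be handled explicitly.
              let rt = tokio::runtime::Runtime::new().unwrap();
              println!("{:?}", rt.block_on(caller()));
          }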

      • kaba0 3 years ago

        While you are technically right, one area where the two are unlike is that you often want to be polymorphic with regard to async/non-async, while that is less often the case for error handling. That exact same sleep implementation or DB query should be usable both as an async call and as a blocking one, and it is the caller that wants to decide that. Which is not trivial to do on the calling side from a language perspective, as it then has to recursively decide the same for all its subcalls.

        The new experimental languages with effect types might be able to give us the best of both as they actually expose what you are talking about as an abstraction at the type level. We will see.

        • ithkuil 3 years ago

          Yes. To make this more concrete: If your non-async function uses a parameter whose type is a dyn trait or a generic bounded by a trait, it can call the methods of that trait and thus potentially call different implementations of that trait.

          Currently, none of those trait implementations can be async because that would change the function signature.

          So the only option you have in a trait implementation that needs to call a library that happens to use async APIs internally is to "block_on" the async. Unfortunately, iirc, blocking on an async like that is executor-specific and your impl must pick a concrete async executor, which may be different from the one used elsewhere in the program.

        • littlestymaar 3 years ago

          > While you are technically right, one area where the two are unlike is that you often want to be polymorphic with regard to async/non-async, while that is less often the case for error handling.

          The reason why we don't want to be polymorphic between fallible and “infallible” functions is that we put a clear social hierarchy between “properly handle error cases” and “panicking”. Using panics instead of `Result` makes error handling much less cumbersome in Rust too, but we've clearly internalized that this isn't something you should do for real. Symmetrically, I'd argue that there's not much point in having both async and sync interfaces; if the user just wants the quick and dirty approach, `spawn_blocking` is barely more typing effort than `unwrap`. The broad developer community (most of which had programmed well before Futures/Promises went mainstream) disagrees with me on that point, but that's a cultural thing.

          Oh, and Result/Options aren't the sole “stack coloring” thing in Rust either: if you have an “owned” variable down the stack, then you need to either change your entire call stack to take the variable “by ownership” instead of “by reference”, or you can `Clone` it.

          And you know what's even worse than this: `&mut`, because then you have no quick-and-dirty fallback (cloning and mutably borrowing the cloned variable means you're now dealing with a reference that has a much shorter lifetime than the original one, and it only works if your reference doesn't leave the current scope).

          As a personal anecdote of someone that's been doing Rust full time for 6 years now, I've encountered the `Option`/`Result`/“owned”/`&mut` function coloring problem many times, and exactly zero times the “async/blocking” function coloring problem. Yet for some reason people are obsessed with an old rant about JavaScript's callback hell ¯\_(ツ)_/¯

          • kaba0 3 years ago

            Let’s not make this discussion so Rust-specific — panics are not used that way in Rust for a reason. But exceptions (especially checked exceptions like in Java) don’t have that problem, and are exact analogues of Result types. The point is, one way or another that parse function can fail and you may want to handle it. The caller can easily decide that from afar.

            This is not true of async/blocking — there can be semantic differences between two otherwise equivalent implementations, and it is a recursive problem as I mentioned — how should an async-block-async call chain work exactly? Java can decide it at runtime, while Rust with its tradeoffs meaningfully can’t - but it is a net negative tradeoff in its case.

            • littlestymaar 3 years ago

              > Let’s not make this discussion so rust-specific

              But this is a discussion about Rust! And my entire point is that async/await is entirely consistent with Rust's overall design.

              > But exceptions (especially checked exceptions like in java) don’t have that problem, and are exact analogues to Result types.

              Unchecked exceptions don't have this problem (but checked exceptions do), and that's exactly my point. Async/await vs green threads is exactly the same trade-off as Result vs exceptions: one is “simpler to use”, the other is “simpler to read”. After years of programming, I personally came to the conclusion that we spend more time reading code than writing it (and it's going to be even more true in the near future with LLMs), so I lean on the Result/await side of things, but I don't have fundamental objections against green threads and exceptions.

              I do have a fundamental objection against the idea that “async” is somewhat special.

              > This is not true of async/blocking — there can be semantic differences between two, otherwise equivalent implementations

              Result vs exceptions also has a significant semantic difference: unwinding, and especially the fact that you can trigger unwinding at any point. This is a significant issue when you're dealing with pointers.

              > how should an async-block-async call chain work exactly?

              I don't really understand this question. An async function is just a regular function that returns a Future, and yes, since there's no marker for blocking functions, you can definitely call one inside an async context, even though it's often a very bad idea from a perf PoV (well, it depends; locks are mostly fine, but you need to use them with caution).

              In fact, the `async` marker on functions doesn't bring much (again, “async functions are just regular functions that return a Future”), and it would make much more sense to have a `blocking` stack-contaminating marker on functions that call a blocking syscall, in order to avoid performance problems due to those, but we can't have nice things because something Path Dependence something…
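
              For reference, a minimal sketch of that desugaring (modulo lifetime-capture details):

                  use std::future::Future;

                  // `async fn` is essentially sugar for a plain function returning an anonymous Future.
                  async fn sugared() -> u32 {
                      42
                  }

                  fn desugared() -> impl Future<Output = u32> {
                      async { 42 }
                  }

                  // Both can be `.await`ed in exactly the same way.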

              • kaba0 3 years ago

                I’m talking about checked exceptions — they are absolutely analogous to a Result<ReturnType, ExceptionType> in Rust, that is, they are part of the type signature. So it is no longer the panics-vs-Result example you are talking about.

                • littlestymaar 3 years ago

                  Checked exceptions are indeed similar to Result, but as such they also have the same issue of “coloring” the call stack, so I don't really see what argument you're making here…

                  • kaba0 3 years ago

                    My whole point: error handling is decided at the caller end trivially. They can try-catch / use ? / just unwrap, whatever; it doesn’t concern the implementation. Async/blocking is not like that, as it can itself recursively contain similar “decision-points”, and the final caller cares about all of them, as it might change semantics.

                    • littlestymaar 3 years ago

                      But if you catch an error in the middle of the call stack, then the caller doesn't have access to this error, so this has the exact same semantic implication as well! And the final caller would sometimes care about that, but no luck; yet I've never seen anyone complaining about how `catch` is terrible for that reason…

                      Of course it doesn't happen too often, but nor does your recursive async/blocking function example (I've been using async/await for a decade now, and I've never encountered the issue in actual code), and I suspect that for most purposes, using `block_on` in the blocking function is the sensible thing to do; it has the same role as a `catch`: the upper function has no way to know there was actually some async stuff under the hood.

      • valzam 3 years ago

        The main difference to me is that Async/await tends to permeate your whole codebase. Once one part of the system is Async/await everything is. With Go I can write most of my code synchronously and maybe somewhere down the stack make 3 HTTP calls in parallel without having to change anything about the calling functions.

        • littlestymaar 3 years ago

          > make 3 HTTP calls in parallel without having to change anything about the calling functions.

          Except that now you need to bubble up the error condition coming from these functions (if your function didn't have other errors already).

          And in fact, adding those calls does change things about how the calling function is run (yield points are inserted and the function isn't run sequentially anymore); it's just not visible in the code, exactly like exceptions vs explicit error return values.

      • rascul 3 years ago

        > not a struct like `Result`

        Minor nitpick, but it's an enum.

        https://github.com/rust-lang/rust/blob/master/library/core/s...

        • littlestymaar 3 years ago

          Indeed, thank you. (and since it's now been 2h since I posted this, my stupid mistake will live forever)

    • shellac 3 years ago

      For many languages I absolutely agree: stackful coroutines are the way to go since the programmer experience is much smoother, with fewer hiccups like having different kinds of functions, or being unable to yield in loops. Lua, Go, and now Java got this right; Python, JavaScript, and C# have to live with a bit of a mess.

      But Rust is not a language which can dictate its execution environment. It needs to be able to exist in a C-ish world, and that's not something that supports yielding. It's a shame, but at least you can write kernel modules in Rust.

    • kaba0 3 years ago

      > Color functions are just not the right road to go down. Something akin to Javas new green threads would be better, or Go style coroutines

      I agree with you on this in the case of high-level languages. Rust is not that, and wouldn’t be half as interesting that way — but by going the system/low-level language route it does have to make certain design decisions that are not ideal. They can’t do what Java’s Loom does, as that requires knowing every method implementation, which is fine with a fat runtime but is not possible in the case of Rust, with plenty of FFI boundaries, etc.

    • nu11ptr 3 years ago

      While I agree and would prefer green threads to async myself, I believe Rust chose async/await because it was shown the former could not be done as a "zero cost" abstraction and didn't interop with C well (somebody correct me if I'm wrong on that).

    • shadowgovt 3 years ago

      At this time, I believe Rust's approach is complicated but that's because correctly using "bare" threads is complicated. Goroutines simplify the problem but introduce runtime performance overhead that may not be suitable for applications Rust is used for.

      (Generally, I avoid this problem these days by avoiding threads in favor of other abstractions or multiple processes communicating over an RPC channel).

    • gpderetta 3 years ago

      I vastly prefer stackful coroutines over stackless, but of all the languages, Rust is probably the one that can justify that decision the most.

    • hgomersall 3 years ago

      "Color" in this case just means signature. Of course you can't mix signatures - why would you want to? You might as well argue that Rust shouldn't have had any more than a single integer argument to every function. Async in rust essentially desugars to a return trait impl.

      • zaphar 3 years ago

        That isn't quite true. An async signature forces its caller to be async as well. This is because of its semantics. Most signatures don't infect their callers' signatures like that. Async is qualitatively different in this regard.

        • hgomersall 3 years ago

          No it doesn't. You can call an async function in Rust from anywhere, you just can't await it outside of an async function.

          • zaphar 3 years ago

            The async function also can't await outside of an async runtime. Which means I can't just call it and expect it to work. I need to wrap the call in an executor of some sort. If the async function doesn't do any awaiting itself, then it doesn't even need to be an async function.

            So in practice my statement stands.

            • hgomersall 3 years ago

              But that's true of everything. You can't use a struct except through an "executor" of some sort. Once you've created it, it sits there and does nothing until you exercise it through its methods or other methods that take it. In that case, there's no ambiguity because the type is explicit. I guess the problem is that really `async fn` looks like a function, but isn't really a function, it's a future. The semantics of a future are quite different to the semantics of a function. That's only a problem if you think the semantics of `async fn` should closely reflect a function.

              The thing is, once you grok async in rust, other things make sense, like being able to construct futures by implementing `Future` on a struct. You don't actually need `async fn` to do async rust. It's just syntactic sugar, just like await is.
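
              To make the desugaring concrete, here's a minimal sketch (my own illustration, not from the article): an `async fn`, the roughly equivalent plain function returning `impl Future`, and a hand-rolled `Future` impl. Driving any of these still requires an executor.

                  use std::future::Future;
                  use std::pin::Pin;
                  use std::task::{Context, Poll};

                  // An `async fn`...
                  async fn add_async(a: u32, b: u32) -> u32 {
                      a + b
                  }

                  // ...is roughly sugar for a plain function returning `impl Future`.
                  fn add_desugared(a: u32, b: u32) -> impl Future<Output = u32> {
                      async move { a + b }
                  }

                  // And you can skip `async` entirely by implementing `Future` by hand.
                  struct Ready(u32);

                  impl Future for Ready {
                      type Output = u32;
                      fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
                          Poll::Ready(self.0)
                      }
                  }

                  fn main() {
                      // None of these run by themselves; an executor has to poll them.
                      let _futures = (add_async(1, 2), add_desugared(1, 2), Ready(3));
                  }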

            • lcall 3 years ago

              I'm no expert on all the details, but I found you can use (at least with tokio) runtime.block_on() to call async (sqlx) code from non-async (my) code. Ctrl-F to see my other comment about block_on, here.

  • mjhay 3 years ago

    > Async programming is the area I would like to see the most improvement, especially in the standard library.

    > So much concurrent and parallel Rust code relies on third-party libraries because the standard library offers primitives that work but lack the "creature comforts" that developers prefer.

    This seems to be a repeated antipattern with a lot of languages/ecosystems, resulting in fragmented and half-baked solutions. A fully-featured async standard library involves making a lot of opinion-based decisions, and not everyone will be happy. But it's better for the 80% of people who probably don't care that much, and nothing stops the other 20% from implementing libraries for their use cases.

    • pie_flavor 3 years ago

      As news of Rust's decisions several years ago reaches people only now, it is fun to watch people make predictions about 'the future' that have not been borne out in reality one smidgen. Just using the library everyone else uses has also worked for 80% of people that don't care that much.

  • jimbob45 3 years ago

    Is there any Rust outfit out there that doesn't discourage macro use? For that matter, is there any team with a language out there that encourages macro use when working as a team?

    • giraffe_lady 3 years ago

      I've never seen a team that encouraged writing new macros to solve routine problems. But I've certainly been on teams that made heavy use of a few carefully deployed macros to solve recurring problems specific to that codebase.

      I think elixir's sigils are probably the closest thing I've seen to "routine, encouraged macro use." Since almost every application will end up with a bit of template lite almost-dsl pseudo language for something or other. They're simpler than defining a grammar & writing a parser and more maintainable than regex.

      • 59nadir 3 years ago

        I think Elixir is a bad example here because it's one of the ecosystems that, while it preaches "Use a function if you can!" very loudly, uses macros much more heavily than other ecosystems, often in places where they don't have to. Phoenix, the (unfortunate) flagship library, abuses macros all over the place where even relative beginners can see that they didn't need to (see [0] for example). It's incredibly badly designed overall and these things have set the tone (especially since a lot of Elixir programmers are in reality just Phoenix programmers).

        So, while macros are "discouraged" in Elixir, in practice they are very much encouraged by several prominent libraries. Picking on Phoenix is very easy because it's so blatantly bad in this regard (and others) but it's almost impossible to do useful things with Ecto if you go outside the macro bubble, etc., as well.

        Here is an example that shows how an ecosystem that definitely could have done stuff with macros (Clojure) has correctly decided that writing functions that take data is better than using macros:

        Elixir and `Plug.Router`:

            defmodule MyRouter do
              use Plug.Router
            
              plug :match
              plug :dispatch
            
              get "/hello" do
                send_resp(conn, 200, "world")
              end
            
              forward "/users", to: UsersRouter
            
              match _ do
                send_resp(conn, 404, "oops")
              end
            end
        
        
        Clojure and `reitit` (https://github.com/metosin/reitit):

            (def router
              (r/routes
                [["/hello" {:get (fn [r] {:status 200 :body "world"})}]
                 ["/users" {:name :users
                            :router users-router}]
                 ["*" {:get (fn [r] {:status 404 :body "oops"})}]]))
        
        P.S. I've used Elixir since 2015, this is not an opinion I've developed at a glance.
    • rockemsockem 3 years ago

      So far I've only written rust on solo projects, why is macro use often discouraged? Does that discouragement extend to little things like derive?

  • VWWHFSfQ 3 years ago

    rust team needs to abandon their own execution runtime and just bless tokio and pull it into std. They're doing nobody any favors right now

    • Animats 3 years ago

      I'm struggling to get tokio out of my executable. I'm not using it, but stuff keeps pulling it in. Async contamination is a huge problem. For highly concurrent code with threads running at different priority levels, if async gets in there it makes a mess.

      • jrockway 3 years ago

        You know what they say about libraries; if you're not having problems, you're not using enough of them.

        On a more serious note, "I want other people to write my code, but they're not following my standards" is rarely a sympathetic point of view.

      • arijun 3 years ago

        Sorry if this is ignorant, but what’s the difference between async and concurrent? Is the problem that async schedules everything itself?

        • Animats 3 years ago

          "Async" is optimized for the special case of a server that is I/O bound and spends most of its time waiting for network traffic. This covers most webcrap, so many people who do nothing else want that. It also works like JavaScript, a model with which many web programmers are comfortable.

          It's a bad fit if you have enough compute work to keep all the CPUs busy. Then you're dealing with thread priorities, infinite overtaking and starvation, fairness, and related issues.

          • flylikeabanana 3 years ago

            Isn't this the difference between concurrency and parallelism? Like you said, Tokio (I'm coming from effect systems in Scala, the current generation of which takes heavy inspiration from Tokio) is good for informing your program when your code is blocked so it can perform some useful work elsewhere, which is a fundamentally different problem from parallelizing code. So if I'm understanding your complaint right, it's that Tokio sneaks in when its concurrency features aren't particularly useful for parallelization?

            • Animats 3 years ago

              Correct.

              I have an unusual application, a metaverse client for big 3D worlds. It has to deal with a flood of data while maintaining a 60 FPS frame rate. It's essential that the rendering thread(s) not be delayed, even though other background threads are compute bound dealing with a flood of incoming 3D assets. This does not fit well with the async model.

              This sort of thing comes up in games, real time control, and robotics, but is not something often seen in web-related software.

              • yazaddaruvala 3 years ago

                > This sort of thing comes up in games, real time control, and robotics, but is not something often seen in web-related software.

                I think you're not familiar with web-related software.

                > It's essential that the rendering thread(s) not be delayed, even though other background threads are compute bound dealing with a flood of incoming 3D assets.

                This is exactly how the browser behaves, and why Javascript needs to be entirely async on the main thread.

                For your application it should be very easy to spin up a render thread (pinned to a single core or whatever priority mechanisms you want to use) that loops, and use message passing to get the results from Tokio based futures.
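
                A rough sketch of that shape, assuming tokio as the runtime (the asset functions are made-up stand-ins, not a real API): the render loop lives on a dedicated OS thread and never awaits, while a tokio task feeds it through a channel.

                    use std::sync::mpsc;
                    use std::thread;
                    use std::time::Duration;

                    // Stand-ins for the real work; assumed for illustration only.
                    async fn fetch_next_asset() -> Vec<u8> { vec![0u8; 1024] }
                    fn upload_to_gpu(_asset: &[u8]) {}
                    fn draw_frame() {}

                    fn main() {
                        // Channel from the async world into the render loop.
                        let (tx, rx) = mpsc::channel::<Vec<u8>>();

                        // Async runtime on its own worker threads, ingesting assets.
                        let rt = tokio::runtime::Runtime::new().unwrap();
                        rt.spawn(async move {
                            loop {
                                let asset = fetch_next_asset().await;
                                if tx.send(asset).is_err() {
                                    break; // render thread is gone
                                }
                            }
                        });

                        // Dedicated render thread: drains whatever is ready, never awaits.
                        let render = thread::spawn(move || loop {
                            while let Ok(asset) = rx.try_recv() {
                                upload_to_gpu(&asset);
                            }
                            draw_frame();
                            thread::sleep(Duration::from_millis(16)); // stand-in for vsync
                        });

                        render.join().unwrap();
                    }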

            • carlmr 3 years ago

              Not OP, but I think the problem they are trying to explain is that if you create an async function it can only be called from other async functions, so it's quite an infectious concept.

              If you create a library that uses async, you're forcing everybody that uses the library into async as well (with the same executor).

              If somebody writes a library now that's generally useful but uses async, it forces others to use async or rewrite the library themselves.

              On the one hand a lot of people put this down as whining about free code, which is somewhat true, but the infectious nature makes the whole ecosystem less useful if you want to build something non-async.

              • lcall 3 years ago

                I think this is not true. I found a way (in the async book or tokio documentation, somewhere near the end of the docs or end of a quick-start guide or such) to just call it and not have to make the calling function async, using runtime.block_on() .

                If you request here or via the email at my web site (in profile), I can provide a more detailed example from a test I have.

                (note to self: see fn test_basic_sql_connectivity_with_async_and_tokio() .)
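
                For what it's worth, a minimal sketch of that pattern (assuming the tokio crate; `fetch_user` is a made-up stand-in for an async call such as an sqlx query):

                    // Made-up async operation standing in for e.g. an sqlx query.
                    async fn fetch_user(id: u64) -> Result<String, std::io::Error> {
                        Ok(format!("user-{id}"))
                    }

                    // Plain synchronous function: no `async` in its signature.
                    fn get_user_blocking(id: u64) -> Result<String, std::io::Error> {
                        // Build a small runtime and block this thread until the future completes.
                        let rt = tokio::runtime::Builder::new_current_thread()
                            .enable_all()
                            .build()?;
                        rt.block_on(fetch_user(id))
                    }

                    fn main() {
                        println!("{:?}", get_user_blocking(42));
                    }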

    • estebank 3 years ago

      The Rust team doesn't have its own execution runtime. Tokio is certainly the closest thing to the "default".

    • sanxiyn 3 years ago

      Given that the Linux kernel is using async Rust implemented on top of kernel workqueues, and tokio won't be usable in the kernel, it is a good thing tokio is not blessed. That's what enables both userland and the kernel to use async Rust.

      • jenadine 3 years ago

        There can be a "blessed" executor while it stays optional for such cases.

        For example, Rust's std lib has a blessed Mutex. Even though it can't be used in the kernel, it is still good to have it in the std lib for the >90% of normal crates.

    • steveklabnik 3 years ago

      That is never going to happen, for both good reasons and bad reasons.

      • VWWHFSfQ 3 years ago

        Yeah I figured. Fragmentation by design.

        • steveklabnik 3 years ago

          One person's "fragmentation" is another's "supporting multiple use cases is important, even if their requirements are divergent."

bigyikes 3 years ago

> Rumor 5: Rust code is high quality – Confirmed!

> The respondents said that the quality of the Rust code is high — 77% of developers were satisfied with the quality of Rust code.

Well, that’s exactly what I’d expect Rust developers to say. Nobody loves Rust more than Rust adopters. Would be interesting to see more objective measures of code quality (e.g. defect rate)

Also, the type of person to work on a Rust codebase might also be more likely to write high quality code in any language, as compared to the average developer (or even average Googler).

  • steveklabnik 3 years ago

    People used to say "when people are forced to use Rust at their job, they'll start hating it, only hobbyists use it and that's why it's so beloved." Do you really consider people learning Rust on the job to be "Rust adopters" who are partisan? Is anyone who ever uses Rust a "Rust adopter" who is unable to give an unbiased opinion? Who would be able to give that opinion, in your view?

    • izacus 3 years ago

      People who are adopting Rust at their jobs now are, for the most part, people keen enough on Rust to push it into organizational adoption. Most companies don't yet have legacy Rust codebases that people are told to work on; instead, people choose to work on them.

      This creates a self-selection, where Rust lovers work on Rust projects and report utopian happy-go-lucky times. It's normal for most technologies.

      • steveklabnik 3 years ago

        Do you think all 1,000 people that Google referenced here are people who are pushing it into organizational adoption?

        • belval 3 years ago

          It seems likely that the overwhelming majority of those 1000 people are pushing it into organizational adoption. Rust is not used enough yet that anyone will be forced to use it. Older C++ programmers at Google who are uninterested in moving aren't doing Rust.

          As a parallel: a team using Scala at Amazon likely uses it because everyone (let's say 90%) was on board. It's just not something you force on your team unless there is existing interest.

          Finally (and perhaps more importantly), the parent comment also mentioned older codebases. It does not seem likely that any of those 1000 are currently doing maintenance so much as completely new projects/features. This tends to be work software developers enjoy more, irrespective of language. So if you were pulled from maintenance in C++ to work on something new in Rust, I'm pretty sure you'll say Rust is great just because you feel more productive.

          • steveklabnik 3 years ago

            > Rust is not used enough yet that anyone will be forced to use it.

            Google hired Ferrous Systems to train employees, as well as writing their own training curriculum. That sounds to me like people who would not otherwise use Rust being asked to use it at their job, and their job investing in their skills because they wouldn't or hadn't done it on their own. Is that different than "forced to use it"?

            > It does not seem likely anyone of those 1000 is currently doing maintenance so much as completely new projects/features.

            Google has been using Rust in android since 2019. That's four years. That is of course not "legacy" in any large sense, but at what point for you is something legacy? Does none of that work over four years count as "maintenance"?

            > So if you were pulled from maintenance in C++ to work on something new in Rust I'm pretty sure you'll say Rust is great just because you feel more productive.

            The start of this sub-thread, and a lot of the discussion inside of it, implies that people are using Rust simply because they want to, and not because it provides actual advantages. Is your position here that the sole advantage of Rust over C++ is that since projects are newer, they're better to work on? And if so, is that advantage illegitimate?

            • belval 3 years ago

              The first two points I don't really have anything to add on.

              > Is your position here that the sole advantage of Rust over C++ is that since projects are newer, they're better to work on? And if so, is that advantage illegitimate?

              No, to rephrase, I think that maintenance work on average feels less productive and more grueling because you can spend 3 days debugging for a single line change. With a new project (or any new feature) you get to write your own thing which feels more productive.

              I'm not saying people are less productive with Rust than C++. I am pointing out that there is a natural bias in the type of work that those two languages are being used for at Google and that this bias will impact self-assessed productivity.

              > Does none of that work over four years count as "maintenance"?

              Respectfully I think you are building a strawman because you (likely) work with Rust and enjoy it. Google is built on a staggering amount of 10-20 years old C++ codebases that have seen several runs of refactoring at this point and stand on the critical path of several of their most important products. Working on those is inherently slower and more meticulous than writing new Android features (even if 4 years old) in Rust.

        • c_crank 3 years ago

          Going by stackoverflow surveys, the majority of younger engineers would be happier to see it implemented than not, regardless of whether they've even used the language before.

          • steveklabnik 3 years ago

            The "most loved" statistic (which they have since renamed) counts people who have used Rust before, and want to continue using it. Unless you're suggesting that they're all lying, I don't see how that connects to "regardless of whether they've even used the langauge."

            Furthermore, at least in 2023, only ~25% of respondents are under 25. I'm not sure what counts as "younger" to you, but 37% are over 35, so it would seem that the survey skews older, not younger, to me anyway.

            EDIT: since you're now flagged into oblivion (I tried to vouch for you but it didn't work), that statistic is what they changed "most loved" from. It counts people who have used Rust before, and want to continue using it.

        • izacus 3 years ago

          Mostly? Yes. Google has more than 120,000 engineers. Those 1000 are less than 1% of engineers. It's like having one Rust dude in a 100-person company.

          Most people work in other languages.

          • steveklabnik 3 years ago

            The blog post says only 13% of these people have had experience with Rust before joining Google. How do you think Google ended up with so many people who are hardcore evangelists, imposing their views on others? A significant focus of the post is about on-the-job training; do you think that these evangelists were created by these trainings, or did they come to these opinions on their own, and are now pushing for it inside of Google?

            Did Google only interview these Rust-loving developers, and none of the people they're supposedly pushing Rust upon?

            • izacus 3 years ago

              What on earth are you trying to say here?

              I'm just saying that Rust fans are currently self-selecting by choosing to work on Rust projects which skews the satisfaction numbers against other languages.

              That doesn't have a lot to do with previous experience - with Rust being so new, __MOST__ developers, even fans, don't have much experience with it.

              • steveklabnik 3 years ago

                I am trying to understand your perspective.

                I do not understand how what you're saying here relates to this post, which does not seem to be saying the things that you are saying. I do not understand how to reconcile "we paid to train 1,000 people on the job and here's what they thought" and "only 13% of people had Rust experience before this" with "Rust fans are currently self-selecting by choosing to work on Rust projects."

    • bigyikes 3 years ago

      > Do you really consider people learning Rust on the job to be "Rust adopters" who are partisan?

      Not necessarily, but it seems not unlikely.

      > Is anyone who ever uses Rust a "Rust adopter" who is unable to give an unbiased opinion?

      It’s still a relatively new and niche language, so, yes.

      > Who would be able to give that opinion, in your view?

      Nobody, and maybe that’s my real point, which is why I’d like some metrics to supplement the anecdotes. This especially applies to Rust, but I think also applies to any language.

      Note that I don’t mean to imply that there isn’t value in anecdotes. There is.

      • sanxiyn 3 years ago

        I agree the post could include more concrete metrics than self-assessment, but more than 1000 data points, even self-assessment data points, are usually not called anecdotes.

      • steveklabnik 3 years ago

        > Nobody, and maybe that’s my real point

        To be honest, this is what I've taken away from this conversation so far: it doesn't seem like anything will satisfy your desires here.

        > which is why I’d like some metrics to supplement the anecdotes

        This conversation started with you not liking that the measure of quality is subjective, which is fine. How would you objectively measure code quality though? What metrics would you have preferred to see, other than the ones in this post?

        • bigyikes 3 years ago

          I’m confused by your line of questioning. Why do you think I can’t be satisfied?

          I don’t have a problem with this Google poll. I think it is valuable.

          What would satisfy my desires is exactly what I stated in my original comment: something more objective, like defect rates of Rust codebases compared to non-Rust ones. Metrics like these have their own problems, but they would be a nice supplement to the opinion-based poll.

          Neither the objective metrics, nor the opinion poll can provide a complete picture, and neither is a substitute for the other. Both would be awesome.

          • steveklabnik 3 years ago

            Because you said

            > It’s still a relatively new and niche language, so, yes.

            If nobody is able to give a non-partisan reply, I don't see how you could ever be satisfied.

            > like defect rates of Rust code bases compared to non-Rust ones.

            Cool, thanks. That does seem like one that is more objective, though there are confounding factors in that too, because sometimes defects lurk without being detected. This stuff is hard!

            Google did put out something about this specifically on Rust (and others) use in Android, by the way, you might find that interesting: https://security.googleblog.com/2022/12/memory-safe-language...

    • scoutt 3 years ago

      > As an engineering manager...

      > unbiased opinion

      If the engineering manager, after putting so much effort into switching to Rust, trying to convince upper levels, putting their head at risk, and busting balls for months to make everyone learn Rust, comes into the office one day and asks if we love Rust, then we, the 77% who want to keep our jobs, would answer "OF COURSE WE DO!!!!! COULDN'T BE HAPPIER!!!", with a big smile.

      • tasuki 3 years ago

        This hasn't been my experience at all.

        Me and all the other developers I know complain endlessly about the tools we're forced to use. We complain to anyone patient enough to listen, including all the engineering managers. No one I know ever got fired or laid off for this.

  • saulrh 3 years ago

    It's a bit unfortunate that they aren't sharing this statistic for other languages. For example, I'd bet that rather fewer than 77% of Google C++ devs are satisfied with the quality of C++ code. I know that, when I was at Google, I wouldn't have reported satisfaction with the quality of my own C++ code!

  • summerlight 3 years ago

    > Well, that’s exactly what I’d expect Rust developers to say. Nobody loves Rust more than Rust adopters.

    An increasing number of Android developers in Google are adopting Rust because of the org wide strategy rather than developer passion, so I guess the numbers in 2023 and 2024 would be more interesting to see.

  • hbn 3 years ago

    Also since it's a young language, a lot of these people are probably writing in a newer codebase that doesn't have 15 years of early fundamental bad decisions and hacky refactors to make it painful to work in.

  • mannyv 3 years ago

    I'm sure there are developers who are like "wow, the quality of my code sucks in this language." But I'm not sure where they work and what they do.

durandal1 3 years ago

After four months, only 50% of the developers thought they were as productive in Rust as in other languages. Given that the respondents are arguably a very capable group of engineers, this doesn't seem that great for any company looking to adopt Rust.

  • saulrh 3 years ago

    Consider an alternative reading:

    50% of developers think they are as productive in a language they have four months of practice with as they are in a language they have fourteen years of practice with.

    50% of developers think they are as productive in a high-performance bit-bashing-capable language as they are in a high-level glue language.

    The people in this statistic are switching from languages they have years or sometimes decades of productivity in, and they're switching from languages like python and go and java. I see a lot of programmers who do similar switches never reaching productivity parity with C or C++. 50% of devs getting there in 4 months is amazing.

    • PartiallyTyped 3 years ago

      I am willing to bet that the type system and the opinionated way of doing things helps a lot here.

      Anecdata but I found myself very productive with Haskell when I was learning it for grad school, to the point where I knew that if it compiled, it was most likely right.

      I had similar experiences, though not to that degree, with Rust, with very little time spent on it in comparison to Haskell. I feel a lot more comfortable sleeping at night than I do with C or Python.

      • tasuki 3 years ago

        > Anecdata but I found myself very productive with Haskell when I was learning it for grad school, to the point where I knew that if it compiled, it was most likely right.

        I recently told this to someone. The very next day my Elm code compiled just fine but it had three relatively tricky logical bugs which took me two hours to find and fix.

        This was a rare enough occurrence that I remember it. Higher cosmic powers took note of my praise of strong typing and decided to teach me a lesson.

  • Fidelix 3 years ago

    "Anecdotally, these ramp-up numbers are in line with the time we’ve seen for developers to adopt other languages, both inside and outside of Google."

    "Overall, we’ve seen no data to indicate that there is any productivity penalty for Rust relative to any other language these developers previously used at Google."

  • stusmall 3 years ago

    At a past job we generally saw about 6 months until devs hit their stride and worked that into our hiring plans. We usually had them committing small, targeted changes within a week or two. They could usually take on tasks relatively independently in one of our simpler code bases after about 2 months. So this checks out.

    Weirdly enough, we saw more junior devs pick it up faster. They had less preconceived notions and practices to unlearn and more willing to trust rustc. It's just that when they hit their stride they are still less productive than the senior devs.

    • wiz21c 3 years ago

      As an older dev:

      > They had less preconceived notions and practices to unlearn and more willing to trust rustc.

      this is really what I experienced: Rust told me a thing or two about coding I never realized. And it took me pretty long to accept that :-)

      • stusmall 3 years ago

        Big same. I remember not being able to share a reference to something allocated on the stack at the start of main with a background thread. I was like "come on, of course it's safe. That was allocated before the thread was spawned. It's main. When it returns the application exits."

        But rustc's error messages helped it click that there might be a race condition between when the application returns from main and when the background thread terminates. So it really needs to have the 'static lifetime to be safe. It's a small, subtle thing, but depending on the application it could lead to real bugs. I've definitely written variations of that bug in C before. A newer dev would have just accepted that flat out without arguing.
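
        A small sketch of that situation (my reconstruction, not the original code): plain `thread::spawn` demands `'static` captures for exactly the reason above, and scoped threads are the usual way to express "this thread provably ends before the borrowed data does".

            use std::thread;

            fn main() {
                let config = String::from("shared settings");

                // This does not compile: `thread::spawn` requires `'static` captures,
                // because the spawned thread may outlive `main`'s stack frame.
                //
                // thread::spawn(|| println!("{config}"));

                // Scoped threads (stable since Rust 1.63) encode the guarantee the
                // borrow checker wanted: the thread ends before `config` is dropped.
                thread::scope(|s| {
                    s.spawn(|| println!("worker sees: {config}"));
                });
            }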

      • shadowgovt 3 years ago

        I've found Rust useful to study while I'm doing my primary job in C++.

        A lot of the features Rust offers regarding traits and types can be emulated in C++ with templates, but the way C++ does it is far more obfuscated. Seeing the same thing implemented in Rust helped me wrap my head around what some complicated template nestings were doing ("Oh, this is implementing traits!") in our C++ code.

  • Kranar 3 years ago

    Four months to become as proficient in a memory safe, data race safe, and high performance language as in other languages seems like an astonishing accomplishment.

    In fact it's frankly unbelievable, I'd have to imagine these guys are coming from a C++ background.

    • steveklabnik 3 years ago

      Anecdotally, C++ developers often have a harder time coming to Rust than folks from a more scripty background. They have to unlearn things, and that can be harder than learning things.

      You can see similar opinions expressed in this thread, like here, for example: https://news.ycombinator.com/item?id=36496654

      • jsgf 3 years ago

        Python programmers love Rust:

        - Lots of compilation errors, but then the code does what it says

        - Fast! That gives an immediate benefit that people coming from C++ don't see.

      • dureuill 3 years ago

        Really? Coming from C++, learning Rust felt very natural and welcome because it put a name and a structure to the mental model I had been using to keep a semblance of sanity while doing C++. Maybe learning primarily with C++11 was a factor?

        • steveklabnik 3 years ago

          It's a broad strokes kind of thing, not a universal thing. To be honest I think it has more to do with attitudes towards the language and compiler that tend to be more prevalent in C and C++ circles.

          That is, a lot of people expect that the compiler is a tool that must do what they say, not a collaborative partner. Back in the IRC days, people used to join, and be like "Rust won't let me do X. X is obviously okay. How do I get the compiler to shut up and do X?" and the reply would be something like "what if there was another thread? that's what Rust is saving you from" or "Rust's pointer rules are different than the language you're used to."

          Whereas folks from scripty backgrounds don't have preconceived notions of "I should be able to do this in this way," and so tend to trust the compiler more. Heck, that's why a lot of them are learning Rust instead of C; they know the compiler can help them out, and C's compilers cannot to the same degree.

          Now, that doesn't mean that scripty people don't struggle, or that C and C++ developers don't have their own advantages in learning. Just that I don't think it's straightforward to say who has the overall easier time. It's more of a "having learned something like C or C++ does not universally advantage you."

          • shrimp_emoji 3 years ago

            Ha. Given that the C and C++ compilers optimize your code by inferring undefined behavior and other crazy stuff like that, it's funny to imagine you're not playing with just another code generator. :p

            (In fact, if you decide to be responsible about what you're telling your poor compiler to do and use a `size_t i` instead of an `int i` in your for loop, since iteration can never go negative and signed overflow is UB while unsigned overflow isn't, I think you just made your for loop slower: all the optimizations the compiler was going to make by inferring that the iteration can't be infinite (because that would overflow the `int i`, which would be UB, so it can just ignore the possibility) went out the window, and... man, I really need to switch to Rust.)

            ((I've pretty painlessly learned and love Rust, btw. I'm still using C at home lately though cuz... masochism.))

          • dureuill 3 years ago

            Thanks for the elaboration. Now I'm wondering if this hasn't something to do with bottom-up people vs top-down people and their natural affinity towards lower level or higher level languages.

            I'm very top-down, always starting by sketching the API I'd like to see for the feature, and then filling in the implementation. A lot of people I know from C++ are bottom-up, they toy with the implementation and then go up to the API. I found that what they like in systems programming is being close to the machine, while my interest lies in the OS boundaries and interfaces, not really the hardware.

            I'm thinking maybe those different ways of approaching programming make it easier or harder to learn rust.

            • steveklabnik 3 years ago

              I think that's possibly insightful, yeah. A similar effect: a lot of people say "Rust is bad for exploratory programming because the compiler gets in your way." For me, it's fantastic for exploratory programming specifically because when I change something, it gives me a list of the other things I need to change! That's huge! But for others, it seems to harsh their buzz. I don't know how to reconcile these opinions other than "they're opinions and different people are different."

    • verdagon 3 years ago

      I don't think it has to be this hard to have memory safety, data race safety, and performance. After building and doing PL design in this space for a decade, I don't believe the assumption that Rust's or C++'s difficulties are inherent.

      I think C++ has its own legacy difficulties (which also make transitioning to memory safety tricky), and Rust's choice of borrow checking is only one (sometimes difficult) technique for getting these aspects. But there are almost a dozen other methods out there for getting memory safety besides RC, GC, or borrow checking.

      Rather, I think these other approaches just aren't mature enough yet to enter the mainstream, so we haven't seen them widely used.

      • brigadier132 3 years ago

        How do you build a data race safe language without taking on the restriction of immutable data (which is imo, a much worse and bigger tradeoff than the borrow checker)?

        I actually really like the borrow checker as a tradeoff, I think it makes code much easier to understand and it makes all aliasing bugs impossible. The removal of aliasing bugs is I think an undersold benefit of using rust.
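
        A tiny illustration of what "aliasing bugs impossible" means in practice (my example, not from the thread): a shared borrow pins the Vec, so a mutation that could reallocate it is rejected at compile time.

            fn main() {
                let mut v = vec![1, 2, 3];
                let first = &v[0]; // shared borrow of `v`

                // v.push(4); // error[E0502]: cannot borrow `v` as mutable because it is
                //            // also borrowed as immutable; a push could reallocate and
                //            // leave `first` dangling.

                println!("{first}");
            }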

        • verdagon 3 years ago

          There are a lot of ways, but I think the most promising ones involve regions. We do this a lot manually in Rust, but a language could make this a first class concept. Some examples:

          * Vale combines generational references and linear types with regions to eliminate overhead.

          * Verona lets you divide memory into regions which can be backed by either arena allocation or GC. I think this is promising because for most GC regions you can completely avoid the collections.

          * Cone lets you put borrow checking on top of any aliasing memory strategy, so could be something like the best of all these worlds.

          * No language is doing this yet, but RC plus regions to eliminate the refcounts, then adding in value types for better cache usage, could be a real winner.

          On phone so it's hard to get links, but you get the idea. The nice thing about regions is that they allow composing borrowing and shared mutability, something that the borrow checker struggles a bit with. Regions let us alias freely, and then freeze an entire area of memory all at once. Not that Rust isn't a good approach (it is!), but there are some easier techniques on the horizon IMO.

      • dmytrish 3 years ago

        Please demonstrate a practical and memory safe systems programming language without borrowing.

        I'd be delighted to see it, because right now I am not aware of any practical way to have memory safe regions without static tracking of borrowing from these regions. It's either that or runtime checking.

        • verdagon 3 years ago

          This might be of interest: https://verdagon.dev/blog/first-regions-prototype

          It uses region-based static analysis without borrow checking: it doesn't impose aliasability-xor-mutability per object, or even per-region.

          Though, if you'd like to move the goalposts further to no form of borrowing at all, then I recommend looking at languages like Forty2 and Verona, they might be what you're looking for.

          • dmytrish 3 years ago

            I have already spent more time than I wanted reading through verbose but elusive articles about Vale, without any insight into how this actually happens.

            I have also spent too much time trying to compile the Vale compiler, which is a weird mix of Scala and C++ with a small Vale driver. Once it is actually written in Vale, without segfaults, I'll revisit the language again.

            Thanks for the Verona recommendation.

  • larsberg 3 years ago

    Gah! Typo there. It's over 50% (66.8%) in 2 months. And over 80% in 4 months. The chart is correct, not the text :facepalm:

    I'll see about whether I can push an update. Thanks for the catch!

    • larsberg 3 years ago

      Actually, it's complicated stuff pulling together two data points: 1) 2/3 of people are confident contributing in 2 months or less, and 2) 50% of people are AS PRODUCTIVE IN RUST as they were in their other language within four months.

      Given that #2 is talking about people who are all professional programmers and where only a small percentage of respondents previously knew Rust, that's pretty amazing to me.

  • tensor 3 years ago

    The more relevant claim is that they don't observe a difference between learning Rust and learning other languages. There is always a learning cost when adopting a new language that few people already know.

    This sort of surprised me, because rust felt a lot harder for me personally to learn than Go. But data is far more valuable than an anecdote so there you go!

  • albntomat0 3 years ago

    The article states, in the following sentence: “Anecdotally, these ramp-up numbers are in line with the time we’ve seen for developers to adopt other languages, both inside and outside of Google”

  • summerlight 3 years ago

    I've been studying C++ for over 15 years and still don't feel very productive, thanks to the fear of getting paged every release.

    • criddell 3 years ago

      Does Rust give guarantees around paging? Is Rust similar to C++ in that respect?

      • shepmaster 3 years ago

        FWIW, I believe the person you are replying to means "paged" as in "an alert was sent to my pager at 3AM that the entire system is down and I need to wake up and fix it", not the paging in/out of memory.

      • summerlight 3 years ago

        At least you won't get paged due to some weird memory bugs. Yes, this happens quite frequently. Worse, it's usually not something local to a single change but an interaction across seemingly safe changes.

        • criddell 3 years ago

          Is well written Rust code better with respect to paging than well written C++ code?

          • brigadier132 3 years ago

            Rust code that compiles gives you certain guarantees that C++ code that compiles does not. The question isn't is well written code in one language better than well written code in another language. The question is, do I know this code is well written? In Rust you know, in C++ you don't without jumping through a bunch of other hoops.

            • criddell 3 years ago

              What I'm trying to get to is if the guarantees include better control over the heap and paging. Everybody wants to tell me C++ likely has bugs which I understand, but it's not what I'm asking about.

              Edit: I missed that the original person I responded to was talking about being paged when a problem arises and not about memory performance. I'm still curious though if Rust memory guarantees give the programmer better tools for dealing with memory paging.

  • lawn 3 years ago

    Only four months to become as productive in a new language as in a language they have years or even decades of experience in?

    Sounds incredible.

  • tedunangst 3 years ago

    I don't think that's bad, but it's a big stretch to say 50% at four months debunks the myth of six months.

  • VWWHFSfQ 3 years ago

    50% seems pretty good to me

epage 3 years ago

> Rumor 2: The Rust compiler is not as fast as people would like – Confirmed !

I wish there was more context to these, especially this one. For example, how much of this is perception compared to what they were used to (go?, Python?, C++?)? Or is it "any waiting is bad"?

From an improvement perspective, I'd also love to know why their builds are slow. Is it proc-macro heavy? Do they have wide and deep dependency graphs? Do they have large individual crates? And so on.

  • sanxiyn 3 years ago

    While I think n>1000 data points on Rust learning curve is informative, I think build time complaint is less so.

    This being Google, it probably means something like "this C++ build takes 24 hours locally, but thanks to magical distributed build infrastructure it completes in 10 minutes, whereas Rust build takes 18 hours locally but even with magic does not complete in under 30 minutes, which is too long". That is important to Google, but it is almost completely irrelevant to anyone outside Google.

    It is unclear to me whether improving rustc performance is the right solution to Google's problem. It is probable working on Rust integration to Google's build infrastructure is higher ROI than working on rustc.

  • norir 3 years ago

    > Or is it "any waiting is bad"?

    Yes, this is the problem. Waiting is always bad for productivity. Even a second is long enough to lose a bit of focus. When that stretches out to 10 seconds, it starts getting tempting to, say, check Hacker News and lose your train of thought. I believe that most of the programs that I might be tempted to write in rust could be written in an alternate language with a compiler that is up to 100x faster. Of course this hypothetical language would have to be simpler than rust and would lack many of its features. As it currently stands though, I believe that it will be impossible to make the rust compiler 10x faster, let alone 100x faster so it would be nice if there was more effort to design alternative languages that build on what we've learned from rust to make something better.

    • gaganyaan 3 years ago

      I don't really wait for the compiler much. One initial wait for a clean compile, sure. But after that, LSP mode means immediate feedback in my editor before I can even switch to my terminal.

  • jsgf 3 years ago

    In my experience from a Googlish environment (tho mostly focusing on backend service development): 1) people don't know about check builds and have a much nicer iteration experience once they learn about it, 2) rust-analyzer red wiggles also help, and 3) a lot of the actual build time is from build/link of C++ dependencies from the rest of the codebase.

    • internetting3 2 years ago

      Hey jsgf! Your posts have caught my eye. We're planning on joining the YC Winter 2024 batch with a B2B infra project. Reach out to me at setbnb1@gmail.com if you are interested.

  • wredue 3 years ago

    Rust is basically entirely code gen. It’s no surprise that compiling is slow and speeding up compiling is a troublesome endeavour.

    Rust itself might as well be considered a highly constrained macro language at this point.

  • IshKebab 3 years ago

    Yeah, most of these results are meaningless without comparison to other languages. Same with "how quick is it to learn?": what are the equivalent numbers for Go?

    Based on my experience they're overselling how easy it is to learn and underselling the compiler speed.

    Compilation is fairly fast these days. I would say it's faster than C++ feature-for-feature, at least for clean builds.

    But on the other hand most people could probably learn all of Go in the time it takes to begin to understand the borrow checker.

    • nicoburns 3 years ago

      > Compilation is fairly fast these days. I would say it's faster than C++ feature-for-feature, at least for clean builds.

      Faster than C++ is of course very faint praise. C++ is also very slow!

    • pjmlp 3 years ago

      Only when considering clean builds as building the whole world from scratch.

      Which we seldom do on most C++ projects; we rather rely on binary libraries and build only our own code.

      Also when comparing with Delphi, Ada, D, or even Haskell or OCaml, it isn't that great.

      You might feel like pointing out that Haskell or OCaml can be even slower, which is true, however they package multiple toolchains and a REPL, and as of today Rust still isn't as flexible in having multiple toolchains for different purposes.

      • IshKebab 3 years ago

        > Which we seldom due on most C++ projects, we rather rely on binary libraries and build only our own code.

        Depends on the project. Many commercial projects do vendor dependencies and build them too, because you can't rely on the OS version. Especially on Windows or with more niche dependencies.

        Just compiling Boost takes 15 minutes - more than any Rust project I've ever compiled.

        Not sure what you mean about multiple toolchains.

        • pjmlp 3 years ago

          On Windows even more so, as most folks do heavy use of DLLs and COM.

          And yes exporting C++ from DLLs is compiler specific, which doesn't matter, as there is only one specific compiler version that is usually validated for the whole project delivery pipeline.

          Plus we can edit and continue on C++ Builder and Visual C++, with incremental compiler and incremental linker.

          There is a reason why so many companies forbid Boost.

          Regarding toolchains, JIT, AOT, bytecode interpreter, REPL. Here, 4 variants to compile and execute code, depending on release requirements and developer workflow loop, each with its own sets of plus and minus. It is great when there is a choice.

  • tracker1 3 years ago

    Hard for me to directly compare, but the Rust builds that I have experienced have been comparable to large Node/npm projects built with webpack (JS tooling). Totally different space and output, but comparable.

    The long dependency trees are part of it, but usually not too bad and only really bad the first time, since you don't have to rebuild every crate every time (I could be wrong, but it seems that way). I haven't been using it day in, day out though. I've installed a few apps via cargo, and have done some experiments for service applications, and Tauri as well.

    As for the day to day use and how painful it is... I haven't had enough exposure to really comment on... it seems "fast enough" but I'm not running compiles often enough, simply because my knowledge and experience aren't really great in Rust. I've looked at it and played with it a few times, then I set it aside for months at a time and every time it's like I'm starting over.

    Where I'm working now, there are some serious issues that may result in areas needing better start time on services, so that may be an opportunity to advocate for Rust. I've never really loved C or C++, so I'm less inclined to want to use them.

predictabl3 3 years ago

If only we could harness the people who still insist that rust is all hype and engage in impressive gymnastics to ignore all evidence to the contrary...

Some of the stuff people say about Rust reminds me of iOS users talking about Android. "Tell me you are operating from a place of near total ignorance, without telling me that you're talking out your butt".

See: the number of people, here, acting like you can't do raw pointers in Rust, or acting like it's militant woke youngins forcing poor big Google to adopt a safer, productive language.

  • Ygg2 3 years ago

    “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

    ― Upton Sinclair

    The amount of dislike for Rust on HN is frankly unexpected. Part of it might be a response to evangelization, but I've seen more hate on evangelization than actual evangelization. Sure, Rust ain't perfect, but C++ is even more imperfect. So that leaves me with job security in C++.

    • tasuki 3 years ago

      I don't see much dislike for Rust on HN. It's probably the most loved language around here.

      • Ygg2 2 years ago

        Not sure where you're looking, but any Rust-tangential topic gets its share of Nirvana fallacies, "bypassing the borrow checker", and other such fallacies.

Ygg2 3 years ago

Great post. Aligns with my experiences. Although who would have thought unsafe would be a bigger hurdle than borrowing?

I do wish to know whether Rust impacted their velocity, and by how much.

  • Macha 3 years ago

    I think Google has a lot of C++ programmers who may have assumed they'd have to use unsafe so they could continue to write everything like they did in C++ (much like many refuse to use features that aren't from ancient versions of C when writing C++), but then likely in practice ended up writing much less unsafe code than they thought they would.

  • Animats 3 years ago

    That can be a real problem. It's quite possible to reach a point in Rust where you have one borrow error that takes days of rewriting to fix.

    This tends to lead to people putting in unsafe code to work around a borrow restriction. I don't do that, but I don't have deadlines.

pvorb 3 years ago

I really like Rust and all its features, but I have a feeling that people really underestimate the importance of having a fast compiler. Getting a subpar compiler error sooner might still make you more productive than getting good compiler errors but having to wait for the compiler all the time.

AtNightWeCode 3 years ago

Rust is for sure much easier to learn than some people claim. Some parts of Rust are different, but it is not very difficult. I think the ecosystem is the major problem with Rust.

  • worik 3 years ago

    > I think the ecosystem is the major problem with Rust.

    How? Explain please

    • AtNightWeCode 3 years ago

      I mostly looked at Rust to replace tools written in Go and web APIs written in ASP.NET/C#. In both cases there seem to be options in Rust, but not reliable ones. By reliable I mean that there are just too few people involved, not that it does not work.

hgs3 3 years ago

No mention of Carbon? I was under the impression Google was designing Carbon to be their C++ successor?

  • steveklabnik 3 years ago

    To put it even more plainly than the others: https://github.com/carbon-language/carbon-lang#project-statu...

    > Carbon Language is currently an experimental project. There is no working compiler or toolchain. You can see the demo interpreter for Carbon on compiler-explorer.com.

  • estebank 3 years ago

    Without trying to sound dismissive, Rust is production ready today, Carbon isn't. Even if Carbon was significantly better, that alone accounts for the adoption of Rust and not Carbon today.

  • Hemospectrum 3 years ago

    Carbon is sort of a plan B, for working on existing C++ code that would be too difficult to migrate to Rust. It also doesn't really exist yet.

  • summerlight 3 years ago

    That's more of a moonshot strategy. Rust is more of a safer bet.

  • howinteresting 3 years ago

    Google is a very big company which has many parts that aren't all necessarily aligned.

pcthrowaway 3 years ago

The idea that people might spend 2 months learning Rust and become as productive as in other languages is frankly unbelievable to me. If they're coming from any background other than C/C++, I'm suspicious that people can even become as productive in general (which is fine; reduced productivity is in my mind one of the trade-offs you make for memory safety and increased performance when choosing Rust).

But this is Google, and the people doing self-assessments were likely influenced by the context of operating in cut-throat bureaucracy where self-aggrandisement is a requisite to career progression within the org.

Whether or not this survey was tied to any performance evaluation (and from the article it's not even clear that it wasn't), the relevant thing is whether the employees knew without a doubt that they weren't going to be compared against one another based on their self-assessment.

edit: I'm curious if the people downvoting disagree with my assertion that the survey methodology is flawed, or the assertion that it's unlikely to become as competent in rust in 2 months as you would be in languages you have years of experience with.

  • joshka 3 years ago

    >The idea that people might spend 2 months learning rust and become as productive in other languages is frankly unbelievable to me.

    Upvoted even though I anecdotally disagree with your perspective based on personal experience. I wrote my first line of rust in March this year (just as a hobby), and now am one of the maintainers of a popular TUI framework (Ratatui). I feel just as productive or more than any of the previous languages I've written code in (over the last 30 something years).

    • pcthrowaway 3 years ago

      Interesting. I've been learning/using rust for work for the last 3 months.

      I'm at the point now where I'm productive (took me over a month to even get to that point), but I still feel incredibly slow compared to Typescript. The compilation time doesn't help.

      Anyway, thanks for the perspective.

      I'm still skeptical that the survey reflects honest feedback given Google's culture, but perhaps I'm just biased from how long it's been taking myself and the rest of the team to achieve a higher level of productivity

      • joshka 3 years ago

        Probably one of the biggest speed ups to your inner loop writing / running code is to use something like https://github.com/Canop/bacon/. I used a combination of the docs and GPT chats to increase my learning speed a lot.
