Portable and Interoperable Async Rust
ncameron.org

This is good info, and will be very useful for people porting code from other languages like JavaScript. But I'm still in mourning that async took over the world.
I grew up with cooperative multitasking on Mac OS and used Apple's OpenTransport heavily in the mid-90s before Mac OS X provided sockets. Then I spent several years working on various nonblocking networking approaches like coroutines for games before the web figured out async. I went about as far down the nonblocking IO rabbit hole as anyone would dare.
But there's no there there. After I learned Unix sockets (everything is a stream, even files) it took me to a different level of abstraction where now I literally don't even think about async. I put it in the same mental bin as mutexes, locking IO, busy waiting, polling, even mutability. That's because no matter how it's structured, async code can never get away from the fact that it's a monad. The thing it's returning changes value at some point in the future, which can quickly lead to nondeterministic behavior without constant vigilance. Now maybe my terminology here is not quite right, but this concept is critical to grasp, or else determinism will be difficult to achieve.
I think a far better programming pattern is the Actor model, which is basically the Unix model and piping immutable data around. This is more similar to how Go and Erlang work, although I'm disappointed in pretty much all languages for not enforcing process separation strongly enough.
Until someone really understands everything I just said, I would be very wary of using async and would only use it for porting purposes, never for new development. I feel rather strongly that async is something that we'll be dealing with and cleaning up after for the next couple of decades, at least.
> But I'm still in mourning that async took over the world.
I agree, it does seem like a step backwards in general. However, for Rust it makes sense. There is no runtime, so there is nothing to preempt the green threads/lightweight processes etc. But yeah, with higher level languages like Python, I was disappointed to see how async was emphasized in 3.x over green threads which were already used by a number of projects.
Rust is hard at the start, but easy after. But I feel async keeps it hard. The sad part is that async is SO infectious that you are forced to move everything onto it to align with the rest of the ecosystem.
I also believe the way all of this is presented is not the right abstraction. Actors + CSP is probably the best way. Plus, even if concurrency <> parallelism, I think the parallelism idioms make more sense (pin to the "thread", do fork-joins, use ring buffers for channels, etc.).
However, I suppose the whole issue is that async as-is is easier for the compiler to support mechanically and allows squeezing out the performance/resource usage that is important for Rust.
But maybe keep it hidden and surface another kind of API?
> The sad part is that async is SO infectious that you are forced to move all on it to align with the rest of the ecosystem.
That's the problem with monadic stuff in general. One solution to that might be to keep the async part on the "edge" of your programs (a bit like the functional core, imperative shell pattern or the hexagonal architecture), write all your logic without async and use async only on the edge.
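That edge pattern can be sketched in a few lines of Rust. This is a hand-rolled illustration, not any particular framework's API; `shout` and `handle` are made-up names. The point is that all real logic stays in a plain synchronous function, and only a thin shell at the edge is async:

```rust
use std::future::Future;

// Pure, synchronous core: all the real logic lives here,
// fully testable without any runtime.
fn shout(s: &str) -> String {
    s.to_uppercase()
}

// Thin async shell at the edge: awaits the IO, then delegates to the core.
// `input` stands in for whatever async source your runtime provides
// (a socket read, an HTTP body, etc.) -- it is a placeholder, not a real API.
async fn handle(input: impl Future<Output = String>) -> String {
    let line = input.await;
    shout(&line)
}

fn main() {
    // The core can be exercised directly, with no executor in sight.
    println!("{}", shout("hello")); // HELLO

    // `handle(...)` returns a future that any runtime could drive;
    // constructing it does nothing until it is polled.
    let _fut = handle(std::future::ready(String::from("hello")));
}
```

The design benefit is that the infectious `async` keyword stops at `handle`; everything below it can be unit-tested and reused in non-async contexts.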
Thanks for that, I wasn't familiar with hexagonal architecture.
I think there's a fundamental concept here about dealing with IO in pure functional programming (FP). For me, stuff like monads makes reasoning about IO in FP languages like Haskell really difficult.
But I haven't really encountered that difficulty with ClojureScript. It pauses and resumes endlessly alongside JavaScript, and uses that stop-the-world mechanism to provide and accept data for IO without using monads. So we can write all of the pure functional ClojureScript we want, blissfully unaware that monads even exist. Whereas other FP languages seem to think of IO as this thing that happens while your program is running, and get lost in the weeds.
Where this is important is for static analysis. Without mutability, we can take the whole syntax tree and turn it into intermediate code (I-code) and transform that tree in all kinds of fun ways with concepts from Lisp. But once we have a mutable variable, that entry/exit point of the logic has to be carried along like an imaginary number, which creates forks in the road that are more difficult to analyze because every fork doubles the analysis required, which eventually leads to an explosion of complexity that limits how far we can optimize or even understand imperative programming (IP) languages.
Now imagine an IP language like C, with its myriad of mutable variables on almost every line. If we transpiled that to an FP language, we'd see countless entry/exit points around pure functional code, with intractable complexity around the mutable state stored in the variables. To the point that it can't really be statically analyzed. Then we get excited about fractional improvements in performance, without realizing that we missed out on orders of magnitude higher gains with parallelization and other transformations that could have happened.
To me, once programmers see this, they can't really unsee it. Our whole world is built on imperative code that we just don't understand. And I am starting to feel that this mutable/monadic/async behavior (whatever we want to call it) is an anti-pattern. We should be trying to get to programming that works more like a spreadsheet, where we can play with the inputs and see the results of the logic in real time without side effects.
How does Clojure deal with asynchronous code? In OCaml, you can do IO pretty much everywhere you want, but as soon as you want asynchronous code, you have to use monadic code that will infect everything you use. This is also known as "function coloring" in JavaScript. Having one async part in your code will tend to make everything async, so the best way to tame that is to keep async stuff (which tends to be IO) at the edge of the program. Or be like Go and have preemptive multitasking with "transparent" blocking, where you can write regular code and have everything work asynchronously.
You know, I wasn't entirely sure, but after researching it, the "let" form in Clojure is a monad.
Monads are something that I keep trying to learn, but for whatever reason, the info just won't stick. After decades of doing this, my brain automatically seeks out the laziest way of doing things (while still being deterministic, testable, automatable, etc). Monads seem to be a very "hands on" way of doing FP programming, which to me defeats the whole purpose. I would probably only use them in an emergency, or to port existing functionality from an imperative language, like I mentioned.
These are the first 3 links that popped up for my Google context:
https://github.com/khinsen/monads-in-clojure/blob/master/PAR...
https://cuddly-octo-palm-tree.com/posts/2021-10-03-monads-cl...
https://functionalhuman.medium.com/functional-programing-wit...
Aspects of this do look eerily similar to async (promises/futures), like maybe monads could be implemented via nullable/optional values. I think of promises as polling a nonblocking stream result until the point in the code where the result is needed, and then blocking until the promise is fulfilled. Which is basically fork/join of threads of execution, with different syntax.
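The fork/join analogy above can be made concrete with plain std threads: `spawn` is the "fork" (the promise starts resolving immediately), and `join` is the blocking "await" at the point where the result is needed. A minimal sketch:

```rust
use std::thread;

fn main() {
    // "Fork": start computing the result in the background,
    // like a promise that begins resolving as soon as it is created.
    let promise = thread::spawn(|| (1u32..=4).product::<u32>());

    // ... other work can happen here while the "promise" runs ...

    // "Join": block only at the point the result is actually needed,
    // like awaiting the promise.
    let value = promise.join().unwrap();
    println!("{value}"); // 24
}
```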
The articles mention that Haskell has syntactic support (I assume sugar) for monads. I'm nearly always against syntactic sugar and domain-specific languages (DSL) though, because they obfuscate what's really going on and double the mental load by creating two or more ways of doing the same thing. It would be fine if languages let us instantly reformat the code by transpiling with various languages features toggled (like Go's gofmt but more than just whitespace) so we could see what the syntactic sugar is doing. But nobody does anything like that, which is why I'm skeptical.
I feel like monads are one way of approaching mutability, but there are others. I'm curious how shadowing variables and even stuff like Rust's borrow checker plays into this. Like why couldn't we have a pure FP language with only immutable data and no borrow checker? One that executes in its entirety when new data arrives on a queue like STDIN, or when a queue like STDOUT has a slot available, and otherwise blocks? I guess fundamentally, I don't understand why a spreadsheet needs scripting (written in mutable languages of all things!) or FP needs monads.
Another insight is that a monad isn't really an optional value, it's a way of executing multiple potential branches of logic. Which is similar to electrical circuits or switching at railway stations. This happens in shaders when both sides of a branch are executed, but only the outcome that matches the result of the branch is kept:
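The "both sides executed, only one outcome kept" behavior of shaders can be sketched in ordinary code as a branchless select: both operands are computed regardless of the condition, and a 0/1 mask picks the survivor. (This is an illustration of the idea, not shader code.)

```rust
// Branchless select, shader-style: both `a` and `b` have already been
// "executed" by the time we get here; the mask just picks the survivor.
fn select(cond: bool, a: f32, b: f32) -> f32 {
    let mask = cond as i32 as f32; // 1.0 if true, 0.0 if false
    mask * a + (1.0 - mask) * b
}

fn main() {
    println!("{}", select(true, 10.0, 20.0));  // 10
    println!("{}", select(false, 10.0, 20.0)); // 20
}
```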
It's not that simple (in Rust).
DB interfacing is pretty deep in the chain, so you can't avoid it (without re-implementing what SQL libraries already do).
OCaml is currently going through something similar, with some solutions in sight. The "motivation" part of the eio (https://github.com/ocaml-multicore/eio) documentation is a great introduction:
"The Unix library provided with OCaml uses blocking IO operations, and is not well suited to concurrent programs such as network services or interactive applications. For many years, the solution to this has been libraries such as Lwt and Async, which provide a monadic interface. These libraries allow writing code as if there were multiple threads of execution, each with their own stack, but the stacks are simulated using the heap.
The multicore version of OCaml adds support for "effects", removing the need for monadic code here. Using effects brings several advantages:
1. It's faster, because no heap allocations are needed to simulate a stack.
2. Concurrent code can be written in the same style as plain non-concurrent code.
3. Because a real stack is used, backtraces from exceptions work as expected.
4. Other features of the language (such as try ... with ...) can be used in concurrent code.
Additionally, modern operating systems provide high-performance alternatives to the old Unix select call. For example, Linux's io-uring system has applications write the operations they want to perform to a ring buffer, which Linux handles asynchronously."
How does programming with "effects" actually work? I've read the linked page, and I understand the advantages they're claiming, but I don't see any explanation of what effects actually are.
There is a lot of information in "Concurrent Programming with Effect Handlers" (https://github.com/ocamllabs/ocaml-effects-tutorial) on what it looks like and how it's implemented. There is also more recent information in the September 2021 edition of the Multicore OCaml newsletter: https://discuss.ocaml.org/t/multicore-ocaml-september-2021-e..., with a link to a paper about effects in OCaml.
The last time [0] Effects were discussed, these recommendations were made [1],[2].
[0] https://news.ycombinator.com/item?id=28838099
[1] https://www.youtube.com/watch?v=hrBq8R_kxI0
[2] https://overreacted.io/algebraic-effects-for-the-rest-of-us/
"Wenn man nicht mehr weiter weiß, gründet man einen Arbeitskreis." - Ancient German proverb. (Translation: "If you don't know what to do next, you set up a working group")
Well, Rust has always been about dozens of teams, myriad committees, sub-committees, groups, sub-groups, working groups, governance boards, foundations and so on. Although I don't know that they have the huge budget needed to run a Fortune 500 or government-style bureaucracy. Or is it that the same people appear in ten different places?
Budget doesn't really have anything to do with it. The vast majority of people on a team or working group aren't paid to work on Rust; they're volunteers.
On a somewhat unrelated note: What consequences did the Rust mod team resignation lead to? The last thing I heard was the blog post: https://blog.rust-lang.org/inside-rust/2021/11/25/in-respons...
But I haven't seen any public discussions on the future of Rust governance, how to make the core team accountable, or other consequences since.
From what I've heard, they are still working things through. These things take time and we just had a holiday in the US.
The Rust core team is/was never accountable. Which is fine, because the language work is thankfully done elsewhere.
What about the former core team member who also raised similarly-vague alarm bells about Rust's governance, and Amazon's involvement?
I'm somewhat invested in Rust, and it's a bit worrying to see this from two places.
He is a current core team member, not former [0].
I'm not too worried about his claims. His claim was about the Foundation being shadow-controlled by Amazon (they employed the board chair, who had more influence while the Foundation lacked an Executive Director). The Foundation manages the assets (donations, legal marks, etc.) and has no control of the actual language. For Amazon to take control, they'd have to get the board involved and then leverage the assets against the developers. Nothing like this has happened, and there is now an Executive Director.
When observing that incident, I noticed that those joining the bandwagon didn't seem to have direct knowledge of the situation. Those that did stayed quiet, or made general statements that the claims about why there wasn't an Executive Director yet (interim or not) weren't accurate. To me, this suggests something happened that, professionally, people feel should be kept confidential, and I try to support that by not speculating further.
Why? The language related work takes place elsewhere.
The term “core team” is just a misnomer.
I mean, I am pretty far from all of that and I'm not really sure how it works. It's just generally a bit worrying when you hear from a few independent sources: "don't trust (X), they're behaving badly"
To clarify then, the two groups made accusations against different people:
- A core team member made an accusation against Amazon
- The mods had concerns with the lack of oversight of the core team
I take the opposite tack. Who, precisely, is clamoring for this? Why not "let 100 flowers grow" (the present condition) and allow the various solutions to mature to the point that a de facto standard emerges? The claim is made: "choosing a runtime locks you into a subset of the ecosystem," to which I answer, "So what?" If I want to log my server events or take advantage of a protocol encoding method or compress my data -- all of these and every other "big" choice I make locks me into a similar library ecosystem niche. I despise this "everything's amazing and nobody's happy" vibe. The async library authors have plenty on their plates without some sub-committee crashing in and dictating their features and release schedules.
By the way, using Go as an example is a joke since -- from the early Go bootcamp I attended in 2014, the best practice has been to use a 3rd-party http router (these days: gorilla? httprouter? chi? etc) instead of the one provided in the standard library. Instead of being _told_ what to use, let's get back to being interested enough that we read the docs, take in the reviews & benchmarks, and decide for ourselves.
The issue is that the async/await keywords are part of the standard language, but then you are forced to pick a non-standard runtime.
Your Go example is not quite comparable:

Go: (lang, libs)

- You can mix and match any libraries.

Rust: (lang, runtime, libs)

- Now you can only choose libraries for your runtime. This dilutes the time investment of crate developers and the utility of Cargo crates, since you want a general async thing but it is tied to a specific runtime.
I think the Rust team should have included a solid zero-config runtime, but allowed it to be replaced.
The portability is needed to let many runtimes grow without pains of fragmentation. Currently tokio dominates, and you either use tokio, or you lose access to a large portion of the ecosystem.
This doesn't have to be a blessed runtime in std, but could be just a set of common interfaces (basics like AsyncRead, sleep, and spawn), so that async crates don't have to directly depend on a specific runtime.
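For illustration, here is one hedged sketch of what coding against a small interface instead of a runtime could look like. `Timer` and `retry_twice` are invented for this example, not proposals from the article or any existing crate, and the sleep is blocking to keep the sketch std-only; a real async version would return a future instead:

```rust
use std::time::Duration;

// Hypothetical portability trait a library could depend on,
// instead of depending on a specific runtime.
trait Timer {
    fn sleep(&self, d: Duration);
}

// Library code: generic over the interface, tied to no runtime.
fn retry_twice<T: Timer>(timer: &T, mut op: impl FnMut() -> bool) -> bool {
    for _ in 0..2 {
        if op() {
            return true;
        }
        timer.sleep(Duration::from_millis(1));
    }
    false
}

// One "runtime's" implementation (plain blocking std, for the sketch).
struct StdTimer;

impl Timer for StdTimer {
    fn sleep(&self, d: Duration) {
        std::thread::sleep(d);
    }
}

fn main() {
    let mut calls = 0;
    // The library function works with any Timer implementation.
    let ok = retry_twice(&StdTimer, || {
        calls += 1;
        calls == 2 // succeed on the second attempt
    });
    println!("{ok} after {calls} calls"); // true after 2 calls
}
```

Swapping tokio's timer for smol's would then be a one-line change in the application, with the library untouched.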
> By the way, using Go as an example is a joke since -- from the early Go bootcamp I attended in 2014, the best practice has been to use a 3rd-party http router (these days: gorilla? httprouter? chi? etc) instead of the one provided in the standard library.
You're mistaken. Using a third party router doesn't lock you into a particular subset of the ecosystem. E.g., I tend to use the Gorilla router by default, but I can use it with any middleware that implements the standard http.Handler interface.
A 'toy' executor is worthless for those who need to program in async and worthless for those who don't. Toy functionality has always been, and should continue to be, supplied by libraries.
Why has Rust struggled so much with this, where Go has succeeded from the start with its language-level “goroutine” concept and runtime? Maybe it just wasn’t a focal area for the original Rust designers?
>Why has Rust struggled so much with this, where Go has succeeded from the start with its language-level “goroutine” concept and runtime?
Rust made the deliberate decision to avoid the heavier Go goroutines runtime model after early alpha/beta experiments showed it conflicted with Rust's low-level design. I found 3 links to some history of that rationale in a previous comment:
https://news.ycombinator.com/item?id=28660089
And some more links:
https://stackoverflow.com/questions/29428318/why-did-rust-re...
https://github.com/rust-lang/rfcs/blob/master/text/0230-remo...
And lots of debate in this previous thread: https://news.ycombinator.com/item?id=10225903
Early on in Rust's history, it had something similar to Go's goroutines with n:m green thread scheduling and libuv for async everything. Some other languages (e.g. Haskell/GHC) also have this kind of system.
But this practically requires some kind of garbage collection and a fat runtime.
I think it was a good decision on the Rust team to abandon this and go for a low level systems programming language. Otherwise it would've been just another Go-like language that isn't really usable in low level systems programming.
Implementing portable async language features without a fat runtime or garbage collection is novel work so it's no wonder that it's taking its own sweet time to reach maturity.
The saddest part of learning Rust was discovering that there are no goroutines, that async works like Python's, and that everything needs to be written twice to support both async and blocking styles. Like it was 20 years ago all over again and I'm still trying to mix Twisted and stdlib Python. I got all excited thinking of how well the borrow checker would work with coroutines, only to discover it got nobbled because Rust's use cases include embedded systems and no runtime (unlike Go's 9MB hello_world.exe). I have no idea if Rust could evolve its concurrent programming support into something better than Go's, even if it did drop some of its shackles.
I find traditional multi threading a pleasure in Rust, and it works really well with the borrow checker and how the type system is designed (like the Send and Sync traits).
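A small std-only example of that style: channels plus scoped threads, with the compiler checking `Send` on everything that crosses a thread boundary.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Scoped threads are joined automatically when the scope ends.
    thread::scope(|s| {
        for id in 0..4 {
            let tx = tx.clone();
            // `move` transfers ownership into the thread; the compiler
            // verifies that everything captured is Send.
            s.spawn(move || {
                tx.send(id * id).unwrap();
            });
        }
    });

    drop(tx); // close the channel so the iterator below terminates

    let mut results: Vec<i32> = rx.iter().collect();
    results.sort(); // the threads finish in nondeterministic order
    println!("{results:?}"); // [0, 1, 4, 9]
}
```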
On the semantic side, extending it to an N:M threading model like Erlang or Go would work great. But that model only seems to work well if you basically make the entire language async, which conflicts with too many of Rust's goals. So we are left with the somewhat awkward state of async as a second-class citizen.
I only have rudimentary knowledge of Go (but I think the blocked/green-thread automatic scheduling is excellent).
How does Go nest async calls?
```go
func f() {}

func g() {}

func h() {
	go g()
	go f()
}
```
What happens on

```go
f()
```

Are the g() and f() calls inside h() blocking? Or are they async, with the block happening at the point of return? That would be the main difference from languages with an async keyword, where you need to be explicit about blocking.
The go keyword executes the called function asynchronously, so g() and f() won't block h(). If you need a computed result from g() or f(), then you'll need to use a channel or a shared mutex-guarded value to get it. A channel is the correct default choice; the mutex should only be used if you need it for performance or other reasons.
I understood from the OP that in Go the sync and async code would be the same, contrary to e.g. Rust, where you have async/await. Go achieves this with coroutines and the go keyword.

```go
f()
```

is a sync call to the function f, and the function f uses async calls inside. Somewhere there needs to be a transition from the async to the sync context (aka wait/block).

I wondered where this happens.

From your comment I assume there is a difference: in sync code I would do

```go
x = f()
```

while in async code I would use

```go
f(channel)
```

?
Close. If I want to run some function asynchronously, I use the go keyword to execute it. But I can't get a return value if I do that, so I need some other mechanism to get the return value. One way is to pass a channel into the function and expect the function to return the value to me via that channel, like so:

```go
func f(ch chan int) {
	ch <- 1 // depending on the channel's buffering, this send can block
}
```

Then I can call that function asynchronously:

```go
go f(ch)
```

and later, when I want the value from f, I can retrieve it from the channel:

```go
i := <-ch // this is a blocking call
```

The net effect of all the above is that async and non-async code is highly composable. If I have a function that computes a value and I want to get that value asynchronously, I can wrap it in a function that uses a channel to get the value to me. Every function is a potential asynchronous function:

```go
go func() { ch <- f() }()
```
> The net effect of all the above is that async and non-async code is highly composable.

How does this differ from Rust (or TypeScript etc.), where we would use

```rust
async fn f() -> i32 { /* ... */ }

fn g() { f().wait() }
```

to block?

Thanks a lot! Can only upvote you once, sadly.
It's extremely challenging to make async that is usable without introducing a garbage collector or a whole lot of runtime overhead.
A better comparison would be between the Rust and C++ paths to async - C++ also spent years designing their async system, and the end result is divisive at best.
And we are not even done yet. The Networking TS seems to have fallen out of favor and now it seems we are going to get executors + senders/receivers, which I personally think is pretty cool actually.
Wonder if the Rust team is aware of the work on the C++ side of the fence on the executors+senders/receivers approach: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p230...
Not saying that is the ultimate answer to building async and/or parallel algorithms, but being aware of what others in the same space are doing is certainly useful.
True, and as the sibling comment notes, it is still not fully there. Yet since the vocabulary types are part of the standard library, the code I have running on top of C++/WinRT might equally well run on top of HPX or cppcoro, just by changing the includes and the library being linked.
Go is a very different language than rust. Go has automatic memory management & garbage collection. This automatically disqualifies it from being used in many scenarios that rust is designed to support, like embedded systems.
Go’s runtime model just makes stuff like this vastly simpler. Rust can’t impose the same kind of runtime model that go has.
While Go isn't designed for embedded systems, it can run on them.
TinyGo is another Go compiler, intended for embedded systems. And now it's officially sponsored by Google.
TamaGo is another example, https://www.f-secure.com/en/consulting/foundry/usb-armory
TinyGo is awesome.
Lots of embedded systems are very happy using garbage collection and languages with runtimes.
Sibling comments have explained the detail. The pithier explanation perhaps is that Rust is intended as a 'systems language', and it interprets that as meaning there should be no runtime. Or, more simply, it ought to be possible to call a Rust function from C without providing an additional argument that encapsulates a Rust 'environment'. (C effectively defines the baseline here.)
Go and Java (with Loom) have these lovely facilities, but it is hard to interface with them if your language lacks these features. I find it odd that C#, javascript, and python don't provide the smoother async Go experience despite having runtimes / VMs.
Funny you should ask that... about 6 years ago, Mozilla released an event-handling library written in go named Heka. It made use of go's built-in goroutines and channels and what made it really cool was that it had an embedded lua interpreter so people could update lua scripts in their event processing systems to alter the behavior (such as reformatting dates, etc) without needing to re-compile the solution. It got pretty popular and you can still find YouTube tutorials on using it to this day.
Unfortunately, according to one of the lead developers, the system couldn't keep up with Mozilla's throughput and reliability requirements due to limitations of Go's built-in features.[0] They announced they would re-write a new solution in C ("Hindsight"), and they basically left an entire community of users high and dry, since the Go-based project couldn't be salvaged because it relied so heavily on those built-in features.
[0] https://heka.mozilla.narkive.com/9heQ11hz/state-and-future-o...
> Why has Rust struggled so much with this, where Go has succeeded from the start with its language-level “goroutine” concept and runtime?
It’s not that Rust has struggled, it was never Rust’s priority to have a runtime or high level async code. It had very different goals to Go.
It’s like asking “why has C struggled to implement Promises like JavaScript”? The languages serve different purposes.
You’re right, it wasn’t their initial focal point, but later on Rust wanted to offer the chance of having a Go-like runtime without destabilising low-level performance at the core level, i.e. only those that use it pay for it, and those that don’t use it aren’t affected.
Offering “zero cost” futures etc is very difficult to do.
See https://aturon.github.io/blog/2016/08/11/futures/ for more info, old but still relevant (including the chart)
zero cost abstraction and no gc
All design trade-offs have a cost. In this case the costs are reduced usability and async struggles.
"Zero-cost abstraction" is used to mean zero runtime cost (in release builds), so if an abstraction would break that, it wouldn't be suitable for this goal.
Like adding a GC for the async feature.
The same could be said for C++. And yet, while C++20 also doesn't have an official runtime in std (C++23 will fix that, assuming executors land), the vocabulary types required for interoperability across runtimes are part of the coroutines design.
Rust also has the vocabulary types required for interoperability, such as Future and Waker. Or what exactly make C++ more interoperable?
Does it? https://book.async.rs/overview/std-and-library-futures.html
C++ ones can interoperate with any type that plugs into the compiler magic expected by the coroutine code rewrite.
If you want to deep dive into how Visual C++ does it, and how WinRT gets plugged into C++ co-routines, here is a very lengthy set of blog posts.
https://devblogs.microsoft.com/oldnewthing/20210504-01/?p=10...
Now, back to Rust: how can I interoperate across tokio, async-std, smol and fuchsia-std as easily as I can between WinRT and the others?
The std::future::Future from the rust standard library works with every runtime.
Not sure I understand, what kind of interoperability you are talking about. What kinds of code works in C++ across runtimes, for which the equivalent in Rust doesn't?
In C++ you don't have the scenario you get in Rust, where one is forced to use a specific async runtime for library xyz because it depends on having tokio as the runtime.
Or has that situation been sorted out by now?
In Rust you only are forced to use a specific runtime if you want to use its API. For example to spawn new tasks, or to block on a future. I believe that would be the same in C++.
In Rust, you don't need to use a specific runtime if you just want to use async function in your library.
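A minimal demonstration of that point, using only the standard library: the async fn below knows nothing about any runtime, and the deliberately naive `block_on` stands in for whichever executor ends up driving it. (Real runtimes park the thread and use real wakers instead of spinning; this is a sketch, not a usable executor.)

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// "Library" code: an async fn that depends only on std, no runtime at all.
async fn double(x: u32) -> u32 {
    x * 2
}

// A deliberately naive executor standing in for "any runtime".
fn block_on<F: Future>(fut: F) -> F::Output {
    // A waker that does nothing; enough to poll futures that never park.
    fn raw_waker() -> RawWaker {
        unsafe fn clone(_: *const ()) -> RawWaker { raw_waker() }
        unsafe fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut: Pin<Box<F>> = Box::pin(fut);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            // A real runtime would sleep until woken; we just spin.
            Poll::Pending => std::thread::yield_now(),
        }
    }
}

fn main() {
    println!("{}", block_on(double(21))); // 42
}
```

Because `double` only produces a `std::future::Future`, tokio, async-std, smol, or this toy loop can all drive it; the coupling to a runtime only appears when a library needs runtime services like spawning or timers.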
Go has a single runtime, Rust supports multiple async runtimes. Rust needs to support multiple runtimes because in low level space there is no one size fits all solution.
For example, Go runtime imposes unavoidable overhead in memory usage, because each goroutine must have its own allocated stack (Rust futures, on the other hand, are stackless). Rust runs on low memory platforms where Go isn't really suitable.
Rust doesn't want to impose its own one-size-fits-all runtime on all users of the language, because Rust wants to work in places where Go isn't a good fit, e.g. microcontrollers, kernels, or seamlessly on top of other languages' runtimes.
Architecture of an efficient async runtime is going to be different for 128-core server vs single-threaded chip with barely any RAM. In Rust you can write your own runtime to your needs, rather than fight overhead of a big runtime on a small device, or struggle to scale a dumb runtime to complex workloads.
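The "stackless" point can be seen directly: an async fn compiles to an ordinary state-machine value whose size is fixed at compile time, in contrast to a goroutine, which needs its own growable stack. A tiny std-only sketch:

```rust
async fn add(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    // The future is an ordinary value: its entire "stack" is this struct.
    // No heap allocation, no guard pages, no stack-growth machinery.
    let fut = add(2, 2);
    println!("future size: {} bytes", std::mem::size_of_val(&fut));
}
```

The printed size is a handful of bytes (the captured arguments plus a discriminant), which is why Rust futures can fit on microcontrollers where even a minimal per-task stack would be too expensive.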
I think part of it is because computer science hasn't really nailed the right abstraction for concurrent code execution.
For instance, C is a great abstraction. You take assembly language, abstract away manual management of registers with variables and pointers, add structured types to describe memory layout, standardize flow of control operations, and add functions to enable code reusability, and you have something which is very easy to work with and also to understand. It's not 100% on par with assembly in terms of performance, but it's pretty darned close, and with a little bit of practice it's very easy to look at a block of C code and basically understand what equivalent assembly it compiles to. It's a great abstraction, and it's no wonder that a vast majority of the languages which have come after it have borrowed most of its major features.
I would argue we haven't really had a "great abstraction" to the same level since then*. There have been efforts to abstract away memory management the way register management has been abstracted away, and many of them have been successful for a lot of use-cases, but not to the point that everyone can forget about memory management the way the vast majority of us can forget about register management. Garbage collectors can be too slow or too wasteful for a lot of use-cases, and you need essentially another program you didn't write to pull it off. In a GC'd language it's not so trivial to look at a block of high-level code and predict what your CPU will do. There are other approaches: like the structured approaches of Rust and Swift which are quite interesting, but they're far from proven at this point.
Similarly I think we're not quite there yet with concurrent programming. As far as the transparency topic, a lot of async implementations are more in the direction of garbage collectors, where the compiler rips apart your code and builds a state machine in its place. It's not hard to believe that the result will be difficult to work with and reason about in some cases.
And maybe the problem is that most approaches to async are trying to cram concurrent execution into that C-like abstraction, which is such an elegant abstraction precisely because it models single-threaded execution. Maybe concurrent programming needs to be re-thought from first principles, with different primitives involved.
*Aside: if there is another "great abstraction" on the horizon, I believe it to be ADTs (algebraic data types). That is a feature of programming which feels like a clear step forward with no clear downsides. It's a shame that they haven't been included in Zig.
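For readers unfamiliar with the term, here is a minimal Rust sketch of an ADT: a tagged union whose variants carry data, consumed with an exhaustive match (the shapes and names are invented for the example):

```rust
// A sum type: a value is exactly one of these variants,
// and each variant carries its own data.
enum Shape {
    Circle { r: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // The compiler rejects this match if any variant is left unhandled.
    match s {
        Shape::Circle { r } => std::f64::consts::PI * r * r,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    let shapes = [Shape::Circle { r: 1.0 }, Shape::Rect { w: 3.0, h: 4.0 }];
    for s in &shapes {
        println!("{}", area(s));
    }
}
```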
Zig has tagged unions and the corresponding switch expressions, is that not equivalent?
I was not aware of these, either it was added since I kicked the tires on Zig or I just wasn't aware of it, but yes it looks pretty good!
The one drawback I see is that it seems a bit verbose: i.e. the tag set itself has to be declared as a separate enum, and then the tags need to be repeated inside the union.
So it looks to be slightly bolted-on and unergonomic (similar to TypeScript's implementation) but I haven't worked with it so maybe I am missing something.
You can use union(enum) to avoid having to write a separate enum definition for tags and std.meta.FieldEnum() exists should you also want to derive an enum from the union later on.
Rust never struggles! It just takes its time to be perfect <3