WebAssembly adoption: Is slow and steady winning the race?
thenewstack.io

This quote from the end of the article is good:
> With this in mind, maybe Wasm is mainstream. Spencer notes, for instance, that anyone who browses the web is likely to be interacting with WebAssembly on a daily basis.
I think that's correct. Non-Web usecases are a more complicated story (that the article focuses on), but the Web side is largely complete and successful, and that was the original purpose of Wasm. In that sense it's already succeeded.
> the Web side is largely complete and successful
Perhaps from a C++ perspective, say a monolith with kilobytes of bindings driving a canvas. Yet
> the original purpose of Wasm
includes, as per the charter, "and interoperate gracefully with JavaScript and the Web", which is far from complete and, thanks to WASI and the Component Model, hardly improving. I guess it will remain a mystery how WASI could basically replace something as obviously useful as WebIDL bindings - that is, until someone figures it out.
> Perhaps from a C++ perspective, say a monolith with kilobytes of bindings driving a canvas.
I get the desire for something more "elegant/clean" in this space, but JS is just very hard to beat on the bindings/glue side. I believe that's why approaches like WebIDL bindings did not turn out to be more efficient. I measured that on both code size and speed in Emscripten, for example, back in the day, and JS was good enough.
With WasmGC it is today possible to ship very small binaries, and the size of the bindings alongside them is generally not an issue, from what I hear from WasmGC-using toolchains like Java, Kotlin, and Dart. In fact they benefit a lot from the flexibility of those bindings, e.g. by being able to pull in random JS things (like strings support, RegExp, etc.).
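As an illustration of that flexibility: a Wasm module's imports are plain JS values supplied at instantiation, so a toolchain can hand the module arbitrary JS capabilities. Here is a minimal node-runnable sketch; the hand-assembled module and its `js.log` import are made up for illustration (real toolchains generate this glue for you):

```javascript
// A tiny hand-assembled wasm module that imports js.log: (i32) -> i32
// and exports run(x), which simply returns js.log(x).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm, version 1
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f,       // type: (i32) -> i32
  0x02, 0x0a, 0x01, 0x02, 0x6a, 0x73, 0x03, 0x6c,       // import "js" "log"
  0x6f, 0x67, 0x00, 0x00,
  0x03, 0x02, 0x01, 0x00,                               // one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export "run"
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x20, 0x00, 0x10, 0x00, // body: local.get 0,
  0x0b,                                                 //   call 0, end
]);

// The "binding" is just a JS object; any JS capability can be passed in.
const imports = { js: { log: (x) => x * 2 } };
const { exports } = new WebAssembly.Instance(
  new WebAssembly.Module(bytes), imports);

console.log(exports.run(21)); // 42: wasm called back into JS
```

The same mechanism is how toolchains pull in strings support, RegExp, and other JS facilities.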
Can you elaborate on why you think WASI and the Component Model are ruining that? I am only vaguely familiar with the state of WASM, and squinting from this distance it seemed that WASI and the CM were meant to solve a lot of problems. I'm not familiar with WebIDL bindings.
The WASM component model looks like it wants to free us from the "C ABI" (yes, I know technically there's no such thing) and give us a "high-level, language-agnostic ABI". But when you take a closer look, it's really just a hodgepodge of base types from the Rust stdlib in a trenchcoat, so it basically replaces the "C ABI" with something that's a bit friendlier for Rust coders to work with, but that's about it. (At least that's the gist; of course there's a lot more than the ABI spec, but that's also a problem with the Component Model: it lacks focus.)
> the Web side is largely complete and successful
Can WASM access the DOM? Or do you still have to go through a glacially slow Javascript export shim?
I thought it was the DOM API that was glacially slow?
Would WASM directly accessing the DOM be an improvement of more than, say, 5%?
The DOM has two different kinds of APIs: a batch one and a piecewise one. The batch API is incredibly optimized in all the browsers, while the piecewise one usually does not receive any attention at all.
Now, the FFI interface between WASM and JS is incredibly slow. That's by design. I guess the goal was to push every important API into WASM and leave the FFI just for interfacing code, like in normal environments. People work around it by batching their data and crossing the boundary as few times as possible, but just like with the piecewise JS DOM interface, this is a severe constraint on your code.
Calling between WASM and JS really hasn't been slow in browsers since around 2018:
https://hacks.mozilla.org/2018/10/calls-between-javascript-a...
It would be extremely surprising if the calling overhead into JS to manipulate the DOM were noticeable in profiling instead of being dominated by the actual DOM manipulation. (I mainly have experience with WebGL and WebGPU, where each call also goes through a JS shim, with a sometimes nontrivial amount of work happening in the JS shim, and the actual call overhead from WASM into JS is absolutely negligible compared to the overall cost, which is typically inside the WebGL and WebGPU implementation.)
Also: if performance matters, don't use the DOM in the first place!
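To make the boundary concrete: crossing from JS into wasm (and back) is an ordinary function call. A minimal node-runnable sketch, using the canonical hand-assembled two-parameter `add` module:

```javascript
// Hand-assembled wasm module exporting add(a, b) = a + b on i32s.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm, version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, // local.get 0/1,
  0x6a, 0x0b,                                           //   i32.add, end
]);

const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));

// From JS's point of view this is just a regular function call; the
// trampoline optimizations described in the Mozilla post make it cheap.
console.log(exports.add(2, 3)); // 5
```

The remaining cost in DOM-heavy code is the DOM work itself, not this call.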
> The DOM has two different APIs, a batch and a piecewise ones. The batch one is incredibly optimized in all the browsers, while the piecewise one usually do not receive any attention at all.
What are you talking about?
I'm going to go out on a limb and guess that they mean that if you want to modify the DOM with JS, you cannot batch your updates into an atomic transaction; every modification is applied immediately. However, presumably there is something, somewhere, that can collect modifications and apply them concurrently, because this would be such an insanely obvious optimization to miss for non-JS-driven updates to the DOM (such as initial page load).
It isn't two APIs, but there is effectively some batching. The main thing to focus on is triggering layouts. Basically you want to do all of your reads together then all of your writes.
For example, imagine we are trying to move one element to the location of another. This code will be slow, because the read of `leader.offsetTop` needs to recalculate the layout to see if the preceding write of `follower.style.left` changed the position of `leader`:

```javascript
let x = leader.offsetLeft;
follower.style.left = `${x}px`;
let y = leader.offsetTop;   // forces a relayout to answer this read
follower.style.top = `${y}px`;
```

However, the following code is fast, because it can read all of the values that were calculated during the last render (for the user to see) and doesn't trigger any intermediate relayouts:

```javascript
let x = leader.offsetLeft;  // both reads served from the last layout
let y = leader.offsetTop;
follower.style.left = `${x}px`;
follower.style.top = `${y}px`;
```

But I am not aware of any explicit batching API.
Really? I can't think of a mainstream (as in used daily) site that uses WASM.
Do you have some in mind?
Sure, many major sites use it, such as Zoom, Google Sheets, and Photoshop. As the article mentions, you wouldn't know they are using Wasm under the hood unless you open the devtools and inspect the details.
Many more examples, such as those mentioned in this talk about Wasm usage in Google:
https://www.youtube.com/watch?v=2En8cj6xlv4
They mention Google Photos, Google Meet, Google Earth, TensorFlow.js, Ink, CanvasKit, Flutter, etc. And that is just inside Google - many other companies are using it, big and small, such as Figma, Unity, Adobe, etc.
https://madewithwebassembly.com/
I see a few major sites and some major plugins. The definition of "mainstream (as in used daily) site" may differ between groups.
Figma has been powered by WebAssembly for the past 7 years.
What parts of Figma though? Is it just used for CPU-intensive operations like triangulation of SVGs to render in WebGL, or is the site's core logic all done in wasm?
I'm sorry, I don't use Figma but I'm really curious about its tech stack
They have a bunch of nice blogposts about that:
https://www.figma.com/blog/webassembly-cut-figmas-load-time-...
The main part of the app is in wasm, I believe (it's in C++ that they compile).
Lichess and Chess.com use it for local position evaluation and playing against the computer (in analysis).
Yep, they compiled Stockfish to wasm. Now everyone has access to the best chess engine in the world, right there in the browser, no need to install anything. Really nifty.
uBlock Origin first used Wasm in 2018: https://www.ghacks.net/2018/12/03/ublock-origin-performance-...
> “WASI preview two is a checkpoint of stable interfaces,” Spencer said. “The instability of preview one, [which] may have scared or prevented you from using Wasm in production, is now past us.”
I wasn't aware a new version of WASI was out. I've been following it for years and its potential is still exciting.
Some quotes:
> WASI 0.2 also introduces two distinct “worlds,” which describe different kinds of hosts that WebAssembly can run on. The first of these worlds is the “command” world, resembling a traditional POSIX command-line application with access to the file system, socket and terminal.
> The second of these worlds is the “HTTP proxy,” which characterizes platforms, like Fastly, that can send and receive streaming HTTP requests and responses, serving as forward or reverse proxies. These worlds represent the beginning of a broader ecosystem, offering developers multiple avenues to explore and innovate, and more worlds will be added to WASI in the future.
The real value in Wasm is in the Web use case: bring to the most popular platform software that was impossible (or very difficult) to bring before.
Just look at Ruffle (Flash Player reimplementation), V86 (x86 virtual machine), Google Earth, ...
Wasm on the Server is a fad, and sadly it's hijacking resources and design space from the real thing. WASI is a prime example: because of the "Component Model", any attempt at tighter integration with the browser APIs stopped, in the hope of a magical solution to all computing problems.
I have high hopes for wasm on the desktop for app plugins. Loading a whole binary DLL/SO is such a risk. VMs like Lua don't give you enough performance. VMs like Python and Java require huge effort to embed.
Once the SIMD stuff is in, it may even have enough performance for music plugins. wasm codecs would be real cool too. Nobody likes having to build and install ffmpeg. `ffmpeg-4.0.wasm`... just imagine.
> VMs like Python and Java require huge effort to embed.
As a (backend) Java dev, I couldn't let that stand. And indeed, I found out Java now offers a tool called jlink, with first-class support, that will create a system-native executable (e.g. a .dmg on my MacBook) with the JRE embedded. The JavaFX example project I found could be downloaded, built and run from its native release, all within less than 5 minutes.
Mind you, the Hello World example executable is 80 MB and takes 6 seconds to start, but given the garbage we are used to in desktop applications, this totally holds up - doesn't everything else usually bundle all of Chrome? I have also read there are simple command-line options to reduce the size to about 50 MB.
Let's please not justify bad/bloaty plugin/extension runtimes with "at least it's not as bad as Electron".
A framework producing a Hello World application, even if it has a GUI, with a double- or triple-digit size in MB and memory consumption on the same order, is doing something very wrong.
With all due respect, that's just a dumb conclusion. Optimizing for Hello World is dumb - if the primary use case of a given framework is larger scale, then such a size is more than reasonable.
Well, you are partially correct that the tool in question is optimized for larger projects. But the fact that there isn't an option (in the standard lib) for smaller projects is absolutely a problem. I think that is their point.
The example in question is very much a "include-more-than-you-need-by-default" problem. So, bloat.
Java lets you include only the parts of the JDK that you need for your app. These parts are called modules.
Unfortunately, all modules MUST include the base module (java.base), which is ~22MB. And if you add a GUI (java.desktop), that's another ~13MB. And since the linked example uses Java FX, there is yet another layer on top of that.
Java is doing a lot of work to bring that number down by a lot. More to come.
May I ask how you solve the problem that almost none of the 3rd-party libraries in the Java ecosystem use modules?
Last time I checked, the jlink tool refused to make use of those "automatic module" dependencies when creating an image.
That information is several years out of date. At best, the build tools in the ecosystem don't do a great job of supporting modules, but that's about it.
And your comment about `jlink` MIGHT be true if your application is not modular / does not use modules with a well-formed `module-info.java`. I haven't tried an automatic module in a long time. If your application is modular, everything happens as expected. I do this for the builds of all of my projects, and last night's build confirms that it still works, and works well.
But all of Chrome doesn't take 6 seconds to start
jlink is very much out of date. jpackage is the new way to do it. I use jpackage for all of my builds.
Pointing to the 2 most recent projects I have, my Microsoft Paint clone starts up ~0.25 secs, and my Checkers clone is the same.
You're right, embedding a JVM might actually be easy. It's more that I don't target JVMs
A very good alternative is Wails:
6-10 MB single binary
use any frontend JS tech you want
Go functions are automatically exposed to the JS side, so you don't need to think about making endpoints; it creates and uses a websocket between the binary and the UI
uses WebView2 and not Chrome
Sounds like Tauri, which is built on Rust instead of Go.
> By using the OS's native web renderer, the size of a Tauri app can be less than 600KB.
Well, it's possible that we can only create a serious application-aware OS security layer by forcing people to write an emulator for some random VM... But I'd expect it to be easier to go and write application-aware security on your OS.
Unfortunately I am having to support like 3 OSes.
4 if you count the Linux kernel of 2020 as distinct from the Linux kernel of 2024.
A userspace, in-process solution is very very appealing until everyone is on Linux of 2024+.
Is loading untrusted Python different from loading an untrusted dll/so? Unless you use some dedicated sandbox, I think they're equally insecure. I guess a bigger problem with native plugins is that they're not (usually) platform-independent.
Running FFMPEG in the browser through WASM is a game changer for me.
But isn't it really slow (like 20x or so) compared to native FFmpeg?
No, not for me. I'm not transcoding 8k videos with it or anything like that. My use case is extracting thumbnails and video color info, basically transcoding to raw 100px x 100px clips and grabbing color info from that. The user would rather it takes as long as it takes than have to charge them a subscription fee to host the transcoding in the cloud.
Yup. WASM lacks the "domain specific" acceleration available to native code, so you miss out on any hardware codec support. The same is true for OpenSSL: there is a bunch of encryption acceleration in modern CPUs that WASM can't access at the moment.
I'd be very surprised. I haven't tried ffmpeg in WASM myself, but I do know that for Stockfish, a popular and very CPU-hungry chess engine, the difference is within 20-30% or so vs. an optimized native build.
> Wasm on the Server is a fad
Why? Or rather, what's the alternative?
Regular… servers?
Wasm is currently an alternative nobody asked for.
Seems like you’re jumping from “I didn’t ask for it” to “nobody asked for it”.
What if you want to run your unmodified executable on various architectures?
The JVM kind of does that, but not nearly every language can compile to it as a target, and if you want WASM-like sandboxing you need to deal with security managers, which is no fun at all (and I've never seen it done successfully for any non-Java software).
When I’ve tried writing programs targeting wasm (in assemblyscript and rust) and platforms for them to run on, special care had to be taken to treat each language differently (how they encode strings differently for example.)
This means that not only do I need to take special care that my code can compile to wasm, but the platform devs (also me, in this case) needs to take special care to support a variety of different design choices in various wasm toolchains.
I’d rather just use SELinux containers and let the OS handle security.
Maybe Firecracker VMs like AWS lambda does.
That's of course an option if you're fine with your deployables being architecture- and OS-dependent, and very often that's the case.
But for when it's not, I think a platform-independent and language-agnostic bytecode standard is a valuable thing to have.
In the extreme scenario where you want to run arbitrary untrusted code on arbitrary machines, that would be useful, but wasm isn’t a solution for that.
If I need to specifically support how certain languages compile to wasm (meaning I don’t support arbitrary wasm) then what’s the point?
It’s just Java applets again.
Web Assembly has (at least in the past) a business problem.
In the past, in nearly all prior discussions on this, the greatest proponents of Web Assembly were developers who wanted to bring their technologies into the browser because they hate JavaScript. That is a horrible business case: it's more effort than any value returned, and no user will ever care.
Worse, it won't ever work if you intended it to be a JavaScript replacement because it cannot integrate into the interaction of the surrounding page, because it is a sandbox without compromise. This line of wishful thinking instills false hope and just pisses off everyone else, which slows adoption among other languages. The Web Assembly effort has been very clear about this from the very beginning, but people believe what they want to believe even after this has been clarified dozens of times.
There are absolutely valid business cases behind Web Assembly though, here are some:
* circumventing iphone restrictions
* desktop application portability
* security
* partial docker alternative
* promoting adoption of and access to applications written in less popular languages
> In the past in nearly all prior discussions on this the greatest proponents for Web Assembly were developers who wanted to bring their technologies into the browser because they hate JavaScript.
I couldn't speak for "nearly all prior discussions", though I don't doubt there were many devs excited that they could bring their favourite language into a browser.
But if I remember correctly, WASM developed pretty directly out of asm.js, which was about bringing high performance code to the web. Yes, it was compiled from C, but not because people hated JS, but because writing the asm.js subset was awful, and writing C was a better way to target the low level high performance virtual machine.
Sure, there were plenty of existing libraries in C which were leveraged (e.g. the demo of Unreal running in the browser), but "I can compile a Python interpreter into a webpage" did not seem to me to be the reason for WASM, just a frivolous side-effect.
> Worse, it won't ever work if you intended it to be a JavaScript replacement because it cannot integrate into the interaction of the surrounding page, because it is a sandbox without compromise
That is a pretty bold statement considering that Blazor actually does allow you to ditch JavaScript and very much integrates with the surrounding page.
It is true that for manipulating the DOM, Blazor internally needs to go through JavaScript, as there are no direct WASM-to-DOM bindings (yet).
But you, as a developer, can develop sites that run their interactivity using WebAssembly instead of JavaScript.
Why would someone use this for desktop applications if you aren't obligated to target a web browser? The only benefit this brings is access to browser rendering, which is a pretty terrible desktop experience (see e.g. slack, spotify).
Portability?
In some situations it is harder to support different desktop builds and this might unify that.
I don't think it is the strongest argument but at least on the margins it is an option I can see teams in some places choosing.
I think this is the strongest argument, but the sticking point is still the interface actually integrating with the OS environment well. Otherwise Java/C# would have taken off as a de-facto portable solution decades ago (and, in fact, this seems to work pretty well for stuff like games where native integration isn't a factor—most of my favorite games these days run on some sort of managed code, and in fact modding would be much more difficult without this).
Granted, at least the browser code uses native widgets, but the look/feel and layout still tend to be way off.
Not sure why you claim it's a "terrible desktop experience"? What's wrong with it, please explain.
I already answered re: slack above, but spotify is just slow, ugly, and doesn't conform to native interface expectations. Granted, this is also largely true of iTunes, but the issue is much less flagrant there.
>ugly, and doesn't conform to native interface expectations
These are not strictly a result of targeting browsers for desktop applications, it's strictly down to design decisions that have nothing to do with how its implemented. I've seen plenty of wonky desktop/native applications that took way too many liberties with design whimsy that end up being a worse user experience than any webpage.
I also can't complain about Slack being slow because it doesn't seem slow to me and I'm part of a very large organization. I also use VS Code which is based on browser tech, and it's working really well. And I love that VS Code also works in the web applications I create. YMMV.
> These are not strictly a result of targeting browsers for desktop applications, it's strictly down to design decisions that have nothing to do with how its implemented.
It's not "strictly" down to this; even if you want to implement interfaces that fit in with native ones, web browsers simply don't expose many native features via the DOM/CSS. It's a question of actual capabilities, not of this hypothetical design process that doesn't care about native integration.
(Of course, there will always be terrible native interfaces. Arguably Apple is the worst offender here!)
>even if you want to implement interfaces that fit in with native ones, web browsers simply don't expose many native features via the DOM/CSS
In an application like VS Code, or anything built on Electron or other similar frameworks, the web browser actually can have access to anything and everything a desktop application has. Electron is a fusion of a Chromium web browser with Nodejs, so you can call Nodejs functions from the browser, and if you really need OS API access, you can also have C++ addons for Nodejs that can do the work and the browser interface can be used to call those functions.
The web browser is simply the user interface, it communicates with a nodejs back-end which can definitely also access OS level APIs if you really want it to.
So yes, the capabilities are there, you just didn't know about them because you think it's simply a web browser, when it's much more than that.
And I know for a fact that you're wrong, because I worked on a desktop application in 2006 that used Internet Explorer as the user interface (Windows was our target, for reasons), and it had a C++ "back end", which was used for burning DVDs (the first legal DVD burning application). The browser could call C++ functions that we chose to expose to the browser to do all sorts of OS-level things. There was nothing that the browser front-end couldn't access at the OS level if we chose to expose it to the front-end. So I know for a fact that you're wrong about your assumptions about desktop applications that use a web browser as the front end.
Native interface expectations are only really a thing on MacOS at this point. I agree that they are often slower, but "ugly" is an opinion, one that I happen to disagree with. Native widgets feel primitive and dated to me.
> Native widgets feel primitive and dated to me.
Have you tried using a different theme?
I use MacOS so I don't think that's possible (unless you mean "dark mode"). But in any case, I am not talking about colors but general fidelity.
> Native widgets feel primitive and dated to me.
Compared to what? I honestly don't know what other people consider state of the art interfaces if native apps are excluded.
Compared to the web. I consider apps like Linear to be ones spearheading UI paradigms at the moment.
That app doesn’t look novel in any way.
Sure, if one is easily satisfied and has low expectations, it's perfectly satisfying!
Slack is terrible? In what sense?
The client was specifically designed for browsing web pages. It has all kinds of features for it. A chat client would be quite different: the file menu would load and save chats, and the context menu would have chatty things in it. You have to close the application to lose any application state. We've kind of forgotten how nice it is to have a desktop application. 30-35 years ago function keys were cool; I can't remember the last time I used one. I also can't remember the last time I used the top menu with alt keys. The browser also has to limit functionality for safety reasons.
How do you rename files?
I mean, it's slow, it uses about ~500x more memory than my irc client does, the interface doesn't meld with the rest of the OS, the options for end-users to actually customize how the app works are direly limited... I'm probably missing gripes but thankfully I haven't had to use it for a couple of years.
It's hard to list all the differences. Say, file association is quite useful.
But you can do file association on the web https://developer.mozilla.org/en-US/docs/Web/Progressive_web...
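For instance, the (Chromium-only, still experimental) File Handling API lets an installed PWA register for a file type via its manifest. A sketch with made-up names:

```json
{
  "name": "Example Notes",
  "file_handlers": [
    {
      "action": "/open-note",
      "accept": { "text/plain": [".txt"] }
    }
  ]
}
```

Opening a matching file from the OS then launches the installed app at the `action` URL.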
Slow, memory hog, and incredibly bloated for what it does (not much different to what IM clients did 25 years ago on 1/100 the resources).
You are not wrong, but it solves another really hard business problem: Hiring and retaining good engineers.
But Blazor seems to do a fine job replacing JS with C#?
If you really want to go all-native JS because it has advantages, sure, you can do that. But what I see are bloated SPAs written in a language not designed for large software, based on an environment that likes to change and break things. It's abstractions upon abstractions to get a hold of this mess.
I don't see why WASM couldn't compete.
Blazor is doing a fine job being a home for WebForms and Silverlight refugees, how far it will stay relevant remains to be seen.
On my job I have zero reasons to suggest it, given the split between FE and BE teams.
Organisations always move slower than technologies. And if Microservices show anything, it is that sometimes you just need a technical solution for a human problem.
I understand your lack of desire to touch anything remotely resembling frontend. Yet your hostility towards Blazor perplexes me, as for me the hot garbage that is JS is a primary reason to use it.
I have been using Microsoft technologies since MS-DOS 3.3, and Blazor wouldn't be the first great thing that eventually goes south.
I belong to the group of people that knows Web Forms before .NET 1.0, went through Silverlight, XNA, WinRT, UAP, UWP,...
One thing that Web Forms and Silverlight taught me is that working directly with browser tech, regardless of how bad it may be, is much better than debugging framework and VS interoperability code.
For the same reason I master C: regardless of my opinion on it, and its bad influence on security, I have a much easier time on UNIX clones than I otherwise would.
Are there any good Blazor websites out there?
I'm not sure if I've ever encountered one in the wild, but to be fair I wasn't really looking.
Blazor is peak corporate software. It's fine for internal apps but a nightmare for users that aren't forced to use something (i.e., not internal apps). Blazor WASM in particular is the slowest web framework around, the layout is always broken, and even websites created to showcase Blazor are riddled with layout bugs the moment you use them on mobile.
React and other JS-native frameworks, as much as they are looked down on by backend (especially dotnet) devs, are miles ahead in terms of user experience and even actual development experience (good luck doing more than forms and basic stuff on Blazor WASM, for example).
It's for c#/dotnet devs that have a single hammer and hate learning new things so they will use csharp everywhere.
As to your point, can you show me a single user-facing, non-demo, non-Blazor-related website that actually runs on Blazor (especially WASM Blazor)? I'm curious to see why you think it's actually replacing, or capable of replacing, JS on the front end.
I really don't see much of a value outside the browser, other than an avenue for startups to resell yet another bytecode based platform, as if we haven't had enough since UNCOL (1958).
The ongoing attempts to bring back application servers, but with Kubernetes, WASM and plenty of YAML, is a kind of tragic irony.
And on the browser, if one needs performance it is better served with GPU code than WebAssembly, other than bringing existing libraries into the browser.
At least we got the revenge of plugins, Flash, ActiveX and Java applets, running back on the browser thanks to WebAssembly based implementations.
> if one needs performance it is better served with GPU code than WebAssembly
What? No! Would you suggest replacing e.g. the Linux kernel (parts of which are very performance critical) with GPU code too?
WebGPU is great for, well, GPU-like compute! That's not nearly all performance-critical compute. One example: Stockfish, a top chess engine, is exclusively CPU-based, and runs perfectly in WASM.
> At least we got the revenge of plugins, Flash, ActiveX and Java applets, running back on the browser thanks to WebAssembly based implementations.
Couldn't disagree more. Flash, ActiveX and Java were unpopular because they all came with their own weird/non-native-feeling GUI toolkits and UI paradigms (e.g. breaking right clicks and text selection, Ctrl+F etc.). Flash and ActiveX were also closed source and only available on some platforms. ActiveX was also very badly sandboxed on top of all of that.
WASM is none of that. The only thing you'd notice e.g. about a site bringing its own media codec, game engine etc. in WASM instead of JavaScript is better performance.
Linux kernel isn't supposed to run in the browser.
Indeed WASM is nothing like yet bytecode being capitalised by startups, in search of the next Java goldmine, by folks that enjoy bashing about it. /s
App servers bad, Kubernetes with WASM, "oh boy that is soooo cool!".
I think many people like the idea of bytecode, but not the implementation of Java/the JVM (especially if they have an existing codebase or don't want to start a new one in Java), and I think that's pretty fair.
"Write once, run everywhere" is a pretty important goal to achieve. When a project has to be written for one platform and then ported to others, some ports - or all of them - turn out to be pretty bad.
Essentially the program only runs well on one platform. The ports may also turn out to be a significant burden, depending on the size of the project, which usually grows over time.
Take Adobe as an example. Its Linux port was pretty much unusable, and only many years later was it revealed that only 2 programmers were responsible for maintaining the codebase.
And just like multiple times since 1958's UNCOL, WASM's write-once-run-anywhere will reveal itself as not really true, especially when going beyond bare-bones computations, using multiple runtimes and competing WASI implementations that will never make it to the browser and are cloud-vendor specific.
> “While obviously C/C++ and Rust came to the ecosystem early on, other languages seemed to slowly put a ‘toe in the water’, first having limited support and then gradually investing more as usage takes off.”
This is the biggest limitation for me. I like WASM fine, but anytime I want to use it I have to turn back to Rust. I like Rust fine, but without support in other languages, it's not necessarily worth diving into.
The good news in that area is that WasmGC (shipping in Chrome and Firefox today) has allowed very good Wasm support to be shipped for many more languages, including Java, Kotlin, Dart, Scheme, OCaml, and more.
What about dotnet? IIRC it was even there before Rust?
The first .NET Blazor "WebAssembly" was the .NET runtime ported to WebAssembly, but developer code would actually be the same kind of IL (Microsoft's Intermediate Language) that .NET runs on Windows and Linux.
Unlike on those platforms, however, the IL was interpreted by the ported .NET runtime. Only the runtime itself was actually running as WASM.
To this day this is still the default, but now you can use a compiler to compile from IL to WASM, and then run full WASM code in the browser. This toolchain is a bit slower on build, but the code will run much faster.
Just to add to this answer, here's [1] how to do .NET with AOT compilation, without needing Blazor.
[1] https://devblogs.microsoft.com/dotnet/use-net-7-from-any-jav...
Such an app needs to download a megabyte of runtime. This isn't tiny. Yet, for corporate apps I think it's okay? Apps start reasonably quickly, although not instantly, maybe 2 seconds.
It’s just not very useful as you can’t interface with the external world much (yet). System calls are still very limited and even things like opening a UDP socket are highly experimental. So I’m excited for it but there’s a ton of stuff missing to actually make it useful for most real world use cases.
WASM doesn't interest me for my development because it comes with what I consider large downsides without giving me any substantial benefits.
That's not to say that I don't see any benefit in it. I can see use cases for it, they just aren't use cases that I have.
I haven't been keeping up, can I write webpages that replace all javascript with wasm yet?
The assumption is that JavaScript is bad and that the solution is not to fix it. asm.js was another example.
asm.js was the opposite of an alternative to JavaScript: it was a performant JavaScript subset used as a compilation target by Emscripten and similar toolchains.
WASM is the spiritual successor to that, getting rid of the (in retrospect quite hilarious) "embedded backwards compatibility layer".
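For flavor, an asm.js-style module was just annotated JavaScript, so it ran anywhere JS did; engines that recognized the `"use asm"` prologue could ahead-of-time compile it. A hand-written sketch (real asm.js was machine-generated, not written by hand):

```javascript
// A minimal asm.js-style module: the |0 coercions tell the engine
// everything here is an int32, making ahead-of-time compilation possible.
function AsmAdd(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;               // parameter type annotation: int
    b = b | 0;
    return (a + b) | 0;      // result type annotation: int
  }
  return { add: add };
}

// In any engine, even without asm.js support, this is plain JavaScript.
const mod = AsmAdd(globalThis, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3)); // 5
```

That fallback-to-plain-JS property is exactly the backwards compatibility layer WASM dropped.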
You don't compile to js because js is slow.
edit: for the record, I think having only one doc type (html) and only one scripting language (js) in the browser is a terrible idea.
I don't know what the other doc types and programming languages should be but performance alone seems like a poor idea. A compile target seems even worse. :)