WorkerDom – The Same DOM API and Frameworks You Know, but in a Web Worker
I worked on a similar project a few years ago: https://github.com/canjs/worker-render. It could render jQuery apps in a web worker; it was pretty neat.
The problems I ran into (and I suspect this project will run into many of the same things) were:
1. Events are synchronous, so to make them async you have to preventDefault them, send them to the worker, and then send them back and re-dispatch them (if the worker didn't preventDefault). This works OK for many events like 'click', but not at all for things like touch events.
2. The debugging experience in the worker is not very good. Things you take for granted like being able to do `document.querySelector('.app')` and get a pretty-printed DOM object in the debugger do not work when that object is a fake DOM-like object.
3. There are a lot of DOM APIs and they continuously grow. Trying to implement everything is impossible. Many things (like the events described above) can't be implemented 1:1. So it's a constant game of whack-a-mole.
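The event round-trip described in #1 can be sketched roughly like this (the `serializeEvent` helper and message shape are made up for illustration; the real projects differ in detail):

```javascript
// Events can't be sent to a worker directly, so the main thread copies the
// fields it needs into a plain, structured-cloneable object first.
function serializeEvent(e) {
  return {
    type: e.type,
    clientX: e.clientX,
    clientY: e.clientY,
    targetId: e.target && e.target.id,
  };
}

// Main thread: cancel synchronously (the only moment preventDefault works),
// then forward the copy to the worker. The worker decides whether to
// "preventDefault"; if not, the main thread must re-dispatch a clone.
function forwardEvent(worker, e) {
  e.preventDefault();
  worker.postMessage({ kind: "event", event: serializeEvent(e) });
}
```

The awkward part is that by the time the worker answers, the original event is long gone, which is why re-dispatching breaks down for gesture-driven events like touch.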
Of these problems, #2 is the biggest. Devtools has a lot of really nice integrations with the DOM that you just lose when you're not using the actual DOM. So users have to decide if the tradeoffs are worth it. AMP might have an advantage in this regard, since you literally have no other choice: use their worker DOM or you can't run your app at all.
When I decided to take a stab at something similar, I settled on a different approach: starting from an existing asynchronous UI architecture, React Native, instead of just trying to proxy the DOM: https://github.com/vincentriemer/react-native-dom
I think it works better because the original framework wrestles with the same limitations you mention (though an upcoming rearchitecture doesn't), and while it is by no means perfect, some early results are promising: https://rndom-movie-demo.now.sh
So if I'm reading it right, it looks like it's trying to replicate the DOM interface almost exactly (but in a web worker)! Which means that any UI library should be able to plug into this system and run mostly off the main thread for most of the application's life, bringing JS to parity with a lot of "native" development, where you do most of your work on background threads and only touch the UI thread when you want to do UI things!
It also reads like it might have some kind of virtual-dom implementation to kind of optimize the actual renders needed in the UI thread? (although I'm not very sure about this part).
But this looks incredible from a first glance!
Doesn't the added vdom lead to redundancy?
No, in fact it's likely very beneficial. Virtual DOM is meant to avoid the overhead of interacting with the real DOM (reading and writing to the DOM is slow). Being inside a web worker means that you're still interacting through the DOM, just via a web worker now. So the cost of reading and writing to and from the DOM is still present, plus there's the added overhead of communicating with the UI thread. So vdom in this case makes a meaningful difference by allowing the thread to get more work done in a shorter amount of time.
If you, for instance, blew away the whole DOM and re-rendered it (e.g., setting `innerHTML`) on every state change, the cost is going to be far higher than just tweaking the few things that might need tweaking, like on a keypress. If you're not keeping a virtual DOM around, there's no way to know what the diff is between your new and previous state to be able to make those tweaks.
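A toy illustration of why keeping the previous state around matters: diffing two flat child lists and emitting only the minimal patches, instead of re-rendering everything (this is a generic sketch, not how WorkerDOM or any particular framework actually diffs):

```javascript
// Diff two flat lists of text children and return only the patches needed,
// instead of blowing the whole list away and re-rendering it.
function diffChildren(prev, next) {
  const patches = [];
  const len = Math.max(prev.length, next.length);
  for (let i = 0; i < len; i++) {
    if (i >= prev.length) {
      patches.push({ op: "insert", index: i, value: next[i] });
    } else if (i >= next.length) {
      patches.push({ op: "remove", index: i });
    } else if (prev[i] !== next[i]) {
      patches.push({ op: "replace", index: i, value: next[i] });
    }
  }
  return patches;
}

// A keypress that changes one item yields one patch, not a full re-render:
diffChildren(["a", "b", "c"], ["a", "x", "c"]);
// → [{ op: "replace", index: 1, value: "x" }]
```

Across a worker boundary this matters twice over: fewer patches means both less DOM work and fewer/smaller messages to the UI thread.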
I know how VDOMs work. But WorkerDOM adds an extra one on top of the one current frameworks already bring.
So now there are two VDOMs
WorkerDOM has a DOM representation, but I wouldn't call it a "virtual DOM". It's just propagating mutations back and forth. It wouldn't, for instance, have the ability to reorder nodes in a list in linear time (like React does with the key={} prop). When a change takes place, it's not diffing anything, it's just mapping one representation of the DOM onto another. If you set innerHTML in the worker, it would set innerHTML in the main thread also.
Reading and writing are not slow; your queries are. If you don't cache or optimise your queries, then any operation is going to be slow. With a VDOM you are just exchanging queries for cached DOM objects, but if you don't optimise your updates on those, they are bound to be as slow as any other. Also, a VDOM is immutable and prone to generating garbage.
React, Vue and other modern frontend frameworks already rely on a vdom implementation.
That's what I meant with redundancy.
I see. Well, I was thinking this project might evolve in a direction that React etc can offload VDOM computations to a worker thread.
WebWorkers: Same great JavaScript, now with concurrency bugs!
Maybe you're new at this, but at this point I've learned not to laugh. SPAs, Node.js, VDOM, transpilers, and many other things sounded absurd at one point.
Just treat a web worker like you would a remote REST API.
Except it’s actually local, and shouldn’t deal with sensitive data.
There's no shared memory between WebWorkers and the main thread. Communication happens through message passing, and zero-copy messages are also available.
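The "treat it like a remote API" advice can be made concrete with a small request/response wrapper over postMessage. This is a generic pattern (the message shape with `id`/`payload`/`result` is an assumption, not part of WorkerDOM):

```javascript
// Wrap postMessage in a Promise-based request/response protocol, so calling
// into a worker feels like calling a remote API. Each request gets an id so
// replies can be matched to the caller that's waiting on them.
function makeClient(worker) {
  let nextId = 0;
  const pending = new Map();
  worker.onmessage = (e) => {
    const { id, result } = e.data;
    const resolve = pending.get(id);
    if (resolve) {
      pending.delete(id);
      resolve(result);
    }
  };
  return {
    request(payload) {
      return new Promise((resolve) => {
        const id = nextId++;
        pending.set(id, resolve);
        worker.postMessage({ id, payload });
      });
    },
  };
}
```

For large payloads, zero-copy hand-off would use the transfer-list argument of `postMessage` (e.g. moving an ArrayBuffer instead of cloning it).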
Shit talking JavaScript is pretty tired. Keep it to Reddit.
If this is exposing DOM operations in a Worker and then replicating them back to the real DOM in the main thread, what happens if 2 Workers try to make incompatible changes to the DOM at the same time?
I imagine one of these DOMs is "fake", i.e. it doesn't really have access to the UI. After all, the DOM is a generic API for working with structured, node-based documents. Changes to the DOM would still need to be serialized in order to be sent to the UI thread.
Yes I know. Which is why I asked what happens if two Workers try to make incompatible changes at the same time. How does this library resolve conflicts when applying the changes from the fake DOM to the real one?
In fact, this doesn't even require 2 Workers, you could get a conflict with a single Worker if you're also making DOM changes on the main thread.
Without knowing the specifics of this project, more than likely they do not attempt to solve the problem you describe. Simply don't make changes to the real DOM directly, and don't point 2 workers at the same subset of the actual DOM. Each worker owns its own DOM (most likely the entire page).
This is like asking what happens if two different micro-services try to make incompatible changes to the same database.
Or what happens when two different users try to make incompatible changes to the same Google Doc?
Or what happens when you try to git merge a commit that has incompatible changes?
It’s your code, so it’s in your control what happens, and also in your control how to react to what you can’t control.
You can write code that avoids conflicting manipulations (let’s say all the dom manipulations only ever happen on one thread) or you can write code that handles the case... It’s a case by case kind of thing, there is no one answer.
All of these cases have a well-defined conflict resolution mechanism. All of these cases except for Google Docs explicitly have a way for the client to be notified that their requested change failed (I believe Google Docs just defines the editing operations such that a conflict can always be automatically resolved).
The DOM API does not have any way to even notify the client that their DOM manipulation failed.
I would like to imagine that this interfaces with a Display Locking[1] shim in order to batch DOM updates. In all likelihood this isn't true and there are probably a few race conditions.
1. https://github.com/chrishtr/display-locking/blob/master/expl...
Very much appreciate the work
What's AMP's goal here? Is it to have untrusted third parties write their own UIs, but still run the whole site on the Google domain? I hate to be cynical, but if so, that's the same old walled-garden approach to further centralize the web.
It would be far easier to just rate the page performance specs while indexing, and provide a free google CDN for heavy assets to protect privacy so they can preload most of them
This does look exciting for things like embedding small components from untrusted third parties into the middle of an article or page though
Wow, the contortions we go through trying to make HTML into a full-blown native application platform, instead of writing native applications.
I'm impressed, but good god do we spend a lot of time reinventing the wheel within the limitations of browser engines.
You're probably right, but there are no alternatives, so people build what they want with what's available.
I'm no expert in native or web development, but I don't know of any native GUI framework that looks as good as browser rendering does, is free, is easy to distribute, and secure.
Like QML from Qt?
Native applications can be rejected from app stores, have their revenue siphoned off 30% at a time, and have a high barrier of entry. It's completely reasonable for an alternative ecosystem to evolve, with a different set of tradeoffs.
We used to firmly believe in what you said too. However, we've seen advantages to the open web:
1. You're not beholden to the app stores' whims.
2. It literally is "write once, run everywhere"
3. Webassembly & WebGPU add performance & language independence
For most cases, the web is a viable, nay superior alternative.
For cases with special sensor access & ultra high speed (rich games, video encoding), native apps are the only way.
It's true. I'm grateful though whenever I need to use a computer that's not mine and need access to stuff.
And the owner of the computer you're using is probably grateful you simply went to a website rather than installing some unknown executable on his computer.
I'm assuming you've never had to write a cross-platform native app...
Assumption is the mother of all fuc$ups...
Serious question: Could not a similar thing be implemented by running the "main thread" in an iframe using the browser's own DOM and the postMessage API? It might lack the optimizations of a virtual DOM, but would probably require a lot less code to achieve.
Any idea when the jsconf us videos are planned for release? This work was apparently presented there. [1]
[1]: https://speakerdeck.com/cramforce/workerdom-javascript-concu...
What are the performance implications of this? Does the proxy employ a VDOM cache? How does this compare to running the DOM manipulations natively inside the main thread?
I don't like this. It must be slow and bloated, because it is a DOM written in JS rather than a native, optimized DOM implementation. And why would anyone need that?
Correct me if I'm wrong, but if you work with the DOM and modify/read properties, that leads to DOM thrashing and re-rendering. If you do this 100 times per event handler, it will become slow.
If you do all that work in a worker and then only update the real DOM once, it will be much faster.
Render/paint is the slowest part in a JS application.
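The 100-updates-per-handler problem is essentially a batching problem: coalesce all the mutations and apply them in one pass. A minimal sketch of that idea (a generic pattern, not WorkerDOM's actual mutation transport):

```javascript
// Collect mutations as they happen and flush them as one batch, once per
// event-loop turn, so the receiving side (the UI thread, in WorkerDOM's
// case) applies a single batch instead of 100 separate updates.
function makeBatcher(apply) {
  let queue = [];
  let scheduled = false;
  return function mutate(m) {
    queue.push(m);
    if (!scheduled) {
      scheduled = true;
      queueMicrotask(() => {
        const batch = queue;
        queue = [];
        scheduled = false;
        apply(batch); // one flush for the whole turn's worth of mutations
      });
    }
  };
}
```

In a worker setup, `apply` would be a single `postMessage` carrying the whole batch, which is also what keeps the messaging overhead per mutation low.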
Unfortunately, in my current app, the actual issuing of DOM updates seems to be what's blocking things. Inserting large tables takes some time.
That's where stuff like batch inserting the elements (split your table up into groups of 100 elements, and add one every so often to keep the UI smooth), and virtual-rendering (only show the part of the list that is actually visible) are going to be the only real solution.
Oh, and making your DOM simpler if possible, but normally there's not a ton of gains to be had there in my experience.
There are a lot of tricks you can try, but even rendering the first set of visible rows can take more than a 60fps time window will allow.
Don't pick a fixed value for the batch size; render until your time is exhausted and then wait for the next batch. Yes, this'll be slower overall, but you ensure that the app stays responsive regardless of device or CPU availability.
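That budget-until-exhausted idea can be sketched as a tiny scheduler. The clock and rescheduling hook are injected here for clarity; the 8 ms budget and `renderRow` callback in the usage comment are assumptions, not a real API:

```javascript
// Render rows until the time budget for this frame is spent, then yield and
// resume on the next frame, keeping the app responsive regardless of device
// or CPU availability. Always renders at least one row per step so it makes
// progress even on a very slow machine.
function renderInBudget(rows, renderRow, budgetMs, now, schedule) {
  let i = 0;
  function step() {
    const start = now();
    do {
      renderRow(rows[i++]);
    } while (i < rows.length && now() - start < budgetMs);
    if (i < rows.length) schedule(step); // more to do: wait for the next frame
  }
  if (rows.length > 0) step();
}

// In a browser you'd wire it up roughly like:
// renderInBudget(rows, addRowToTable, 8, () => performance.now(), requestAnimationFrame);
```

The trade-off is exactly as described: total render time goes up slightly, but no single frame blocks long enough to be felt.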
That seems like it would cause a lot of shifting content as the table gets repeatedly resized due to adding rows.
The only way I see that working out nicely is if you can know the height of the table in advance, which is generally not the case.
Are you recommending an app-imposed rendering timer (say 15ms), or is there a built-in API for opportunistic scheduling of DOM updates?
You might want to check out this thread from last week[0] about scheduling tasks when the event loop is idle. The author also published a library for it[1]
My only concern with this is: with a web worker, my understanding was that any dependencies of the web worker would be loaded separately from the actual UI thread, which means you could potentially be loading your JavaScript twice if the same code is used in both the UI thread and the web worker thread.
At least, that is how it worked a few years ago.
Link to blog post: https://amphtml.wordpress.com/2018/08/21/workerdom/
tl;dr: the aim is to bring scripting to AMP pages
I don't quite understand how busy your page must be if you feel that you're held back by the performance of one thread with the DOM.
You know, DOM isn't really for video games or realtime data visualization.
I think this is more useful from the perspective of: if you have some busy task and you need to get the data back into the DOM, what can you do? You either need to build a messaging layer, which was the de facto way, or use this. I have a few projects that could benefit from this; being able to put a progress bar in the DOM and have the worker update it through a clean API is pretty neat.
Often though, a better pattern for that is to have your "compute thread" not know anything about the UI - it just sends back packets of result data, or serialized model updates, which the view-model layer back in the "UI thread" uses to rerender.
If React's virtual DOM diffing is expensive enough to be a bottleneck in some applications though, I could see an advantage to moving that off thread...
There's been some experiments with moving React's logic into a web worker, but the main one I know of was a couple years back : http://blog.nparashuram.com/2016/02/using-webworkers-to-make... . Would be interesting to try updating that.
I don't see how that's different from having 'cooperative multitasking' in the JS thread unless you are bound by DOM. You have to send data to the UI thread once in a while? Well why can't you let the UI update in the same interruptions? Of course, that will introduce delays, but if they are noticeable to you then maybe you have a DOM tree that's too heavy in the first place?
I'm not saying having the lib is bad, but I don't see a justified use-case for it.
I could see this being used in a "pre-fetch" manner for single-page applications. For example, preparing and rendering the next page or component before the user has opened it. So when the user does click to open the next page or component, it can be displayed immediately.
The ability to perform that rendering in the background without blocking the main thread would offer some real value.
But this will consume CPU and RAM and will make the page user is browsing work slower. Especially for users with low-end hardware.
If you use server-side rendering (for example, if you use PHP on your server) then you won't need all of this and pages will load instantly without any complicated preloading.
> But this will consume CPU and RAM and will make the page user is browsing work slower.
Not necessarily -- Web Workers enable you to take advantage of multiple CPU cores. You could, potentially, offload some of the rendering work to a CPU core that would otherwise go unused.
But I agree, if you went to the extreme case, pre-rendered everything the user could click on and utilized all CPU cores, then you'd definitely create a performance problem for yourself.
I don't think it's about using the DOM for anything real-time. The unfortunate truth of JavaScript (obviously when not running in a worker) is that it blocks the UI. WorkerDOM appears to be an attempt to allow you to run existing JavaScript without blocking the UI, which seems like a great idea to me.
It's for guaranteed responsiveness
Angular can do this out of the box https://blog.angularindepth.com/angular-with-web-workers-ste...