A proposal to add signals to JavaScript
github.com
Am I the only one that thinks the vanilla js example is actually easier to read and work with?
- "The setup is noise and boilerplate heavy." Actually the signals example looks just as noisy and boilerplate heavy to me. And it introduces new boilerplate concepts which are hard for beginners to understand.
- "If the counter changes but parity does not (e.g. counter goes from 2 to 4), then we do unnecessary computation of the parity and unnecessary rendering." - Sounds like they want premature memoization.
- "What if another part of our UI just wants to render when the counter updates?" Then I agree the strawman example is probably not what you want. At that point you might want to handle the state using signals, event handling, central state store (e.g. redux-like tools), or some other method. I think this is also what they meant by "The counter state is tightly coupled to the rendering system."? Some of this document feels a little repetitive.
- "What if another part of our UI is dependent on isEven or parity alone?" Sure, you could change your entire approach because of this if that's a really central part of your app, but most often it's not. And "The render function, which is only dependent on parity must instead "know" that it actually needs to subscribe to counter." is often not an unreasonable obligation. I mean, that's one of the nice things about pure computed functions- it's easy to spot their inputs.
Why do you think this is premature memoization? This is an example, boiled down to a simple function. Do you think people just came up with the use case for this without ever having needed it?
I think standardizing signals, a concept that is increasingly used in UI development, is a laudable effort. I don't want to get into the nitty gritty about what is too much boilerplate and whether you should build an event system or not, but since signals are something that is used in a variety of frameworks, there might be a good reason for it? And why not make an effort and standardize them over time?
> a concept that is increasingly used in UI development
For a desktop app developer that's a pretty funny statement, given that the Qt framework introduced signals and slots in the mid 90s.
I am curious how many web devs think that signals are a new concept. (I don't necessarily mean the parent poster.)
While they share the same name, and are both reactive primitives, there are some fairly key differences between these signals and the Qt signals and slots mechanism.
The main one is that Qt signals are, as far as I understand, a fairly static construct - as you construct the various components of the application, you also construct the reactive graph. This graph might be updated over time, but usually when components are mounted and unmounted. JS signals, however, are built fresh every time they are executed, which makes them much more dynamic.
In addition, dependencies in JS signals are automatic rather than needing to be explicitly defined. There's no need to call a function like connect, addEventListener, or subscribe, you just call the original signal within the context of a computation, and the computation will subscribe to that signal.
Thirdly, in JS signals, you don't necessarily need to have a signal object to be able to subscribe to that signal. You can build an abstraction that doesn't necessarily expose the signal value itself, and instead provides getter functions that may call the underlying signal getter. And this same abstraction can be used both inside and outside of other reactive computations.
So on the one hand, yes, JS signals are just another reactivity tool and therefore will share features with many existing tools like signals and slots, observables, event emitters, and so on. But within that space, they also represent a meaningful difference in how that reactivity occurs and is used.
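To make the first two points concrete, here is a minimal sketch of what that automatic subscription looks like, using the Signal.State / Signal.Computed names from the proposal (the exact API surface may still change):
const counter = new Signal.State(0);
// No connect()/addEventListener()/subscribe(): simply reading counter.get()
// inside the computation is what registers the dependency.
const parity = new Signal.Computed(() => (counter.get() % 2 === 0 ? "even" : "odd"));
parity.get(); // "even"
counter.set(1);
parity.get(); // "odd" (recomputed because counter was read during the previous run)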
Thanks for the great reply! I definitely need to take a closer look.
This is an interesting topic so I tried to dive in a bit.
From my reading I understood that Qt signals & slots (and Qt events) are much more closely related to JavaScript events (native and custom).
In both you can explicitly emit, handle, listen to events/signals. JavaScript events seem to combine both Qt signals & slots and Qt events. Of course without the type safety.
For example, taken from https://doc.qt.io/qt-6/signalsandslots.html
"Signals are emitted by objects when they change their state in a way that may be interesting to other objects."
However what I think they are proposing in the article is a much more complex abstraction: they want to automate it so that whenever any part of a complex graph of states changes, every piece of code depending on that specific state gets notified, without the programmer explicitly writing code to notify other pieces of code, or doing connect() or addEventListener() etc.
What are your thoughts on that? I'd be interested to hear since I'm sure you have more experience than me.
This sounds interesting. The code examples reminded me of Qt signals but all the answers to my post suggest that JS signals would be much more powerful. Honestly, I'd need to take a closer look.
JS signals come from functional reactive programming, which is a generalization of synchronous reactive programming from the Lustre and Esterel programming languages from the 80s and 90s. I believe the first version was FrTime published in 2004.
You can think of reactive signals as combining an underlying event system with value construction, ultimately defining an object graph that updates itself whenever any of the parameters used to construct it change. You can think of this graph like an electronic circuit with multiple inputs and outputs, and like a circuit, the outputs update whenever inputs change.
> I don't want to get into the nitty gritty about what is too much boilerplate and whether you should build an event system or not
You're basically saying you want this thing, but you don't want to have to justify it
The rationale for it is the fact that multiple frameworks provide their own versions of this mechanism. The proposal is to relocate extremely popular and common functionality from framework space to the language/runtime space. The popularity of React is itself the rationale for the utility of this idea, and any terse version of the rationale is for show. Is that a good enough rationale? Maybe, maybe not, but you are shooting the messenger.
Most importantly: OP is right re: vanilla example is most legible. Reading the proposal, I have no idea what this "Signal" word adds other than complexity.
Less important: I really, really, really, really am reluctant to consider that this is something that needs standardizing.
Disclaimer: I don't have 100% context if this concept is _really_ the same across all these frameworks.
But frankly, I doubt it, if it was that similar, why are there at least a dozen frameworks with their own version?*
Also, I've lived through React, Redux, effects, and so on becoming Fundamentally Necessary, until they're not. Usually when it actually is fundamental you can smell it outside of JS as well. (ex. promises <=> futures). I've seen 1000 Rx frameworks come into style and go out of style, from JS to Objective-C to Kotlin to Dart. Let them live vibrant lives, don't tie them to the browser.
* I know that's begging the question, put more complex: if they are that similar and that set in stone that its at a good point to codify, why are there enough differences between them to enable a dozen different frameworks that are actively used?
> Disclaimer: I don't have 100% context if this concept is _really_ the same across all these frameworks.
Very nearly[1] every current framework now has a similar concept, all with the same general foundation: some unit of atomic state, some mechanism to subscribe to its state changes by reading it in a tracking context, and some internal logic to notify those subscriptions when the state is written. They all have a varied set of related abstractions that build upon those fundamental concepts, which…
> But frankly, I doubt it, if it was that similar, why are there at least a dozen frameworks with their own version?*
… is part of what distinguishes each such framework. Another part is that state management and derived computations are only part of what any of the frameworks do. They all have, beyond their diverse set of complementary reactive abstractions, also their own varied takes on templating, rendering models, data fetching, routing, composition, integration with other tools and systems.
Moreover, this foundational similarity between the frameworks is relatively recent. It’s a convergence around a successful set of basic abstractions which in many ways comes from each framework learning from the others. And that convergence is so pervasive that it’s motivating the standardization effort.
This especially stands out because the reference polyfill is derived from Angular’s implementation, which only very recently embraced the concept. From reading the PR notes, the implementation has only minor changes to satisfy the proposed spec. That’s because Angular’s own implementation, being so recent, internalizes many lessons learned from prior art which also inform the thinking behind the spec itself.
This is very much like the analogy to Promises, which saw a similar sea change in convergence around a set of basic foundational concepts after years of competing approaches eventually drifting in that same direction.
[1]: Most notably, React is unique in that it has largely avoided signals while many frameworks inspired by it have gravitated towards them.
what makes useState different than signals?!
Explicit vs implicit dependencies (useEffect vs Signal.Computed/effect) and the fact that signals, in contrast to useState, can be used outside of a React context, which I assume is a good thing.
I personally mostly prefer more explicit handling of "observable values" where function signatures show which signals/observables are used inside them.
They’re very similar, and you can definitely squint right to see them as fundamentally the same concept… if while squinting you also see a React component itself as a reactive effect. Which is all technically correct (the best kind), but generally not what people mean when they’re talking about signals in practical terms.
Signals are fine grained reactivity. React is coarse grained reactivity. Legend-state adds signals to React and I'd recommend it over Redux/zustand which we used to use.
> why are there enough differences between them to enable a dozen different frameworks that are actively used?
Because they are not in the standard library of the language? Because they all arrived at the solution at different times and had to adapt the solution to the various idiosyncratic ways of each library? Because this happen in each and every language: people have similar, but different solutions until they are built into the language/standard library?
> Most importantly: OP is right re: vanilla example is most legible. Reading the proposal, I have no idea what this "Signal" word adds other than complexity.
The aim is to run computations or side effects only when the values they depend on change.
This is a perfectly normal scenario, and you don't want to update all the data and re-render the UI of a full application tree whenever something changes.
DOM updates are the most popular example but it could really be anything.
Of course in simple examples (e.g. this counter) you might not care about recomputing every value and recreating every part of the DOM (apart from issues with focus and other details).
But in general, some form of this logic is needed by every JS-heavy reactive web app.
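As a rough sketch of that aim, using the proposal's Signal.State / Signal.Computed plus an effect() helper of the kind the proposal expects frameworks to build on top of it (effect() itself is not part of the proposed API, and label here is just some DOM element):
const counter = new Signal.State(0);
const isEven = new Signal.Computed(() => counter.get() % 2 === 0);

// effect() stands in for a framework-provided helper; it reruns its body
// only when the signals it read have actually changed value.
effect(() => {
  label.textContent = isEven.get() ? "even" : "odd";
});

counter.set(2); // isEven is still true, so the label update is skipped
counter.set(3); // isEven flips to false, so the effect reruns once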
Regardless of the implementation, when it comes to that, I'm not sure I see the benefit of building this into the language either.
>Usually when it actually is fundamental you can smell it outside of JS as well. (ex. promises <=> futures).
Excellent criterion
> But frankly, I doubt it, if it was that similar, why are there at least a dozen frameworks with their own version?*
Welcome to the fashion cycle that is JavaScript. Given a few years, every old concept gets reinvented and then you have half a dozen frameworks that are basically the same but sufficiently different so that you have to relearn the APIs. This is what I think standardization helps circumvent
A good standard library prevents fragmentation on ideas that are good enough to keep getting reinvented
> But frankly, I doubt it, if it was that similar, why are there at least a dozen frameworks with their own version?*
To answer this specifically: signals are a relatively low-level part of most frameworks. Once you've got signals, there are still plenty of other decisions to make as to how a specific framework works that differentiate one framework from another. For example:
* Different frameworks expose the underlying mechanism of signals in different ways. SolidJS explicitly separates out the read and write parts of a signal in order to encourage one-way data flow, whereas Vue exposes signals as a mutable object using proxies to give a more conventional, imperative API.
* Different frameworks will tie signals to different parts of the rendering process. For example, typically, signals have been used to decide when you rerender a component - Vue and Preact (mostly) work like this. That way, you still have render functions and a vdom of some description. On the other hand frameworks like SolidJS and Svelte use a compiler to tie signal updates directly to instructions to update parts of the DOM.
* Different frameworks make different choices about what additional features are included in the framework, completely outside of the signal mechanism. Angular brings its own services and DI mechanism, Vue bundles a tool for isolating component styles, SolidJS strips most parts away but is designed to produce very efficient code, etc.
So in total, even if all of the frameworks shared the same signals mechanism, they'd all still behave very differently and offer very different approaches to using them.
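To make the first bullet concrete, compare the (real, but heavily trimmed) API shapes; the two snippets are independent and not meant to run in one file:
// SolidJS: read and write are split into separate functions
import { createSignal } from "solid-js";
const [count, setCount] = createSignal(0);
setCount(count() + 1); // read via a call, write via the setter

// Vue: a single mutable ref object, read and written through .value
import { ref } from "vue";
const total = ref(0);
total.value++; // the proxy tracks reads and writes of .value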
As to why different frameworks use different implementations as opposed to standardising on a single library, as I understand it this has a lot to do with how signals are currently often tied to the component lifecycle of different frameworks. Because signals require circular references, it's very difficult to build them in such a way that they will be garbage collected at the right time, at least in Javascript. A lot of frameworks therefore tie the listener lifecycle to the lifecycle of the components themselves, which means that the listeners can be destroyed when the component is no longer in use. This requires signals to typically be relatively deeply integrated into the framework.
They reference this a bit in the proposal, and mention both the GC side of things (which is easier to fix if you're adding a new primitive directly to the engine), and providing lots of hooks to make it possible to tie subscriptions to the component lifecycle. So I suspect they're thinking about this issue, although I also suspect it'll be a fairly hard problem.
Fwiw, as someone who has worked a lot with signals, I am also somewhat sceptical of this proposal. Signals are very powerful and useful, but I'm not sure if they, by themselves, represent enough of a fundamental mechanism to be worth embedding into the language.
> ...but since signals are something that is used in a variety of frameworks...
...common usage is not really a justification for putting it into the language standard though. Glancing over the readme I'm not seeing anything that would require changes to the language syntax and can't be implemented in a regular 3rd-party library.
In a couple of years, another fancy technique will make the rounds and make signals look stupid, and then we are left with more legacy baggage in the language that can't be removed because of backwards compatibility (let C++ be a warning).
From what I understand, a few/many of the big frameworks are converging on signals, and another commenter said that Qt had signals in the 90s https://news.ycombinator.com/item?id=39891883. I understand your worries, and I would appreciate some wisdom from non-JS UI people, especially if they have 20+ years of experience with them.
Every framework is moving to signals, apart from React, and I'd say if this became a standard even they would. This is like Promise. It's a sensible shared concept.
I agree. But look at Preact's signal documentation -
https://preactjs.com/guide/v10/signals
"In Preact, when a signal is passed down through a tree as props or context, we're only passing around references to the signal. The signal can be updated without re-rendering any components, since components see the signal and not its value. This lets us skip all of the expensive rendering work and jump immediately to any components in the tree that actually access the signal's .value property."
"Signals have a second important characteristic, which is that they track when their value is accessed and when it is updated. In Preact, accessing a signal's .value property from within a component automatically re-renders the component when that signal's value changes."
I think it makes a lot more sense in a context like that.
>> In Preact, when a signal is passed down through a tree as props or context,
I have found that passing props makes React-like applications very complex and messy, and props are to be avoided as much as practical.
The mechanism for avoiding props is Custom events.
It concerns me to see the concept of signals being passed as props when surely signals/events should be removing the need for props?
You don’t need to pass a Preact signal as a prop to get reactivity. If you’re using Preact, signal references will make your component reactive by default, and if you’re using React you can introduce reactivity by way of the useSignals hook or a Babel plugin. (1)
React signals have become my go to state management tool. So easy to use and very flexible.
>> React signals have become my go to state management tool.
I've ditched almost all state in my React apps except state local to the component.
Custom events do all the work for passing information around the application and directing activity.
What do signals give me that events do not?
I’m also a fan of local state, but there are some cases where it makes sense for a bit of global state - mainly user context.
However you can use signals for local state as well and they work amazingly. Being able to assign a new value to a signal without having to go though a setter is a way cleaner pattern, in my opinion.
The other use case is for communication between micro frontends. It’s so nice to just be able to import/export a signal and get its reactivity. Before them, I would create a pub/sub pattern and that’s just not as clean.
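For illustration, a sketch of that micro-frontend pattern with Preact signals (the module layout and names here are made up):
// shared-state.js - a module shared between micro frontends
import { signal } from "@preact/signals";
export const cartCount = signal(0);

// checkout micro frontend: writes to the shared signal
import { cartCount } from "./shared-state.js";
cartCount.value += 1;

// header micro frontend: any component that reads cartCount.value
// rerenders automatically when it changes, with no pub/sub wiring.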
Since reactivity is not baked into JavaScript, adding it is going to add abstraction overhead. It's meant to be used if it's needed, not necessarily as a default way to work with state.
In my experience, the big benefit is the ability to make reactive state modular. In an imperative style, additional state is needed to track changes. Modularity is achieved using abstraction. Only use when needed.
> Sounds like they want premature memoization
It's a balance to present a simple example that is applicable. Cases where reactivity has a clear benefit tend to be more complex examples, which are more difficult to demonstrate than a simple, less applicable one.
I think there is room for improvement in how we explain this. The problems aren’t really visible in this small sample and come up more for bigger things. PRs welcome.
Perhaps mentioning the tradeoffs between a simple easy to explain example vs a more obvious comprehensive example. With links to more complex code bases? With a before & after?
I wouldn't be surprised if Ryan Carniato already has a perfect explanation somewhere :)
I think it’s worth avoiding a change in design when you pass some threshold of complexity. The vanilla JS approach has some scaling limitations in terms of state graph complexity, and the problem isn’t the ergonomics above and below the threshold, but the discontinuous change in ergonomics when you cross that threshold.
Indeed at a certain scale the "easy" approach ends up becoming a mess. A simple counter isn't complex enough but this is a great idea and would be a positive for the language.
Well said
I dislike both examples but find the Signals one far worse, no question.
You can do it that way, but… why? When you could just not?
I agree the initial example is easier to read, but it has problems as stated.
> Am I the only one that thinks the vanilla js example is actually easier to read and work with?
Even if that were true for this example, the signal-based model grows linearly in complexity and overhead with the number of derived nodes. The callback-based version is superlinear in complexity because you have an undefined/unpredictable evaluation order for callbacks, producing a combinatorial explosion of possible side-effect traces. It also scales less efficiently because you could potentially run side effects and updates multiple times, where the signal version makes additional guarantees that can prevent this.
I was wondering about this awkward code:
counter.set(counter.get() + 1)
One would think that proper integration into the language also means getting rid of those "noisy" setter/getter calls.
I much prefer the explicit get/set methods. MobX I think used the magic approach, as did Svelte, and I believe Svelte has realized it was a mistake. It makes it harder to reason about the code; better to be explicit.
Definitely not. I prefer the vanilla version as well.
When they added Promises to JavaScript, I bristled at the thought that I might have to start writing `new Promise` everywhere.
In practice, I can count on two hands the number of times I’ve written `new Promise`. What did happen, though, is I started to write `.then` a whole lot more, especially when working with third party libraries.
In the end, the actual day-to-day effect of the Promise addition to JavaScript was it gave me a fairly simple, usually solid, and mostly universal interface to a wide variety of special behaviors and capabilities provided by third party libraries. Whether it’s a file read or an api request or a build step output, I know I can write `.then(res => …)` and I’m already 50% of the way to something workable.
If this Signal proposal can do something similar for me when it comes to the Cambrian explosion of reactive UI frameworks, I am in favor! What’s more, maybe it will even help take reactivity beyond UI; I’ve often daydreamed about some kind of incrementally re-computed state tree for things other than UI state.
I assumed Promises were added primarily so that async/await could be added, which is where the really substantial quality-of-life improvement is. In practice you rarely need to explicitly go “new Promise” yourself.
While .then in initial promises was a great improvement over nested delegates and is fine for simple chained promises, once you start conditionally chaining different promises, need different error handling for particular chains in the promise, or want to do an early return, the code can become much harder to read and work with.
With async/await though you just write the call essentially as if it’s not a promise and can easily put try/catch around particular promise calls, easily have early returns, etc.
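For example (fetchUser, fetchProfile and defaultProfile are hypothetical names, and the two versions below are alternatives, not meant to coexist):
// With .then: the conditional branch and per-step error handling get awkward
function loadUser(id) {
  return fetchUser(id).then((user) =>
    user.isGuest ? null : fetchProfile(user).catch(() => defaultProfile)
  );
}

// With async/await: an early return and a targeted try/catch read like ordinary code
async function loadUser(id) {
  const user = await fetchUser(id);
  if (user.isGuest) return null;
  try {
    return await fetchProfile(user);
  } catch {
    return defaultProfile;
  }
}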
> In practice you rarely need to explicitly go “new Promise” yourself
However you do quite often need to use Promise.all, even when using async/await.
Also, wrapping callback APIs or containing side effects:
function sleep(ms) { return new Promise((resolve) => setTimeout(resolve, ms)); }
await sleep(100);
There are some use cases for the Promise class still, and it’s great to have that kind of control facility at hand when you need it.
Promise.withResolvers can fill this gap now (though it's not natively available in Node yet):
function sleep(ms) { const { promise, resolve } = Promise.withResolvers(); return setTimeout(resolve, ms), promise; }
I don't think that's an issue with JavaScript though. It's inherent complexity in your code's handling of concurrent operations, not incidental complexity arising from the language.
I've literally never needed to do that. What's a real world use case?
Doing multiple async tasks concurrently as opposed to in sequence. If you have never used this you have either worked on extremely simple systems or have been leaving a ton of perf gains on the table.
Or, like most people, they don't work with JS outside of the frontend where fetching multiple things in parallel is rarely needed.
How is this different from await a, await b, await c?
If each await was to a setTimeout call waiting 1000ms, awaiting all 3 would take approximately 3000ms.
If you await a Promise.all with an array of the promises, it will take approximately 1000ms.
In summary, using individual awaits runs them serially, while Promise.all runs them concurrently.
If you’re doing CPU bound work without workers, it doesn’t make much of a difference, but if you’re doing I/O bound tasks, like HTTP requests, then doing it in parallel will likely make a significant difference.
I get what you're saying, but that's just not how it works in my experience. I've got a repro in codepen. What am I missing?
https://codepen.io/tomtheisen/pen/QWPOmjp
function delay(ms) { return new Promise(resolve => setTimeout(resolve, ms)); }

async function test() {
  const start = new Date;
  const promises = Array(3).fill(null).map(() => delay(1000));
  for (const p of promises) await p;
  const end = new Date;
  console.log("elapsed", end - start); // shows about 1010
}

test();
You're starting all three promises, then waiting for the first one to finish, then waiting for the next one, then waiting for the last one. But because they were all started at the same time, they'll be run in parallel.
Whereas if you started one promise, waited for it to finish, then started the next and so on, it would take the three seconds as they won't be run in parallel.
The code you've written can be seen as a "poor-man's" Promise.all, in the sense that it's doing roughly the same thing but less clearly. It also behaves slightly differently in terms of rejections: if the final promise in the Promise.all version rejects immediately, then the whole promise will fail immediately. However, in your version, if the final promise rejects, that rejection won't be evaluated by the await (and therefore thrown) until all the other tasks have completed.
For reasons of clarity and correctness, therefore, it's usually better to just use Promise.all rather than awaiting a list of already-started promises in sequence.
In order to use `Promise.all`, you'd still have to construct all the promises without awaiting them. That seems like the whole foot-gun and cognitive load right there.
But the early rejection is a concrete improvement over the "poor-man's" version. I'm sold.
What I imagined from your initial description is that you were doing the following:
for (var i = 0; i < 3; i++) { await delay(1000); }
However (as you are possibly well aware), this line in your example is starting all the work immediately and essentially in parallel:
const promises = Array(3).fill(null).map(() => delay(1000));
So the timing of your for...of loop is that the first element probably takes about 1000ms to complete, and then the other two seem to happen instantly.
Promise.all is just an alternative to writing the for...of await loop:
await Promise.all(promises);
I guess it relies on you already being familiar with the Promise API, but I feel that Promise.all() has slightly less cognitive load to read and its intent is more immediately clear.
A strong case for preferring Promise.all() is that Promise.allSettled(), Promise.any() and Promise.race() also exist for working with collections of promises, and unlike Promise.all(), they would not be so easily reproduced with a one-liner for...of loop. So it's not unreasonable to expect that JS developers should be aware of Promise.all(), meaning there is no reason for it not to be the preferred syntax for the reasons I stated above.
Ok, I'm a believer. I misled myself into thinking there was more going on with Promise.all than there really was. I'm mildly averse to allocating unnecessary arrays. But this is mostly superstition rather than measurable performance concern.
Promise.allSettled has a poor-man's implementation too. But the others really don't have such a thing.
My impression is that Promise.all() is kind of nice, but it's really not that big a deal or important. If it didn't exist, you could get the same happy-path code behavior without really even changing the size of the calling code.
But there's nothing wrong with it really. On balance, it seems slightly nicer than the poor-man's re-implementation. In the last 5 years, I might have been able to use it maybe twice.
I often map a series of X to Y with Promise.all. For example, mapping a series of image URIs to loaded Image elements.
It is shorter to write than a for of loop, and importantly, all images will be loaded in parallel rather than sequentially, which can be significantly faster.
const images = await Promise.all(uris.map(loadImage));
You could also do it with a loop to start the promises and another one to await them. Is it just a shorter way of expressing that?
As per my other reply to you, there is a difference: a for loop will take the sum of the time of each call to fetch the images, while Promise.all will do the requests concurrently, meaning the total time will be that of the slowest request.
As per my other reply to your other reply, when I've written code that actually does `await` in a loop, it already does all the work concurrently. There's a code sample that illustrates what I'm familiar with.
On the other hand, the language designers are not random framework authors. They know what they're doing. There must be some reason why `Promise.all` exists. I just don't know what it is.
To re-iterate, I understand the difference between serial and parallel tasks. But I have also found that it's possible to do parallel tasks with `await` in a loop. So I'm still missing something.
To do parallel tasks in a loop for the situation I outlined (getting a final array of images from URIs), it would be really cumbersome, and your implementation would be pretty much equivalent to the inner workings of Promise.all.
As others have mentioned; if you aren’t using Promise.all, you are likely missing a good deal of opportunities for easier and more performant async code.
For non-rejecting promises, `await Promise.all(promises)` is basically performance equivalent to `for (const p of promises) await p;`. In case you think the second one does serial work, here's a codepen for you. https://codepen.io/tomtheisen/pen/QWPOmjp
The "inner workings" could be a for loop, with the exception of rejected promises. The only opportunity for quicker resolution is when one of the promises reject, at which time Promise.all() immediately rejects.
I think the allegations of "easier" are significantly overblown too.
You are right. I was mistakenly thinking that the alternative (naive approach) would be to write the promise-creating tasks within the for loop, but if they are not part of the loop, the functionality is basically equivalent to Promise.all.
// A -- Async and parallel
const results = await Promise.all(data.map((i) => delay(i)));

// B -- Functionally equivalent to the above
const promises = data.map((i) => delay(i));
const result = [];
for (let a of promises) {
  const r = await a;
  result.push(r);
}

// C -- Problematic and naive approach: much slower
const result = [];
for (let i of data) {
  const r = await delay(i);
  result.push(r);
}
Why does it need to be a part of the language? This could be a library. There are such libraries. They are small, so including them in your code is no big deal. Adding this to the language should not even be a goal.
Thinking that the current crop of JS UI libraries designed their signals in such a good way that it needs to become a part of the language is hubris. Signals have many possible implementations with different tradeoffs, and none of them deserve to have a special place in the JavaScript spec.
Before these libraries used signals, they or their predecessors used virtual DOM. Luckily, that didn't become part of JS, but how are signals any different? They aren't. The argument for making them standard is even worse than for virtual DOM.
Are we just going to pile every fad into a runtime that basically has no way to dispose of no-longer-wanted features without breaking the web? That is quite short-sighted.
Good points. We don’t want the wrong thing. But we want the right one!
Reactive UI won. The main thing stopping me from using vanilla JS is the absolute explosion in complexity managing state for even small sized applications. To me, any reactive framework is better than vanilla, so perhaps there is a construct missing? Now that it’s been a decade or so, we should start thinking about possible cut-points for standardization. Like with promises, this could bring down complexity for extremely common use-cases, if done right.
I think a better way to evaluate it would be: “would this proposal be used by existing reactive frameworks?”. If not, why? What’s missing? What’s superfluous? What about lessons from UI and reactivity in other languages? There’s a lot of fragmented experience to distill, but it’s a worthwhile endeavor imo.
If you want to do "the right thing" – implement your proposal as a library, AND convince people to use it on its own technical merits. Then when everyone uses it (because it's so obviously "the right thing"), you can start asking if anyone wants your library to be built into the language.
But that's not what you're doing. You're gunning for becoming the standard from the start – you are trying to convince people to use your draft implementation based on its status as a proposed standard, instead of them using it on its own technical merits.
Ditch the status, and see if anyone still wants it.
--
Yes, reactive UI won – just fine, without having signals in the JS standard. Because not having signals in the language was never an actual problem holding back reactive UI development in JS.
Your proposal does not "bring down complexity". It simply moves the complexity from UI libraries into the JS standard. In doing that, you forcefully marry the ecosystem to that particular style of complexity. Every browser vendor will need to implement it and support it... for how many decades?
And to what end? Unlike e.g. promises that are useful on their own, your proposal isn't nearly ergonomic enough to allow building reactive UIs in vanilla JS. Users will still need to use libraries for that, just like they do today. You're just moving one piece of such libraries into the standard, without building the case for why it's needed there.
--
Your proposal spends pages selling the readers on signals, but that is not what you need to sell. We already have many implementations of signals. You need to sell why your (or any) signals implementation needs to be in the JavaScript standard.
You have one tiny "Benefits of a standard library" subsection under "Secondary benefits" but it's just ridiculous. You're basically saying that we should add signals to JS because we've added (much simpler or more needed) things to JS before – is that really your best argument?
And... "saving bundle size"? You want to bless one implementation of a complex problem to save what, 5KB of this: https://cdn.jsdelivr.net/npm/s-js
Sorry, just – nothing about this makes sense to me.
I don’t have a dog in the fight. I also don't see yet any mainstream support behind the proposal. So I don’t get why it has to be so heated?
The main benefit is interop. Same with promises. You can implement all of promises with custom callbacks - in fact it’s trivial. But competing implementations don’t typically land on API compatibility simply because they’re solving the same problem. That causes a fractured ecosystem. Maybe interop could be important with signals? I think they should argue that, if so!
> Users will still need to use libraries for that, just like they do today.
Yes? But you reduce the lifting by the libs - ideally enabling a class of vanilla use-cases which can be made demonstrably improved. You could say querySelector was unnecessary because you can do it in lib. Or filter, or map. Standardization can cover std-lib like features too no?
Doesn’t mean I am in favor. I think you should always default to no unless there are strong and consistently proven benefits. But why not have good-faith arguments for what problems this will or won’t solve? For instance, if hypothetically React or Svelte had a different model that could not possibly use these signals, then that’s probably a sign it’s not good. My philosophy with proposals is balancing curiosity and honest inquiry with a grumpy defensive inquisition before saying aye. Flaming, though, is really not helpful.
> You're basically saying that we should add signals to JS because we've added (much simpler or more needed) things to JS before
> saving bundle size
Yes, I agree these are weak arguments.
I think you’re right, I don’t see how building this into the base library makes some things possible which weren’t before.
I think that the Promise API was not the actual thing people directly wanted (and on its own had no compelling reason to be added to the base library), but a standardised Promise API was needed in order to add the hugely useful async/await keywords which are unachievable without changes to the language.
I am however a big fan of a really good base library (it’s one of the things I love about working with .NET), but they should be focussing on functionality with the broadest reach (as in most encountered by average JS devs working on day to day tasks), e.g. things like better tools for working with dates and times.
> I think that the Promise API was not the actual thing people directly wanted (and on its own had no compelling reason to be added to the base library), but a standardised Promise API was needed in order to add the hugely useful async/await keywords which are unachievable without changes to the language.
In JS, the async syntax is largely syntactic sugar around chaining promises with `.then()`. Promises alone fixes callback hell and brings structure and scoping to multi-stage async operations - at least to the extent possible in such a dynamic environment as JS. There are only a couple minor additions relating to the main entry points, microtasks and interactions with the runtime, to make the main async/await experience we enjoy today. This is actually a good thing, when you can build abstractions on top of other core constructs, without affecting the rest of the language too much.
There may absolutely be analogous syntactic ergonomic constructs on top of signals, reactivity or observables - whichever is sensible - that could be layered on top to compose really powerful user-facing features in the future. So the success of promises and async is still a supporting story, in my view. A lot more research and scrutiny is needed, but it’s certainly one of the top interesting ideas.
> implement your proposal as a library, AND convince people to use it on its own technical merits.
That is almost literally exactly what happened: most major JS frameworks (except React) converged on Signals.
> Because not having signals in the language was never an actual problem holding back reactive UI development in JS.
Oh, but it did hold back reactive development. There are many limitations on current implementations of signals precisely because there's no proper support for many things in the language.
> You need to sell why your (or any) signals implementation needs to be in the JavaScript standard.
That is why it is:
- a proposal that
- calls for input from implementers, users, library developers etc.
> Unlike e.g. promises that are useful on their own,
But the exact same thing happened with promises: everyone had their implementation, there was no need for a proposal to add that specific API to the standard library. Deferred (the precursor to Promises) existed for several years before Promises. Here's the full history: https://samsaccone.com/posts/history-of-promises.html
And yet, 15 years later here we are
One good argument for standardizing Signals is that debugging them seems to be a nightmare. Imagine a deep tree of calculated signals firing off each other and you need to find the source of what started the chain reaction. Standardizing will allow devtools to develop around it.
They don't fire off each other, they simply depend on each other like functions do:
a = () => 42
b = () => a() - 1
c = () => a() + b() * 2
It isn't a bigger nightmare than debugging pure functions. The source for `c` is `a` and `b`. All signal values (as proposed) will be lexically available in the body of a dependent signal, so there's no hidden registry to navigate anyway. If in-browser IDEs want to record a call tree for an activation record, they can do that without a standard.
There is still a watch() mechanism that, from a consumer's point of view, hides the originating event of an update. Otherwise, if all you wanted was functions, just use functions.
When watch fires out of control, you need debugging tools to understand why your render() function is being invoked more often than it should.
This type of problem happens all the time in react and you need to trace upwards to find that 7 components up the chain someone accidentally included new Date() in the state that propagates down through props and re-renders everything.
Signals are basically lazy functions. You can't "just" use functions if performance is a concern, cause that's the least efficient way to keep everything consistent.
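To see the cost being described, instrument the plain-function version from upthread:
let aCalls = 0;
const a = () => { aCalls++; return 42; };
const b = () => a() - 1;
const c = () => a() + b() * 2;

c(); c(); c();
console.log(aCalls); // 6: every read of c re-runs the whole chain
// A lazy, cached signal graph would evaluate a once and reuse the value
// until something it depends on is written.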
Since watching seems(?) to be synchronous in the proposal, why do you think extra debugging tools are needed? You can breakpoint into render() with a regular debugger and look at the stack trace.
You can say this about most things in the standard library. But as stated in the motivation, there's an ongoing trend to extend the rather small standard library js offers, so you don't need to have a package for each and every common task.
You can argue about the need for this, but if we're going to extend the standard lib, then looking at what is popular is a good approach IMO.
> Before these libraries used signals, they or their predecessors used virtual DOM
Signals are not a replacement for virtual DOM.
Remember the Observable proposal?
The Observable proposal made a stronger case in my opinion, since Observables provide an interface that is functionally unique and useful for the boundaries between app logic and libraries, so a single standard approach has benefits. Signals on the other hand live right where application state binds to the ui. Is this really somewhere that people are patching together a hodge podge of libraries that need to use a consistent api? I'm not so sure.
Signals are used in every framework except React. Observables weren't. That's a huge difference.
There's the old one that fizzled out: https://github.com/tc39/proposal-observable
And there's the new one which seems to be getting implemented in node right now: https://github.com/WICG/observable
When I need to signal something across my application, I use events:
window.dispatchEvent(new Event('counterChange'));
And every part of the application that wants to react to it can subscribe via:
window.addEventListener('counterChange', () => {
  // ... do something ...
});
Anything wrong with that?
Historically, this example is the reason why the Web evolved into jQuery and from there forked into the world of Angular and React, mainly.
Event handling gets messy very easily. If you want to get deeper into it, have a look at event bubbling and propagation.
Large applications need robust event handling. This is nowadays the hidden benefit of frameworks like Angular, Vue, etc.
Believe me, you don’t want to use the standard event handling API without a framework. Adding, deleting, cloning, firing, removing, firing once, etc. on many elements can have serious unwanted side effects.
>If you want to get deeper into it, have a look at event bubbling and propagation.
In this example, the event is on the Window. There is no bubbling. It is already at the top level.
>Believe me, you don’t want to use the standard event handling API without a framework. Adding, deleting, cloning, firing, removing, fire once etc on many elements can have serious unwanted side effects.
I don't know what this means. The frameworks do not have much to do with this topic.
I think the previous comment discusses events in a general sense.
Frameworks batch changes so that updates are efficient, and in many cases they figure out the "correct" order of doing things. If you do all of that yourself in a large and complex UI, you are very likely updating the DOM less efficiently than the frameworks do, and you have very likely introduced some subtle bugs. Speaking from first-hand experience.
I use events extensively in large applications and its never been a problem. In fact they solve complexity.
What kind of side effects specifically?
I believe memory leaks to start
Memory leaks can occur when a component adds events to an element outside of the component (such as the window) and then gets removed from the DOM without removing the event handler from the window. This is solved in native Web Components by the mount/unmount methods where you can run code to remove event listeners when the component has been unmounted.
For other event listeners, they get removed when the DOM element is removed.
The frameworks do not solve this to any greater degree. They also just make everything invisible and behind-the-scenes and hard to debug due to their declarative nature, but that is another topic.
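For reference, the usual cleanup pattern in a vanilla custom element looks roughly like this (CounterLabel is a made-up example):
class CounterLabel extends HTMLElement {
  #onChange = () => { this.textContent = 'counter changed'; };

  connectedCallback() {
    window.addEventListener('counterChange', this.#onChange);
  }

  disconnectedCallback() {
    // Without this, the window keeps a reference to the handler (and to the
    // element it closes over) even after the element leaves the DOM.
    window.removeEventListener('counterChange', this.#onChange);
  }
}
customElements.define('counter-label', CounterLabel);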
Example with web components in all frameworks https://webcomponents.dev/blog/all-the-ways-to-make-a-web-co... and an other version to experiment with https://jsbin.com/yiviragiba/8/edit?html,css,js,console,outp...
Removing event listeners upon some component being removed actually sounds like a great use case for the new FinalizationRegistry API[1]
[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
This is one of those points where you need to link to some kind of data or conclusive result actually showcasing this, because you're arguing against a pattern that's been in browser ecosystems for decades. I've done this for larger applications and haven't experienced issues.
Being charitable, the best I can imagine right now that'd cause memory leaks is someone running into the old school JS scoping issues and capturing something in handlers that they shouldn't. That's not the handler itself that's the problem, though - that's the developer.
(Yes, we could rant on and on about the poor design decisions that JS has built in, but that's been beaten to death)
Are you implying that there are memory leaks in browsers' internal implementation of events? Because my take is that the problem is with "user space" scripts not cleaning up after themselves, and I don't see how that would get better by adding yet another API to be mindful of.
I believe this would be more related to something like memoizing a DOM structure in a "Live" listener that is later removed from the DOM but not garbage collected due to the reference in the event listener. As the poster mentioned, developer error -- not a fundamental language or browser implementation flaw.
If a language or browser implementation can't reclaim this unused memory thus creating a memory leak, that arguably is a flaw. It's literally a DoS attack vector that can be exploited by the untrusted scripts that run in the browser sandbox.
According to TFA, event emitters / observables cause unnecessary work when called multiple times.
The difference with signals is that the resulting value is only ever calculated when the end consumer reads the value, so you schedule render updates asynchronously from the actual writes to the signal, and whatever chain of computations the watchers perform is done just the one time during the render.
Interim values sent to the signal will get lost, so you really can't do too much interesting work in them. It's really just a fancy abstraction layer to coordinate a rendering cycle.
Signals are also just pub/sub, but with a more ergonomic api. More ergonomic because listeners are added and released automatically.
It can also be more performant, eg, say you have a computation that depends on 2 values:
`result = a ? b : 0`
Then if a is falsy, we don't need to recompute if b changes. This is achieved automatically with signals, but would require quite some code with classic pub/sub.
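A sketch of that with the proposed API (again, Signal.State / Signal.Computed as named in the proposal):
const a = new Signal.State(false);
const b = new Signal.State(1);
const result = new Signal.Computed(() => (a.get() ? b.get() : 0));

result.get(); // 0; b.get() was never called, so b is not a dependency
b.set(2);     // does not invalidate result while a is false
a.set(true);  // now result recomputes and picks up b as a dependency
result.get(); // 2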
I've been using patterns like this for more than a decade. The thing that's hard is, down the road you could have a listener which triggers another event, then another event, which comes back to the first routine, and now you've got a listen-loop that won't quit.
And it's hard to ensure that all listeners don't cause that trigger cascade.
Isn't it possible to define all listeners / publishers in a declarative way which can be compiled to catch these issues?
Yes. It is. And most pub/sub or Observer architectures and design patterns have the "loop" thing solved just fine.
In fact, the GoF spend an entire paragraph on the problem of complex update semantics in "Design Patterns" (1995), ch. Observer, p. 299.
So, while it is a real problem, it's one that has been solved (for at least 29 years)
In fact I solved a similar issue in my JS reactive micro library in just 500 bytes: https://github.com/jbreckmckye/trkl
I was not aware of trkl. Well done! I wrote rmemo (Reactive Memo) which solves the same problems & is a similar size. Different semantics though.
https://github.com/ctx-core/rmemo
Nanostores is also small https://github.com/nanostores/nanostores.
And if you only need reactivity in the browser (not server side), VanJS is small & includes reactive primitives. https://vanjs.org/
That looks great! Thanks for sharing!
Yes, I want to mention, since I was up-thread, that it is solved, but not all tooling or environments are set up for that.
When the app is small you likely don't need it. And then it grows and you absolutely do.
It's better, in greenfield to use framework/tooling that is ready for it.
Good to hear you found it solved!
> It's better, in greenfield to use framework/tooling that is ready for it.
I don't really agree. Tooling, and even more so frameworks, come with giant trade-offs. Some are "paint-in-a-corner" trade-offs. So I would caution against pulling in a framework just to solve potential future issues. So much so, that I think it is one of the top 10 things that will cause your project or startup to fail or get into serious trouble. It's really a form of "premature optimization".
I like signals as primitive but in this particular case they make it roughly about as easy to create loops accidentally as events do. I don't think this problem gets better or worse with signals.
Signals are a bit better because they only propagate when computed values actually changed. This is an implicit fixed point/stopping condition that doesn't intrinsically exist with events.
> Anything wrong with that?
It has all the downsides of the pub/sub architecture highlighted in the proposal.
It's often desirable for UI to be described in a declarative fashion, i.e. instead of where you have "do something" (set button color to red), refactoring so it becomes "is something" (button is red if state is x)
I might not be describing that well, because once you go down that road it really becomes a whole overall approach that infects the whole program (like functional reactive programming), and so it's really about how the whole flow fits together from top to bottom, and that can be very elegant.
I don't think that's the right fit for everything, i.e. in gamedev it might make more sense to just update some object's position imperatively, but for UI it tends to work pretty well.
The problem with event handling is that you don't actually know what needs to be done when the event triggers. Let's say you have 20 components: on counterChange, which of the 20 components need to be updated, and how? You can either do it the simple (and very inefficient), conceptually React-like way by rendering all 20 of your components again with the new value of counter, i.e.
window.addEventListener('counterChange', () => {
  element.innerHTML = components.map((c) => c.renderHTML(newCounterValue)).join('');
});
or you have to check the components on a case-by-case basis for (a) whether the component needs to be updated at all, and (b) what the most efficient way to update it is. And in TFA, if the counter changes from odd to odd, the label doesn't need to be updated.
Also, multiplying the number of events by the number of components can make the application go out of hand very quickly.
If the component only wants to rerender when the counter changes from odd to even or even to odd, it can cache the value and do as it pleases.
But then you have to make sure that the element caches that value. With signals you don't have to do that. Your component (or parts of it) will never be re-rendered if it never uses the signal.
This pattern is precisely what everyone's favorite "look, JS/electron can be high performance!" example uses. (VS Code).
LGTM! Smells like Redux (in a good way). But then ultimately at the root you probably want the event to update your “model”, and then that leads to an update of the “view”. This is the part where signals can be useful.
Look at legend-state, it's most definitely not Redux (in a good way IMO).
Mostly the teardown logic and inevitable memory leaks.
An alternative proposal would be some kind of auto-removed listener when it goes out of context.
The new `using` feature handles RAII just fine. This proposal is entirely unnecessary given the existence of `using` and the already-existing event listeners. https://iliazeus.github.io/articles/js-explicit-resource-man...
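Something along these lines, assuming an environment where explicit resource management (`using` / Symbol.dispose) is available; listen() and updateCounterLabel are made-up names:
function listen(target, type, handler) {
  target.addEventListener(type, handler);
  return { [Symbol.dispose]() { target.removeEventListener(type, handler); } };
}

{
  using subscription = listen(window, 'counterChange', updateCounterLabel);
  // ... the listener is active inside this block ...
} // and is removed automatically when the block exits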
Nice, but mixing TypeScript into a proposed standard is just confusing.
The standard doesn't have anything to do with TypeScript, not sure where you got that from? https://github.com/tc39/proposal-explicit-resource-managemen...
I'm referring to the link. Trying to demonstrate a proposed feature using TypeScript.
Quite a good link, thanks.
Maybe a stupid question, but isn't the memory released anyway when I close the tab? So why do memory leaks matter?
It depends on how much you're leaking, and how fast.
Best case scenario, it just slows down garbage collection a little bit, as you're holding onto a lot of references that aren't going anywhere.
On the other hand, I recall a bug in a particular version of AngularJS where component DOM nodes wouldn't get cleaned up when navigating with the router unless you manually set all of the scope values their templates used to null.
We had a data dense application with tables and what not, and you could clearly watch memory jump tens or more megabytes flipping between pages in the chrome dev tooling.
Eventually (this was a SPA intended to be used for upwards of hours at a time) the page would crash.
Not "stupid", but maybe hasty / shallow? Many SPAs are long-running, and memory leaks can accrue quickly, destroying performance.
Not a web dev person, but some "tabs" have a very long lifetime, e.g., webmail clients, Whatsapp, etc.
The problem is if you don't close the tab, the memory leak can cause that one tab's memory to balloon to 1GB or more because memory isn't being released, when that tab should only be consuming 50MB.
Have you ever seen Chrome’s “aw, snap!” screen (if you use Chrome)?
More often than not that’s due to a memory leak.
If you mis-type ‘countenChange’ it could be quite frustrating!
If that truly is your reason to choose or forego a software design pattern, then what the h. are you doing with JavaScript?
In 2024 we have linters and other static analysis tools for catching these kinds of things right in the IDE.
> Anything wrong with that?
Well, don't events only bubble upwards? You need to know the exact element if it is not on a lower level in the DOM tree.
Events were too messy, so I wrote a small pub/sub message queue type of thing. Anyone anywhere in the DOM can subscribe to messages based on subject regexes.
Makes things a lot easier, especially when I added web components to wrap existing elements so that publishing and subscribing is done with attributes, not js.
You can also fire events on pretty much any object, essentially creating a channel where the message bus queue is the vm event queue.
> You can also fire events on pretty much any object,
Only if you have a reference to the object.
The reason I made up my own message queue pub/sub is because events required a lot of complexity in acquiring the correct reference to the correct object or subtree in the DOM.
With a pub/sub type message queue, any element can emit a message with subject "POST /getnextpage", and the function listening for (subscribed to) POST messages emits a RESPONSE message when the response comes back. This lets a third element listen (subscribe) for a "RESPONSE FROM /getnextpage" subject message and handle it.
None of the 3 parties in the above need to have a reference to each other, nor do they even have to know about each other, and I can inject some nice observability tools because I can just add a subscriber for specific message types, which makes debugging a breeze.
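A minimal sketch of that kind of bus (not the actual code being described; the handlers and `renderNextPage` are made up for illustration):

```
const subscribers = new Set();

function subscribe(subjectPattern, handler) {
  const entry = { subjectPattern, handler };
  subscribers.add(entry);
  return () => subscribers.delete(entry); // unsubscribe
}

function publish(subject, payload) {
  for (const { subjectPattern, handler } of subscribers) {
    if (subjectPattern.test(subject)) handler(subject, payload);
  }
}

// The three parties never hold references to each other:
subscribe(/^POST /, async (subject, body) => {
  const url = subject.slice("POST ".length);
  const res = await fetch(url, { method: "POST", body: JSON.stringify(body) });
  publish(`RESPONSE FROM ${url}`, await res.json());
});

subscribe(/^RESPONSE FROM \/getnextpage$/, (subject, data) => renderNextPage(data));

publish("POST /getnextpage", { page: 2 });
```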
> Anything wrong with that?
It only works in browser environments.
Node has EventEmitter: https://nodejs.org/en/learn/asynchronous-work/the-nodejs-eve...
See, for instance, https://www.electronjs.org/docs/latest/api/ipc-renderer
Browser environments don't have EventEmitter. Different event APIs between browsers & nodejs. A library (3rd party or custom) is needed to make event code isomorphic.
As an end user, I love when web applications pass events through the DOM tree, because it allows me to easily create plugins by hooking on the same events. Youtube is a prime example of how to do things right.
Unfortunately, modern frameworks really want to use their own event channels, which makes hooking a pain.
Welcome to Node.js v21.6.2.
Type ".help" for more information.
> window.dispatchEvent(new Event('counterChange'))
Uncaught ReferenceError: window is not defined

$ node
Welcome to Node.js v20.6.1.
Type ".help" for more information.
> const target = new EventTarget()
undefined
> target.dispatchEvent(new Event("counterChange"))
true

tyvm!
Even better on Node, you have EventEmitter. A much simpler interface.
You have to name the function or you can't remove it :P
Very hard to debug as the application gets larger
Did you read the document? They have an example there, which is quite similar to yours, and explain what is the problem.
No, they don't. In fact, if you search "event" in the proposal you get exactly one result, the prefix of "eventually". This is a serious shortcoming of the proposal that should be addressed.
I think the comment you're replying to is referring to the pub/sub sections of the proposal. They don't explicitly mention events, but events are a subset of the publish/subscribe pattern.
But then so are signals.
The only "benefit" signals as proposed here give you is less control over the exact dispatch pattern of the graph, for instance things like debouncing, throttling, batching, etc etc etc. Aka all the things you absolutely must have control over if you want to make something resembling a high performance application.
Signals are not a subset of events. Signals combine event subscription with value construction, which promotes a more declarative model for updates.
> The only "benefit" signals as proposed here give you is less control over the exact dispatch pattern of the graph, for instance things like debouncing, throttling, batching,
Events don't give you any more control over those properties than signals, they just require more boilerplate.
Oh, come on. They do have an example of the architecture, not a literal example of events. It doesn't matter which trivial implementation of the Observer pattern you choose.
Except one is already built into the language? And it provides all the upsides of signals with none of the downsides? (Modulo proper use of `using` directives)
Can you please point to where in the document such an example was mentioned? I rechecked the document and couldn't find it.
Just look for the example of a design pattern („Example - A VanillaJS Counter“ section), not for a literal implementation of it via events. Conceptually they are the same.
I’ve been trying for decades to understand why people find it so hard to keep track of state and update the DOM.
Sure, it requires a bit of discipline, but it’s vastly simpler to me than whatever solution comes up every few years (Backbone, Knockout, Angular, React, modifying the language itself, etc). There must be something profoundly different with the way I think.
It even expresses itself in the function naming. They call updating innerText “render”. You’re not rendering anything. At most, the browser is, but so is everything else it does related to painting. It feels like a desperate attempt to complicate what is one of the simplest DOM functions. It really baffles me.
In simple applications it is easy.
In more complex ones it is not.
I’ve been writing web apps for easily 25+ years. Never have I reached for React and friends voluntarily. But again, I know I’m in a minority. I’m just not completely sure why.
You could work for 50 years and never need those… it just depends on what you’re creating / how many devs and so on.
I know some guys who over the years wrote their own framework. It works great… for them.
Right, the only thing that convinces me is team and hiring dynamics. But that’s not what these tools advertise.
It’s always like: you have dozens of interactive controls in this view, it’s getting out of hand, you should use this language that compiles to HTML and JavaScript and carry all these dependencies. To which I always reply: no thanks, I rather deal with the dozens of controls.
> team and hiring dynamics. But that’s not what these tools advertise.
I think that’s a given for any proposed standard. We all get a common way we understand things and can even just communicate about a thing.
My argument is that it’s not their selling point. When you go to React’s page you’re not greeted with: “React is a great way to hire devs and manage a team”.
The solution they sell is technical, like state management, reusable components, etc. Which I don’t find convincing.
> Right, the only thing that convinces me is team and hiring dynamics. But that’s not what these tools advertise.
And that's fine, these tools don't solve what you search for.
Dozens of controls multiplied by dozens of events already make for quite a large number of points of failure, so people gladly trade that for the dependencies (what's the issue with dependencies anyway?).
So yeah, not many people want to be good, and only hire good people like you, to use the hand tools and "just do it correctly". Most acknowledge their downsides and use proper tools to aid them.
FB's website has settings subpages that are more complicated than an average web app and is an SPA-style-mega-app made of piles of other apps. It typically keeps highly consistent state throughout. There's no doing that sanely by hand, you'll just forget something or, more likely, it will get lost between the dozens upon dozens of people needed to build such a thing.
1) Almost no one is building FB or Gmail, yet act like it. The precise reason why still escapes me.
2) For other use cases, it's not that hard to manually update some elements in the DOM. You very quickly learn how not to shoot yourself in the foot. Certainly a lot easier (and faster) than dealing with the mess that is the React ecosystem.
You don't need to be building Facebook or Google to have a TON of state in a webpage. Obviously if you are doing a basic dashboard, a blog or something similar, it isn't really needed. But for more complex stuff I think react provides a much better way to handle state changes than just using vanilla js. It's not like using react makes an app more complex, because if you wanted to do the same thing in vanillajs you'd end up with a much bigger mess.
I agree that some web pages don't need any of that, but those don't usually require a lot of development anyways.
Could you give a concrete example for the more complex stuff?
Sure! Where I work we build very specialized, very low user count live remote non-destructive testing software that can be run in a browser. Well, it used to be native, but clients complained a lot, in part due to IT restrictions on updates. That means that we basically have to have a remote processing unit, sync the state with the front end at all times, and with the scanning equipment too. That means multiple different canvases and charts, different tabs etc. that all contain state, need state to be updated sometimes in the background, and need to have consistent state changes. I don't do a lot of front end dev, but I did help in setting up our new inference module there as I'm on the AI/ML side, and it would've been a nightmare to deal with even if I mostly did stuff in a webgl canvas. It was Angular, which I don't really like, but is still much better than going vanilla.
I know it's pretty niche, but I'd say most non-trivial (blogs, CMS, forms) front ends handle a lot of state. If they don't, then they are relatively simple anyways. That's a generalization but still, you quickly hit the point where react/other framework becomes worth the complexity overhead versus the complexity of doing it in vanillajs
Maybe try making something like Grafana's dashboard by hand.
This sounds more like a complaint the kids aren't happy to eat turnips and mashed bugs (just pick the legs and wings out!) like we had to and that their newfangled 'sandwiches' are too fancy. What if they get caught out in the forest with nothing but sandwiches? That oughta learn'em real quick.
Haha, if you want to go with the food analogy, I'd say they are discussing meat stick brands without having tasted a real steak.
Or better still, running a meat stick factory because doing the dishes is hard.
But I don't know if such analogies are any good other than being funny.
If you are working in a team with 5+ developers who work on the UI where people need to be able to quickly reuse components and put them in large, complicated applications with lots of data that could be fetched and updated with HTTP requests, it is almost impossible not to use any of those frameworks.
If you are working on your own, or only create small web apps, sure, you can avoid frameworks in some cases.
> it is almost impossible not to use any of those frameworks.
Agreed, but because it's a lot easier these days to just search for a "React programmer" than it is to evaluate lots of JavaScript candidates, who span a much wider range of scope and proficiency, and to make sure they'll fit right in when hired.
But not because direct DOM manipulation is inherently unscalable. See puter[1] for instance, a fairly complex, 100k+ lines of code of jQuery.
I think it would help if you mentioned the kinds of applications you're building.
What do you mean by "web apps" here? My memory of the web in 1999 was that the only rich web UX was in Java applets
There was this thing called DHTML. It was no Ajax, but you could hide and show elements, change its contents, respond to events, etc. Cross browser was a nightmare. And the server side handled most of the work.
Today I build all sorts of things with lots of interactive elements on the frontend, but trying to avoid using React if I can.
Do you just use document.createElement(), getElementById(), and friends? I've written lots of web apps that just used those. But I've also jumped into React a couple times to learn if it's better or faster or whatever. I generally think it's a reasonable approach for a template style web app—ie, when you need to build a whole lot of html nodes that interact with each other in the way a complicated ui does. But I do find I can get stuck trying to reason about some of the weird reactivity stuff with useState() and useEffect(). I kinda chalk that up to me being not an expert, but it also feels like a bit of an impedance mismatch with the language itself.
But I don't think React necessary for _every_ app and it really depends on what kind of apps you are making.
Certainly you can do the original style of app where the templating is on the server and any js is just to hook up already existing nodes. The js community has more or less moved away from that "rails" style of app years ago…
> Sure, it requires a bit of discipline
I strongly suspect that in ages past you would have been an ASM programmer shaking your fist at those crazy portable C programmers, and then a C programmer shaking his fist angrily at those crazy memory safe Java programmers.
Progress in programming can be marked by eliminating the need for ceremonial kinds of strict discipline needed to achieve good results.
Which isn't to say React is some kind of next evolution, but signals certainly are a step in the right direction.
Decades is a long time. You must remember the complexities of updating the DOM between browsers, surely.
Keeping the DOM in sync with a data state isn’t too difficult but doing so in a highly performant (60 fps) way _is_ tremendously difficult. Especially when it comes to creating APIs that aren’t leaky but also not too cumbersome.
It would frankly be easier to just paint pixels to a canvas game-style, than translating changes to a living DOM tree.
If you need 60fps you shouldn’t use the DOM. It’s not a game engine. Like you said, go with canvas.
60fps is not a high bar. A timer that gets updated every 10 milliseconds (or in truth on whatever the next event loop interval is) is commonly used as a tutorial, and there is no reason such a widget should not have a high refresh rate.
The point of the original comment is that DOM updates should be fast, efficient and avoid visible delays to user's eye, which is not easy. If you are not careful, small changes in a large application could lead to too many DOM updates -- that is where frameworks shine.
And canvas is not the solution to everything and has its own problems. To begin with, accessibility.
I agree with all of it. I did recently a timeline with hundreds of elements, 3D transforms etc. Everything was smooth and not because I was particularly clever, but because browsers are fast. All direct DOM manipulation.
My complaint with modern frameworks is not FPS, but rather that a hello world often requires 20k files plus a compilation step and the upshot isn’t particularly clear to me. I’ve never been to one of its landing pages and thought: wow, that’s a problem I have and this looks like the perfect solution.
Conversely, I clearly remember the first time I saw jQuery. Loading content via Ajax and fading in when done, all in 3 lines of code and cross browser. I was sold at that very second.
I work on FX trading applications with hundreds of thousands of lines of code. Full fat desktop application replacements. Good luck not using a framework. 20-30 developers with multiple clients with their own devs.
Preach brother! My sentiments exactly. And I've developed SPAs that are massively complex and highly interactive, and whatever problems these folks are referring to that stuff like this supposedly solves, I've not yet encountered.
Promises are a nice success story, but without async/await it wasn't really necessary to standardize
> The current draft is based on design input from the authors/maintainers of Angular, Bubble, Ember, FAST, MobX, Preact, Qwik, RxJS, Solid, Starbeam, Svelte, Vue, Wiz, and more…
Would be interested what existing library authors think of this proposal. Interesting that React is not in that list
Signals are a bit like channels, except they're broadcast instead of single receiver. It'd be neat if this could somehow be leveraged to allow web workers to communicate with channels instead of onMessage callbacks. Specifically, being able to `select` over signals/channels/promises like in Go would offer a syntactic benefit over having to manage multiple concurrent messaging mechanisms with callbacks (maybe by allowing signals to be included in `Promise.any`).
> without async/await it wasn't really necessary to standardize
Hard disagree.
`x instanceof Promise` simply doesn't work. If my library has a then method that accepts a catch callback and yours doesn't, they're silently non-interoperable, and there's no way to detect it. When does `finally` run? What expectations can you have around how async the callbacks are? Without a standard, every single library that uses promises needs to bring its own polyfill because you can't trust what's there. And you can't actually consume any other library's promises, because you can't trust that they behave in the way you expect them to.
And I'm not just speculating, this was reality for many years and a hell that many of us had to endure.
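For what it's worth, the eventual standard left an escape hatch for exactly that era: `Promise.resolve` adopts any thenable, so a pre-standard library promise can still be consumed with well-defined semantics. A small sketch:

```
// A library-specific "promise" from before the standard, with only a then method:
const legacyThenable = {
  then(onFulfilled) { setTimeout(() => onFulfilled(42), 0); }
};

// The native Promise adopts it; from here on, timing, finally and
// error propagation behave per spec:
Promise.resolve(legacyThenable)
  .finally(() => console.log("settled"))
  .then(value => console.log(value)); // 42
```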
> Promises are a nice success story, but without async/await it wasn't really necessary to standardize
One benefit of standardisation that's not tied to async/await is that JavaScript engines have been able to do performance optimisations not otherwise possible, which benefit Promise-heavy applications.
> “Interesting that React is not in that list”
Signals are not a part of the core React API, unlike Preact.
My vague gut feeling is that signals are too much like a generalized useEffect() and would only introduce further confusion into React by muddling what happens during the render cycle. For better and worse, React takes a different tack to updates than signals do. But maybe I’m wrong about their applicability.
There is an interesting debate about React and signals in the comments of this article, between Dan Abramov and Ryan Carniato - https://dev.to/this-is-learning/react-vs-signals-10-years-la...
I read it until the point where he defends the idea that these two functions obviously do something completely different:
It honestly made me wonder whether the article was dated April 1 and I’d been had.

function One(props) {
  const doubleCount = props.count * 2;
  return <div>Count: {doubleCount}</div>;
}

function Two(props) {
  return <div>Count: {props.count * 2}</div>;
}

More generously, JS framework design is hard. If you’re ambitious at all, you end up fighting the language and your runtime paradigms will hang like ill-fitting clothes on its syntax. The One/Two example above shows how easily expectations break in this world of extensions to extensions. There’s no way to know what an apparently simple piece of code will actually do without knowing the specifics of a given framework.
> I read it until the point where he defends the idea that these two functions obviously do something completely different
> [code]
> It honestly made me wonder whether the article was dated April 1 and I’d been had.
They don’t do anything different if your model of components is that they rerun. But that model is only one way to implement components, and JSX is unopinionated about semantics exactly like this. Intentionally, by design.
If you’re only familiar with React and other frameworks with a similar rendering model, of course it’ll be surprising that those two functions would behave differently. But if you’re familiar with other JSX implementations like Solid, you’ll spot the difference right away: components don’t rerun, only the JSX does. The first function will always render the same thing because `doubleCount` is set up on component creation and static for the remainder of the time the returned div is mounted.
You are welcome to prefer React’s model. It certainly has some cognitive advantages. But it’s not inherently the only correct model either.
> There’s no way to know what an apparently simple piece of code will actually do without knowing the specifics of a given framework.
Yes. In solid-js JSX interpolation needs to be read as having implicit lambdas. You need to know how a framework works to use it
It's somewhat what the discussion was getting at between Ryan & Dan. solid-js having fine grained reactivity involves a lot of lambdas, in JSX they're implicit, but code outside of JSX has to spell them out explicitly
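Roughly, as a sketch using Solid's createSignal/createMemo/createEffect:

```
import { createSignal, createMemo, createEffect } from "solid-js";

const [count, setCount] = createSignal(0);

// Outside JSX the lambdas are explicit:
const doubled = createMemo(() => count() * 2);
createEffect(() => console.log(doubled()));

// Inside JSX the interpolation compiles to the same kind of lambda for you:
//   <div>Count: {count() * 2}</div>

setCount(1); // the effect re-runs and logs 2
```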
There are those of us for which this actually makes sense. More sense than react anyway.
My feeling is that it's philosophically outside the purview of React, whose focus is rendering (and components and their state, but not global state). RxJS and MobX are both usable with React and have signals, whilst Redux goes another route, and React is "above" that choice.
Rendering in React would benefit greatly from fine-grained reactivity (which is what signals offer). However, for some reason the React team insists that the lowest level of reactivity in their system is the component, no matter how large or complex it is.
React isn’t in this list because its effects are declarative, not imperative (except for props changes and re-renders which you could argue are in fact declarative, just one abstraction removed). UseEffect neatly compartmentalizes imperative behavior.
This looks a lot like ember data binding which becomes an imperative nightmare. Its default state is “foot gun” with tons of cognitive overhead and meta patterns to keep it from getting that way.
Nah I use Legend-State with Context to provide component stores and it's much nicer than the foot gun riddled useEffect standard React. Fine grained reactivity is fantastic.
Maybe they can call it "EventEmitter"
https://nodejs.org/en/learn/asynchronous-work/the-nodejs-eve...
That looks like it fits with the theme, but events are difficult to compose & tend to have easy leaks by requiring explicit attach/detach (not that I'm sure the proposal here addresses those issues)
Could you not just compose a new EventEmitter constructor that uses the new FinalizationRegistry API to drop all handlers of subjects which have since been garbage collected?
React isn't in the list because the creator had a bad experience with backbone.js in 2013 and dislikes Signals. The team have kept true to his preferences without considering modern Signal approaches, concepts like components, dependency injection and TypeScript, npm packages which provide better encapsulation, discoverability and modularity than the state of the art in 2013.
I didn't understand the example in the linked README.
// A library or framework defines effects based on other Signal primitives
declare function effect(cb: () => void): (() => void);
What library? What framework? I got lost here. What's effect?

effect(() => element.innerText = parity.get());

How does effect know that it needs to call this lambda whenever parity gets changed? Will it call this lambda on any signal change? Why all this talk about caching then? Probably not.

Anyway, I think the signal idea is sound, if I understood correctly what the authors tried to convey. My main issue with those decoupling architectures is that once your application is complex enough, you will get lost trying to figure out why this particular event is being emitted. Ideally signals should fix this by modifying the stacktrace, so when my callback is being called, it'd already contain a stacktrace of the code which triggered that signal in the first place.
> What library? What framework? I lost here. What's effect?
There are various libraries that export a function called effect which allows you to run arbitrary code in response to a signal update. The Preact docs have a great primer on signals and effects: https://preactjs.com/guide/v10/signals#effectfn
As I understand it, these effect functions run the callback once initially to see which signals were accessed while the callback was executing, and then call the callback again whenever the signals it depends on update. As long as signal access is synchronous and single-threaded, you know that if a signal was accessed during the callback's execution that the callback should be subscribed to those signals.
> How does effect know that it needs to call this lambda whenever parity gets changed? Will it call this lambda on any signal change?
You can do this with getters [1], where the effect function tracks which properties of the signal were accessed in a getter method (I believe Vue historically did this in version 2), but you can also track object access using proxies [2]. The example from the proposal simply has a 'get' method that is called to access the value of the signal, and executing this method allows dependencies to be tracked.
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
[2] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
> How does effect know that it needs to call this lambda whenever parity gets changed?
The call `parity.get()` will register a dependency on the function that is passed to `effect()`. When `parity` is updated, the function is called.
> Will it call this lambda on any signal change?
Only when a dependent signal changes.
In this case, `parity` depends on `isEven` and `isEven` depends on `counter`. So when `counter` is updated, that whole dependency chain is invalidated leading to `parity` getting invalidated, and that callback re-running.
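A heavily stripped-down sketch of that registration mechanism (not the proposal's actual implementation; no caching, no cleanup, no dynamic re-tracking):

```
// Reading a signal inside a running effect records that effect as a subscriber.
let activeEffect = null;

function signal(value) {
  const subscribers = new Set();
  return {
    get() {
      if (activeEffect) subscribers.add(activeEffect);
      return value;
    },
    set(next) {
      value = next;
      subscribers.forEach(fn => fn()); // re-run dependents
    }
  };
}

function effect(fn) {
  activeEffect = fn;
  try { fn(); } finally { activeEffect = null; }
}

const counter = signal(0);
effect(() => console.log((counter.get() & 1) === 0 ? "even" : "odd")); // logs "even" and subscribes
counter.set(1); // logs "odd"
```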
This works well until you accidentally have an if/else branch, then you get hard to track bugs (halting problem in the general case).
I'm guessing this is why they don't propose to add this function to the standard: That fact makes it not very pretty.
if/else isn't a problem. Only the branch that is actually read becomes a dependency, so an update to the unused branch doesn't matter.
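For example, with dynamic tracking the dependency set is just whatever was read on the last run (a sketch, assuming the proposal's Signal.State/Signal.Computed API):

```
const showA = new Signal.State(true);
const a = new Signal.State(1);
const b = new Signal.State(2);

// Only the branch actually read becomes a dependency:
const picked = new Signal.Computed(() => showA.get() ? a.get() : b.get());

picked.get();      // depends on showA and a
b.set(42);         // picked is not invalidated; b was never read
showA.set(false);
picked.get();      // dependencies are rebuilt: now showA and b
```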
Hmm, can you explain how if/else branches cause hard to track bugs?
> How does effect know that it needs to call this lambda whenever parity gets changed?
Basically all implementations of signals (by whatever name) build a dynamic dependency graph, where edges are established by reading nodes. In a tracking context like this hypothetical `effect`, the read also establishes an edge between the signal’s state node and the effect’s computation node—effectively subscribing the latter to subsequent writes to the former, in order to determine when to rerun the computation.
1. effect is any function you want to invoke
2. with signals the dependency tracking mechanism knows what values need to be recalculated and as a result the system knows which functions to call again
I suspect the watcher is needed by the implementation of effect
Related is S.js: https://github.com/adamhaile/s
I love signals. I prefer them when making UIs over any other primitive (besides, perhaps, the cassowary constraint algorithm). I try to replicate them in every language I use, just for fun.
I also don't believe they belong in the Javascript language whatsoever. Let the language be for a while, people already struggle to keep up with it. TC-39 is already scaring away people from the language.
If people are getting scared away from javascript, imagine what a popular one would look like!
This looks very much like mobx, which is my favorite JS effect system.
Here is the mobx version:
import { observable, computed, autorun } from 'mobx';
const counter = observable.box(0);
const isEven = computed(() => (counter.get() & 1) === 0);
const parity = computed(() => isEven.get() ? "even" : "odd");
autorun(() => {
element.innerText = parity.get();
});
// Simulate external updates to counter...
setInterval(() => counter.set(counter.get() + 1), 1000);

Well, mobx is signals. But signals where the dependencies are tracked implicitly via the proxy object, instead of explicitly by a getter.
"let's bake my current framework du jour into the standard library!"
It's a bit like tattooing your girlfriend's name onto yourself.
Except that's not what this is doing.
It's baking a building block into the standard library, that most frameworks have converged on using.
Promises became widely used, then they got included in the standard library. This is like that.
I don't think signals will help move development away from further complexity, and that's really what we need today.
There's a fundamental question of why modern sites/apps reach for patterns like signals, memoization, hybrid rendering patterns, etc. I wouldn't begin to claim I have all the answers, but clearly there are gaps in the platform with regards to the patterns people want to implement, and I'm not sure that jumping to signals as a standard helps us better understand whether it's the platform or our mental models that need updating to get back in sync.
Personally I've found code much easier to maintain when the frontend is only responsible for state that truly is temporary and doesn't live on the back end at all. For me any persisted state belongs on the server, as does any rendering that depends on it. This largely makes signals unnecessary, very few apps have such complex temporary state that I need a complicated setup to manage it.
Looks useful, but what baffles me is... why is every framework setting state or their "signals" using "setX" functions? What's wrong with the built-in getters and setters that you can either proxy or straight up override?
This feels arguably cleaner: something = "else";
Than: setSomething("else");
Some libraries that feature signals-style reactivity do use getters and setters (ember.js and mobx are 2 good examples). However it makes sense for the primitive API to use functions, since getters and setters are functions under the hood and get applied to an object as part of a property descriptor. It's also not always desirable to have a reactive value embedded in an object; sometimes you just want to pass around a single changeable value.
As for why some libraries choose the `[thing, setThing] = signal()` API (like solid.js), that's often referred to as read/write segregation, which essentially encourages the practice of passing read-only values by default and opting in to allowing consumer writes on a value by explicitly passing its setter function. This is something that was popularized by React hooks.
Either way this proposal isn't limiting the API choice of libraries since you can create whatever kind of wrappers around the primitive that you want.
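For example, a read/write-segregated wrapper over the proposed primitive could look something like this (a sketch, assuming the proposal's Signal.State):

```
function createSignal(initial) {
  const state = new Signal.State(initial);
  return [() => state.get(), (value) => state.set(value)];
}

const [thing, setThing] = createSignal("something");
setThing("else");
console.log(thing()); // "else"
```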
That is not observable for the primitive JS types that aren't objects and have no methods or properties or getters/setters (string, number, boolean, undefined, symbol, null):

something = "else";

The `some` can be proxied and observed:

some.thing = "else";

Most frameworks are setting up the `some` container/proxy/object so everything can be accessed and observed consistently. Whether the framework exposes the object, a function, or hides it away in a DSL depends on the implementation.

One big problem is that right now they are _tremendously_ slow to use (at least through the Proxy native class). Not sure if this is an artifact of JITs or the nature of prototypal inheritance.
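The kind of proxying being described is roughly this (a minimal sketch):

```
function observable(target, onChange) {
  return new Proxy(target, {
    set(obj, prop, value) {
      obj[prop] = value;
      onChange(prop, value); // notify after the write
      return true;
    }
  });
}

const some = observable({ thing: "something" }, (prop, value) => {
  console.log(`${prop} changed to ${value}`);
});

some.thing = "else"; // logs: thing changed to else
```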
Because javascript lacks scope-as-an-object. You can't track `var x; x = value` through it. `setSomething()` sends a notification after an assignment. Also, DOM elements can only take raw values and must be manually updated from your data flow.
Adding these features in-browser would seriously slow down DOM and JS and thus all websites for real. So instead we load megabytes of JS abstraction wrappers and run them in a browser to only simulate the effect.
For one I can write ".set" and the IDE would auto-complete with all possible somethings that can be set, even without having the slightest idea of which ones there are.
I've very much enjoyed this kind of consistency wherever it's found (having a common prefix for common behaviors, in this case setters).
well to use setters it has to be "foo.something = else", because JS can't override plain old local bindings -- not since "with" was sent to the cornfield anyway. Once you do that, you can indeed have a framework that generates getters and setters, which is exactly what Vue 2 does. Switch to proxies instead of get/set and you have Vue 3 -- the signals API is pretty much identical to the Vue composition API.
Back in the days there was an effort to put observables in the language because they were popular (and rxjs was, too).
Glad that didn't happen. And I think everybody else is, too. Maybe we should keep that in mind when standardizing features of frameworks.
They weren't used in virtually every single frontend framework though while Signals are
Surely it would be better to fix the interoperability issues of the language? "Lets just add a single implementation of everything to the standard!" seems like quite a strange response to "users of the language are having trouble with interoperability due to false coupling."
is it strange?
Vue reactivity isn't compatible with Svelte, nor Angular.
As a counter example to your question, what if we all had competing implementations of the object primitive. Libraries would barely work with one another and would need an interop layer between them (just as reactivity layers do today!)
I'll admit I don't use JavaScript very often, but surely the state of polymorphism could be improved? For example, C++ recently added concepts, and most (modern) languages have some way to describe interfaces.
As to your counterexample, I agree that with current JavaScript that would be a problem, but with good language support it would certainly be possible. For example, Rust (and C++?) have competing implementations of the global allocator, and most users will never notice.
The reactivity layers are all pretty tied into the hearts of the frameworks. There's no advantage to any framework to expose such a thing to end users to leverage a competing implementation.
As for polymorphism, even the current class syntax largely operates in the same way as the original prototypal inheritance mechanism, with a few exceptions in constructor behavior to support subclassing certain built-in objects.
You can pretty easily create run-time traits: like functions with prototypes, the class construct is an expression whose resulting value can be passed around, mutated, etc.
For example, you can write a function that takes a class as an argument, and returns a new class that extends it.
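For instance (a small sketch):

```
// A runtime "mixin": takes a class, returns a new class that extends it.
const Serializable = (Base) => class extends Base {
  toJSON() { return { ...this }; }
};

class Point { constructor(x, y) { this.x = x; this.y = y; } }
class SerializablePoint extends Serializable(Point) {}

console.log(JSON.stringify(new SerializablePoint(1, 2))); // {"x":1,"y":2}
```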
I don't think the language helps or hinders interop much? It has a wide range of tools that can adapt objects between different shapes, for when we do have two similar-but-different interfaces we are trying to bridge.
I struggle to see what more one could want. Do you have any specific features you think would help a massive ecosystem of packages be able to work together, when for example different packages have different Signal implementations?
Better ways to describe polymorphism, a better type system in general really. Look to async Rust for a great example of such interoperability.
Isn't this similar to small core vs large core debates you see in other projects, often where drivers are concerned? (ie question around where the coupling layer is materialized)
The examples only show simple values. What happens when mutating nested objects and arrays?
Other frameworks usually struggle a lot with this. With workarounds like having to override equality functions in useMemo() or call .set after a mutation even if you pass in the same instance as before.
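For example, with a plain value-holding signal (a sketch assuming the proposal's Signal.State and its default reference equality), an in-place mutation is invisible to dependents:

```
const items = new Signal.State(["a"]);

items.get().push("b");            // in-place mutation: nothing is notified
items.set(items.get());           // same reference: treated as unchanged by default

items.set([...items.get(), "c"]); // new array reference: dependents do update
```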
Oh man. This sounds complicated.
The hard part of signals is similar to over complicating events.
It’s hard to debug what is going on and what to fix when things go wrong.
Hmm, if we can optimize the reactive state management in all web apps, that sounds cool.
If this is to be a base for VueJS, it should handle deep changes. They have a note about supporting Map and Set, but being able to control depth is nice in VueJS. (I'd say watch() should be deep by default, since non-deep is an optimization that can lead to accidentally inconsistent state.)
Streams. Generally, I find RxJS backwards. Usually you just need "state", so that should be the easiest thing to implement. But I can't deny that the programming model is beautiful. Standardizing "state" without also considering streams seems odd to me. The "computed" creates a pipeline of updates, very similar to one you'd do with a map over a stream. If RxJS didn't already exist, I probably wouldn't have cared about this duality.
Async. Sure, signals can be synchronous, but Computed should definitely play well with async functions. This is a big shortcoming in VueJS (that people work around on their own.) That also implies handling "pending computation" gracefully for debuggability. I see there's a "computing" state, but this would have to be surfaced to be able to debug stuck promises.
Exceptions. I like the idea of .get() rethrowing exceptions from Computed. VueJS is a bit vague on that front, and just stops.
> it should handle deep changes.
these can be implemented in userland via proxy -- and I think probably should, as is proven by this collection of utils: https://twitter.com/nullvoxpopuli/status/1772669749991739788
If we were to try implementing everything as reactive versions, there'd be no end, and implementations couldn't keep up -- by pushing reactive Map/Set/etc to userland/library land, we can implement what we need when we need it, incrementally, built on the solid foundation of the signal primitives.
> since non-deep is an optimization that can lead to accidental inconsistent state.
conversely, deep-all-the-time is a performance hit that we don't want as the default. Svelte and Ember take this approach of opt-in deep reactivity.
Is dependency tracking problem statically solvable (without calling effect once and subscribing to all .get's)?
I don't think this needs to be a language feature, rather abstraction of existing features. In other words, can't this be a library?
I know that SolidJS is able to figure out dependent signals, but probably doing so on the first execution.
I believe Svelte does figure it out at build time.
Which does make me question the mention of Svelte in the proposal, and makes me wonder what the Svelte developers think of it - because IIUC they indeed don't need this (at runtime), if I'm not mistaken.
The current Svelte version does it at build/compile time. The up and coming Svelte 5 is using signals and the reactivity is moved to runtime
> In other words, can't this be a library?
You answered your own question:
> I know that SolidJS is able to
It already is, obviously. But how is SolidJS supposed to work with other non-SolidJS code? It can't. Unless every library builds support for every other library, they can't possibly interoperate.
> It already is, obviously. But how is SolidJS supposed to work with other non-SolidJS code?
Who actually writes code like this? People use some signal graph library for application code typically, I’ve never seen anyone mixing SolidJs with MobX in application code or as a consequence of a library dep.
Let me offer you a scenario: you want to build a UI web component that uses state, but you want it to be usable in both Svelte and React (with solidjs). The current situation is that it's a massive pain in the ass, because you have to move all of the state out of your code into a "driver" module that you can swap out for each of the frameworks you want to target. The entire architecture of your library is made worse (less readable/maintainable, probably less performant, harder to contribute to) in order to have a single set of UI code.
All of that goes away when the plumbing for handling state is standardized by the runtime.
Perhaps Svelte, Solid, React, et al. should form a working group and hammer out an interop standard.
Those who want to develop a library that can be used by any other reactive framework. I often see SignalLike<T> type that tries to subtype it.
https://github.com/preactjs/preact/blob/757746a915d186a90954...
It's a shame that the JS ecosystem is so sparse.
E.g. in Rust it's much more common to rely on existing "building blocks" like futures, tokio, syn, serde even just for basic bindings, favoring interoperability.
I think you've got it backwards. Fifteen years ago we had jQuery and not a whole lot else. Everyone just used the "building blocks" libraries of js. But then the community and the ecosystem grew exponentially, and we have innumerable choices. Python is similar: I can think of a half dozen ways off the top of my head to build a web app. At least Python was on top of standardizing things like WSGI, but even then you can see rough edges like the non-interoperable nature of concurrent code (twisted vs tornado vs greenlets vs threading vs multiprocessing vs asyncio etc).
Rust will have the same thing happen in another decade or so. Its prevalence will grow the community, which will grow the ecosystem, and the positive feedback loop will mean that there's a huge amount of choice. That's a good thing, but it will have downsides, like decreased interoperability (unless the language evolves). JavaScript was cursed for a very long time by leadership that was essentially asleep at the wheel.
>In other words, can't this be a library?
It is explicitly mentioned in the proposal. The problem with 3rd party libraries is their interoperability and the diversity of implementations, which might be unnecessary for this kind of thing.
To me, as a long-time react.js user, these examples look like a kinda weird mix of declarative with some bitter imperatives. Like, foo.set depending on foo.get, and having to manually set element innerText inside the side effect, eww. I can only imagine how messy it can get for a somewhat more complex application.
React boilerplate for this case looks so much better in my opinion, take a look
```
function Component() {
  const [counter, tick] = useReducer(st => st + 1, 0)
  useEffect(() => { setInterval(tick, 1000) }, [])
  return counter % 2 ? 'odd' : 'even'
}
```
Three lines, declarative, functional, noice.
Functional usually means the output can only depend on the input. But this is depending on some external state getting smuggled in through a hook. FWIW some people prefer the alternatives over this.
This is not really correct; the useReducer hook is not some external state which is smuggled in. It is an actual input for the reconciler which defines the output of this component, so it is perfectly functional even by your definition.
The value counter changes in subsequent invocations. Maybe react redefined some standard terminology, but this is really straightforward. If useReducer is not pure, and it's not, any function that calls it is impure.
This reducer is certainly pure, you are confusing it with `st => st++`
Tell me, what's the value `counter` returned by this "pure" function invocation?
const [counter, tick] = useReducer(st => st + 1, 0)
Answer: It depends. Sometimes 0, sometimes more.
Any function that can return different results for the same inputs is impure. Therefore, `useReducer` is impure. It can return different results for the same input. React may have some alternate definition for "function" or "pure", but these words existed prior to react.
The given function is pure: for any given input state, it will return state + 1. The state is an accumulated result of all invocations; there are no arguments nor external factors that would cause it to behave differently.
There are no side effects either: it solely relies on its input state to produce the output, it doesn't mutate any variables outside its scope, doesn't interact with the outside world, nor modify any global state.
Therefore it is perfectly pure by the classic definition of pure, react did not redefine anything in that matter, it is a pure functional renderer by design.
Crucially, the state you're referring to exists outside the function parameters.
Invoking the returned reducer function mutates the `memoizedState` property on the component's fiber object. The fiber object is the state external to the function. It's not one of arguments to the function.
Here's a function. Is it pure?

let current = 0;
function getNext() { return current++; }

Note that the result is an accumulated result of all invocations. I don't think this is a pure function. Now imagine `current` was a react fiber object, accessed through a hook dispatcher. What exactly is the difference?
It all seems right, except the reducer function does not mutate the fiber object.
Your function is certainly not pure, it boldly mutates the outside state via the increment operator. It is incorrect to extrapolate this to how fiber works. The difference is that the React hook, with both state and setter, lives inside the dispatcher, and calling the hook setter only enqueues an update which is then handled by the dispatcher, so technically speaking the hook does not mutate its outer state directly; the dispatcher updates its inner state while processing the queue.
Enqueuing an update is mutating an update queue.
None of these semantic games affect the fact that the value of `counter` does not depend only on the parameters to the function.
I have to admit: you're perfectly right here. React of course always relied on mutable state in its implementation – just so we don't have to. I derailed a lot here to keep this funny thread going ;) I'm still not with you on your definition of "functional", since you treated it synonymously with "purely functional". Functional just means made by applying and composing functions, and React UI is created exactly like that. There is an awesome algebraic effects proposal[1], which hopefully will be added to JavaScript one day; then React will make use of it to become purely functional.
1: https://github.com/macabeus/js-proposal-algebraic-effects
I concede the difference between "functional" as in composition and "pure". That's a significant point.
The proposal is interesting. It looks pretty thin compared to a typical TC39 proposal though. I haven't encountered the language feature before. I'm not sure what I think about it yet. I'm doubtful whether this makes it into the standard in the next 10 years, unless react affiliates somehow take over the committee.
Immutability is a pretty good tool in a lot of problem domains. But perhaps the main thing I dislike about react (and there are many) is that not only do I "not have to" rely on mutable state. I can't decide to either. Or at least, they make it tough.
This article is missing a "What are signals" section. And yes, this does not do the job:
> Within JS frameworks and libraries, there has been a large amount of experimentation across different ways to represent this binding, and experience has shown the power of one-way data flow in conjunction with a first-class data type representing a cell of state or computation derived from other data, now often called "Signals".
This looks sufficiently advanced to warrant some language support.
This could be a much more powerful feature if signal-dependencies were discovered statically, rather than after use in runtime.
If you’ve reached the point where you agree that the library should be standardized, why not take it even further to integrate it even more?
Because there's no rule that says signals should ever be created at the top level assigned to a const variable. You could create signal objects dynamically based on user input, no current proposal or implementation prevents this. So there's no way to do static analysis on signal graphs.
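For example (a sketch, assuming the proposal's Signal.State; `form` is a hypothetical element):

```
// Signals created on demand, keyed by user input: nothing here can be
// discovered statically.
const fieldSignals = new Map();

function signalFor(name) {
  if (!fieldSignals.has(name)) fieldSignals.set(name, new Signal.State(""));
  return fieldSignals.get(name);
}

form.addEventListener("input", (e) => {
  signalFor(e.target.name).set(e.target.value);
});
```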
> The current draft is based on design input from the authors/maintainers of Angular, Bubble, Ember, FAST, MobX, Preact, Qwik, RxJS, Solid, Starbeam, Svelte, Vue, Wiz, and more…
This handful is pretty much the only set of consumers this stdlib addition would have.
That's basically every major framework apart from React and it includes some major React libraries.
really looking forward to getting nice dev tools out of this work
The lack of signals isn’t remotely the largest issue with JS, and adding them has minimal impact for most users of JavaScript. The biggest issue is the lack of a standard library, resulting in npm hell in most projects.
JS does have a standard library and this proposal is about expanding it, so that’s good, right?
Unless it's half-assed, clumsy and short-sighted, and we are stuck with it forever, because standard. Until an implementation wins as a library (as jQuery did), it should not be slyly forced into the language.
This is referenced in the proposal:
> JavaScript has had a fairly minimal standard library, but a trend in TC39 has been to make JS more of a "batteries-included" language, with a high-quality, built-in set of functionality available
I think the description "minimal" is fairer than "no" wrt the standard library.
Strongly disagree - it's trivial for a project to just add a small std-lib, or add lodash as a single dep, or just add it directly as source code.
JS projects exist in npm hell because people have been taught to use a library to save typing 10 characters. No standard library is going to fix that, because someone can just pull in a new lib that curries something in the standard lib with minor improvements like caching.
What do you think is missing from the "standard library" as of ES2024?
That's a complaint I've heard since ES3 if not earlier and the trend ever since ES2015 has been to address that so I'm genuinely wondering what you think the JS "standard library" (aka built-ins) is lacking right now.
Given the different environments JS runs in these days, I'm also okay with extending the definition of "standard library" to "things browsers should have as built-ins" or "things Node.js should offer as a built-in module".
It's getting better. I can build a medium-sized, modern TS/React app with 12 dependencies. A large enterprise app will be closer to 30-40 which is still a marked improvement over previous dep lists.
Off topic, but I’m wondering if anyone attracted to this topic could help me understand why JavaScript doesn’t have macros.
I’m aware of much conversation around dismissing macros, often in the context of bad dev experience — but this sounds like a shallow dismissal to me.
At the end of the day, we have some of the results of macros in the JavaScript ecosystem, but rather than being supported by the language they are kicked out to transpilers and compilers.
Can anyone point me to authoritative sources discussing macros in JavaScript? I have a hard time finding deep and earnest discussion around macros by searching myself.
Interpreted languages rarely have macros.
But more importantly, do you really want script tags on webpages defining macros that globally affect how other files are parsed/interpreted? What if the macro references an identifier that's not global? What if I define a macro in a script that loads after some other JavaScript has already run? Do macros affect eval() and the output of Function.prototype.toString?
Sure, you could scope macros to one script/module to prevent code from blowing up left and right, but now you need to repeat your macro definitions in every file you create. You could avoid that by bundling your js into one file, but now you're back to using a compiler, which makes the whole thing moot.
It turns out there might actually be a benefit of the compilation step which has been introduced now that everyone uses Typescript... would be really interesting to see macros get added, though I suspect it's too far away from Typescript's mandate to add as few new features on top of Javascript as possible
Macros don't really make sense in JS runtime spec. Since you can mostly already achieve macro level features by using eval or new Function, but it's not very efficient. Macros make most sense at build time, and there have been a few attempts at generalized build macros with various bundlers / transpiler plugins. I think the space needs more time to mature. I'm optimistic that we'll eventually see some sort of (un)official macro spec emerge.
You could start here: https://github.com/search?q=org%3Atc39%20macro&type=code
A great resource that I should have found on my own. Thank you. I’ll look through this later. Giving it a quick glance now I see some of the same language I see other places; here that macros are “too far.”
I don’t know why macros are approached with apprehension. As I briefly get at in my first comment, I’m aware of a lot of dismissals of macros as a tool, but those dismissals don’t make sense to me in context. I’m missing some backstory or critical mind-share tipping points in the history of the concept.
What could be a good set of sources to understand the background perspective with which TC39 members approach the concept of macros?
I don't quite get how these signals intend to be efficient while using pull-based evaluation. A pull-based model potentially requires touching the entire object graph to check if one value needs to be recomputed on get(). It makes for a simple but inefficient implementation.
Signals only made sense in the past for desktop programming languages, because the mainstream ones lacked lambdas and closures.
Smalltalk, and Lisp derived ones did just fine without them.
Modern JavaScript already has them as well, no need for what is basically a closure with a listeners list.
I like the quality of this proposal, which reminds me of JSRs. It does make a lot of sense.
The first and only question that matters: is this useful as a primitive?
I'm inclined to say no. Signals seem like they are primarily a UI concept. Just use any signal lib you want. Utility libraries don't really need to understand signals.
I've been enjoying "signals" in the form of re-frame subscriptions for many years now.
They solve a neat subset of problems in front-end developments. But they don't solve all of them.
Adding it to JavaScript as language construct is unnecessary.
I’m in favour. Mobx is great. This will focus efforts on making better devtools.
From a first look, signals seem to parallel event/behaviour-style functional reactive programming (FRP). Hopefully some useful ideas from the FRP world can be brought across.
Looks bad. It adds what looks like more nonsense complexity. Also, "Signals" as a name is not descriptive to what is proposed (see e.g. Qt Signals).
Yes. Thank you. I am already tired trying to figure out how this would improve my life and annoyed at trying to read other people's code that has more abstracted crap in it.
Promises are, technically speaking, "abstracted crap".
So is abstracted crap only ok if it's already in the JS standard library?
Promises proved themselves in libraries before they got added to Javascript.
Signals have, too. And it is literally spelled out in the readme, in the introduction section
They did create a polyfill, but the readme says it was based on design input from other projects, not on aligning multiple existing designs. This at least sounds like the opposite of what happened with promises, where they already existed in multiple libraries before the Promises/A design came out.
> where they already existed in multiple libraries
Signals exist in multiple libraries with what are really minor variations on the theme.
> before the Promises/A design came out.
That's why the current proposal asks for input about design.
IIRC, promises also had multiple iterations on the design. There were calls to make them more monadic, less monadic, cancelable, non-cancelable etc.
And the original proposal looks nothing like the eventual API: https://groups.google.com/g/commonjs/c/6T9z75fohDk [1]
And new things are still being added to them (like Promise.withResolvers etc.)
[1] There's a great long presentation on the history of promises here: https://samsaccone.com/posts/history-of-promises.html
I hate promises. Writing:
let result;
let error;
await blah
  .then(r => result = r)
  .catch(e => error = e)
  .finally(() => callback(error, result));
Is disgusting.
I do personally think signal is a bit of a poor naming choice, but it has become probably the most recognizable term used for the concept in the JS ecosystem. There's not a whole lot of short concise naming options either, but maybe "Reactive Values" is better?
Curious, what about using Proxies for handling state?
But the vanilla code is much cleaner and nicer to read and reason about than the proposal?!
Seriously the JavaScript ecosystem is so strange..
Sorry but I’ll be a little bit mad if this Preact/Angular inspired thing is approved and made part of the spec, while Observable was deliberately ostracised.
> I’ll be a little bit mad if this Preact/Angular inspired thing
Signals predate both and originate in KnockoutJS at least. They were popularized in recent years by SolidJS. And then adopted into Preact, Vue and others. Angular is a very late newcomer to the signals game.
Edit: and this is literally in the introduction section:
--- start quote ---
This first-class reactive value approach seems to have made its first popular appearance in open-source JavaScript web frameworks with Knockout in 2010. In the years since, many variations and implementations have been created. Within the last 3-4 years, the Signal primitive and related approaches have gained further traction, with nearly every modern JavaScript library or framework having something similar, under one name or another.
--- end quote ---
The list of libraries in the README is alphabetized, and doesn't reflect the evolution of signals in the frameworks and libraries.
Signals are actually a much older concept, introduced in the 70s in Smalltalk as a ValueHolder. Later Qt reused the same concept and called it "signals & slots".
What does this buy me over events?
Automatic dependency tracking, guarantees against circular references, improved observability, and potentially better devtools.
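Concretely, here's a rough sketch of the dependency-tracking part, using the draft Signal.State / Signal.Computed API from the proposal's polyfill (the exact shape may still change):

    // Events: the consumer has to know which event to listen for and wire it up by hand.
    const bus = new EventTarget();
    let userName = "Ada";
    bus.addEventListener("user-changed", () => console.log(`Hello, ${userName}`));
    userName = "Grace";
    bus.dispatchEvent(new Event("user-changed"));

    // Signals: reading a signal inside a computed *is* the subscription.
    const user = new Signal.State("Ada");
    const greeting = new Signal.Computed(() => `Hello, ${user.get()}`);
    user.set("Grace");
    greeting.get(); // "Hello, Grace", recomputed with no addEventListener/subscribe call anywhere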
And a backdoor in a build tree
Imaginary exercise for the reader: find a sneaky dot in a patch that implements this in Firefox.
That’s not enough to compete with an existing “good enough” solution.
There's no existing "good enough" solution for reactive values in JS.
Getters and setters work pretty well.
addEventListener("foo", () => {...}, {once: true}) is a pretty easy way of handling one-shot events. Those have been "good enough" for me to build large, complex healthcare applications.
No one says you can't build large, complex applications using current tools.
>> Getters and setters work pretty well.
What do you mean? Can you explain more?
Sure.
I structure my apps with light DOM vanilla web components, using lit-html (not lit) as a renderer.
I'll use a hypothetical patient profile component as an example. Let's say that it's a top level "page" and needs to support deep linking.
set patient_id(value) would trigger a loadPatient(). loadPatient would set this.patient when loading is complete. The setter for this.patient triggers a render and paints the component.
You can expand on that as needed. Maybe sometimes I want to render that component in a modal, maybe sometimes as a slide out drawer. Maybe there are little differences in the header or something depending on how it's being hosted - I can just add a setter for something like "display_mode" (making this up) that would trigger a render on change.
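In code, the shape is roughly this (the class name, endpoint, and loadPatient are made up to illustrate; only lit-html's html/render are real imports):

    import { html, render } from "lit-html";

    class PatientProfile extends HTMLElement {
      set patient_id(value) {
        this._patient_id = value;
        this.loadPatient();           // setter kicks off the load
      }
      set patient(value) {
        this._patient = value;
        this.renderView();            // setter triggers a repaint
      }
      async loadPatient() {
        const res = await fetch(`/api/patients/${this._patient_id}`); // hypothetical endpoint
        this.patient = await res.json();
      }
      renderView() {
        render(html`<h1>${this._patient?.name}</h1>`, this); // light DOM, no shadow root
      }
    }
    customElements.define("patient-profile", PatientProfile);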
It's really no different than any reactive flow, and about the same as big frameworks when you count total LOC, but it's all just vanilla and simple.
For cross component communication, custom events work great, especially the one-shot ones. In a component constructor, I can add a listener, assign a property or call a component function, and the reactive flow just goes as normal.
No framework needed.
It works with any renderer; if you don't like lit-html, there are JSX-based renderers and others out there. I like lit-html because it's built on super fast template node cloning, so you don't need to worry about calling render() redundantly; it's cheap.
Note how much work you need to do manually:
- don't forget to trigger a render from a setter. Possibly not just from one setter, if the component relies on more than one data point that is changing
- don't forget to fire a custom event if this data needs to be propagated somewhere else
- don't forget to subscribe to the custom event in the places where this data is needed and trigger the render when the data is updated
- (good programming practice:) don't forget to unsubscribe from those custom events, or you'll leak memory
I'm not saying it's impossible to do, or that people haven't been doing this, successfully, for years across many projects. What you can have though is that same thing happening automatically: when a reactive value gets updated, all the places where it's used get updated. As frameworks/libraries with fine-grained reactivity will show you, literally only those places will get updated. E.g., you won't need to re-run a full re-render on an entire component just because some piece of data got changed.
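A rough sketch of what "only those places" means, again using the proposal's draft Signal.State / Signal.Computed API (which may still change):

    const price = new Signal.State(10);
    const quantity = new Signal.State(2);
    const total = new Signal.Computed(() => price.get() * quantity.get());
    const shippingNote = new Signal.Computed(() => quantity.get() > 5 ? "bulk" : "standard");

    price.set(12);
    // Only `total` read `price`, so only `total` is marked stale and recomputed
    // on its next .get(); `shippingNote` keeps its cached value and does no work.
    total.get();        // 24 (recomputed)
    shippingNote.get(); // "standard" (cached)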
> Note how much work you need to do manually
In practice, I find the totality to be less work and easier, otherwise I'd use a framework.
> As frameworks/libraries with fine-grained reactivity will show you, literally only those places will get updated
Maybe take a look at how lit-html works.
Oh God why.
This is a horrible idea. Instead of building some dependency tracking into this niche feature of the language, JS should come up with a generic way of enabling framework developers to clean up resources without putting the burden on users of their API to manually do this.
Cool. My least favorite, cumbersome and brittle aspect of using Ember becoming a core aspect of JavaScript functionality.
No fucking thank you.
Not a bad thing at all… but this is the same mental model provided by the so-called atom-based state management systems in React. I believe Jotai is the most popular.
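For comparison, the Jotai flavour of the same idea looks roughly like this (going from memory of Jotai's API):

    import { atom, useAtom, useAtomValue } from "jotai";

    const countAtom = atom(0);                            // writable atom, akin to a state signal
    const doubleAtom = atom((get) => get(countAtom) * 2); // derived atom, akin to a computed signal

    function Counter() {
      const [count, setCount] = useAtom(countAtom);
      const double = useAtomValue(doubleAtom);
      return <button onClick={() => setCount(count + 1)}>{count} / {double}</button>;
    }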
Easier to port React to VHDL or Verilog?
Looks like Svelte 5, only somehow worse?
The proposal is very clear that it's not trying to be pretty, it's trying to be sensible and correct so that frameworks like Svelte can build on top of them and work interoperably with other libraries and frameworks.
“Lets make my random UI state tracking framework part of the JavaScript spec”
Is this like QML property binding?
I am amazed at the ability of the JS community to consistently make things more and more complicated.
FFS, there's already been something named signals since c. 1972.
Can't we just call javascript 'done'?
We keep adding things to the language, and never subtract anything, which means learning it as a language is getting harder and harder.
They're not adding these to the language. They're adding them to the (currently basically non-existent) standard library.
Given that almost every single framework under the sun (except React) has converged on signals, it makes sense to move that into the browser. This... this is how the web is supposed to work.
I dunno. I sometimes feel the same, but a whole ton of recent features have been incredible at cleaning up code.
?? (nullish coalescing) is my favourite.
I also want set functions and possibly a match statement thingy.
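For the record, here's the kind of thing ?? cleans up (config here is just a hypothetical object):

    const config = {};                 // hypothetical config object
    // Before: const port = (config.port !== undefined && config.port !== null) ? config.port : 8080;
    const port = config.port ?? 8080;  // falls back only when port is null/undefined, not when it's 0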
You may hereby consider `with` to be removed. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
I'd be much more comfortable if they were standardizing around React or something. What is the advantage of this being a built-in feature, compared to a lib for folks to actually use first-hand before proposing it upstream?
- They’re saying ”the community wants less boilerplate”
- They introduce what is effectively a black-box system
- The new system is expected to handle all application state
- They try to push it for frontend folks while also remarking that it would be useful for build systems
This has the same red flags the xz saga had.
Have we learnt nothing.
Lots of ”users” here vouching for the pattern and hoping it gets adopted. I bet this gets some nice damage control replies because there’s social engineering going on here right now and most seem to not be aware of it.