Folding Promises in JavaScript (codementor.io)

> How can we make it better? Let's start by removing the requirement for the identity value to always be a promise.
I challenge the view that allowing the identity value to be something other than a Promise is 'making it better'. Pointless abstraction is one of my pet peeves in this industry. This looks like it has gone from a fairly straightforward, if kludgy, piece of code to something far more complex. Why not just:
const listOfPromises = [...]
const result = Promise.all(listOfPromises).then(results => {
return results.reduce((acc, next) => acc + next)
})
?

Same reason as given in the Bluebird library documentation:
> Promise.reduce will start calling the reducer as soon as possible, this is why you might want to use it over Promise.all (which awaits for the entire array before you can call Array#reduce on it).
Whether this is ever necessary is another matter :)
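To make the quoted difference concrete, here is a minimal sketch of a sequential promise fold in the spirit of that behaviour (reduceP is a hypothetical name, not Bluebird's implementation): the reducer runs as each promise in order resolves, instead of waiting on the whole array first.

```javascript
// Fold over an array of promises, calling fn as each one (in order) resolves.
const reduceP = (fn, identity, promises) =>
  promises.reduce(
    (accP, p) => accP.then(acc => p.then(value => fn(acc, value))),
    Promise.resolve(identity)
  );

// reduceP((a, b) => a + b, 0, [Promise.resolve(1), Promise.resolve(2)])
// resolves to 3
```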
Is identical, doesn't use 'cool' reduce features, but is much easier to read in my opinion:

let accumulator = 0
for (const item of array) {
  const value = await item
  // your code here
}

Wouldn't this code only execute one promise at a time? I thought Promise.all allowed promises to be resolved in parallel.
Indeed. You most likely should do `await Promise.all` and then do the reduction.
If item is already a promise, and not a function returning a promise, they would be "executing" in parallel.
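This can be checked directly: promises begin their work when they are constructed, so a sequential await loop only serializes the observation of results (delay here is an assumed helper, not from any library).

```javascript
// A promise's timer starts when the promise is created, not when awaited.
const delay = (ms, v) => new Promise(res => setTimeout(() => res(v), ms));

const run = async () => {
  const start = Date.now();
  // All three timers start here, at construction time...
  const items = [delay(100, 1), delay(100, 2), delay(100, 3)];
  let accumulator = 0;
  for (const item of items) {
    accumulator += await item; // ...so awaiting in turn overlaps the waits
  }
  // Total elapsed time is roughly 100 ms, not 300 ms.
  return { accumulator, elapsed: Date.now() - start };
};
```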
Sorry, in the sense of being identical to the original code in the linked post (reduce), not the comment.
I suppose this might be useful in situations where you are querying one or more APIs with multiple requests and some will certainly return seconds before others.
This way you could have the same reducer handle the results and begin updating the UI as the results come in.
An example real-world app might be a price comparison tool or social media aggregator.
> I suppose this might be useful in situations where you are querying one or more APIs with multiple requests and some will certainly return seconds before others.
But it's still a serialized operation, so the parallelism is still limited. What's really needed is a "parallel reduce" using something like POSIX's select function, which would fold in an arbitrary order, consuming whichever promises are ready at any given step.
A nice use case for sure. Seems possible with some kind of iterator/generator wrapper rather than the mess in the OP, however.
The example isn't using Promise.reduce.
> Pointless abstraction is one of my pet peeves in this industry. This looks like it has gone from a fairly straightforward, if kludgy, piece of code to something far more complex. Why not just: [code]
Your example code works just fine for promises of course, but not all monads support a coalescing operation like Promise.all.
So even though this article only discusses folding over Promises, the core idea can be generalised to any monad type (such as Promise, Result, Option, or anything else).
> not all monads support a coalescing operation like Promise.all.
Actually, they do. Haskell calls it sequence :: (Traversable t, Monad m) => t (m a) -> m (t a) [1]
It works by consuming the structure outside the monad and rebuilding it inside. A possible implementation specialized for lists is:

sequence [] = return []
sequence (h:t) = do
  h' <- h
  t' <- sequence t
  return (h':t')

[1] http://hackage.haskell.org/package/base-4.10.0.0/docs/Prelud...

Sorry, I should've been more clear. You're right - you can absolutely build sequence out of the bind operation for any monad.
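Since .then plays the role of bind for promises, the same construction carries over to JavaScript (sequenceP is an assumed name):

```javascript
// sequence specialized to promises, built only from `then` (Promise's bind),
// mirroring the Haskell list implementation: empty list yields [], otherwise
// bind the head, recurse on the tail, and rebuild the list inside the monad.
const sequenceP = (promises) =>
  promises.length === 0
    ? Promise.resolve([])
    : promises[0].then(h =>
        sequenceP(promises.slice(1)).then(t => [h, ...t]));
```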
Promise.all is not just sequence, though; there are some additional subtleties to it. In particular the fail-fast behaviour:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
That's the kind of fundamental coalescing operation that you cannot implement with bind on plain monads.
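The difference is observable with timers (a small illustrative sketch): a bind-based sequence inspects promises strictly in order, while Promise.all surfaces the first rejection immediately.

```javascript
const run = async () => {
  const start = Date.now();
  const slow = new Promise(res => setTimeout(() => res(1), 500));
  const quickFail = new Promise((_, rej) =>
    setTimeout(() => rej(new Error('boom')), 10));
  try {
    // Rejects after ~10 ms, without waiting for `slow` to resolve;
    // a sequential bind-based fold would not see the failure until
    // it reached quickFail in order.
    await Promise.all([slow, quickFail]);
  } catch (e) {
    return { message: e.message, elapsed: Date.now() - start };
  }
};
```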
I don't think this is very well written. It doesn't start with any motivating problem, it introduces terms (functor) without defining them, and a lot of what is discussed doesn't apply to solving the problem.
> Folding Promises in JavaScript
Or: how to make simple things complex and turn a codebase into a complete puzzle for those who come after you?
I feel nothing has improved code readability like the recent mainstreaming of map/filter/fold/reduce and "const all the things". This type of code is so easy to follow, reason about and trivial to debug at every step, once you internalize the few primitive functions.
I don't think you need to necessarily memorize these transformation names, but writing these types of functions is all I seem to be doing these days, transforming one thing into another line for line.
I feel the code I wrote while on this bandwagon is the hardest to understand for others, and even for me, today.
Just write your transformations inline and go work on the next feature.

pullAllBy(
  pluck(things, 'bar')
    .map(compose(xor, lol, rofl))
    .reduce(differenceWith('id'))
)

The trick is to break that (on the dots) into three const assignments with descriptive names. Practically self-documenting, easily debuggable.
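A concrete toy version of that split (the data and transforms here are placeholders, since xor/lol/rofl are joke names): each intermediate step gets a descriptive name.

```javascript
const things = [{ bar: 1 }, { bar: 2 }, { bar: 3 }];

const bars = things.map(t => t.bar);                  // the pluck step
const doubled = bars.map(n => n * 2);                 // the composed transform
const total = doubled.reduce((a, b) => a + b, 0);     // the fold

// total === 12
```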
If you don't use the builtin transformations, you'll just end up re-implementing them, poorly. And adding to the cognitive overhead with new concepts. And I have to read your code with a fine-toothed comb to ensure it's really side-effect free. I don't advocate turning everything into a named function as in your example though, short one-off functions should all be inline IMO.
I feel like the functional code you wrote was written to intentionally obfuscate what is being done. For instance, composing a bunch of methods inline doesn't have any utility unless you define what compose(xor, lol, rofl) really means.
Ideally this is how it should be written for maximum readability.
The code could be written equally well without the pipe operator, but the proposal to introduce it is in the works [1].

things
  |> pluck('bar')
  |> map(xor)
  |> map(lol)
  |> map(rofl)
  |> reduce(differenceWith('id'))

Here it is using lodash (not even lodash-fp), and this is going to do a single for loop when executing, because this is lazy.
It's no less readable than the code you'd write using unfolded transformations.

_.chain(things)
  .pluck('bar')
  .map(_.xor)
  .map(_.lol)
  .map(_.rofl)
  .reduce(_.differenceWith('id'))
  .value();

I agree, but you shouldn't map 3 times when you can map once over the data.
things
  |> pluck('bar')
  |> map(compose(rofl, lol, xor))
  |> reduce(differenceWith('id'))

I made my case against the compose in the post.
Composing xor, rofl, and lol isn't any better (especially in terms of readability) than the individual maps.
What would be better is this:
const makeHilarious = compose(rofl, lol, xor);

things
  |> pluck('bar')
  |> map(makeHilarious)
  |> reduce(differenceWith('id'))

This means that to change this code you first have to look for the makeHilarious definition, then the definitions of rofl, lol and xor, then figure out what they all do separately and together, and whether you can change them without breaking anything else in your application.
This is code that is easy to understand and safe to change. The maps could be combined into one function body if it's convenient.

things
  .map(thing => thing.bar)
  .map(thing => {
    // whatever happens in rofl
    // whatever happens in lol
  })
  .reduce((acc, thing) => {
    // more stuff
  }, {})

I anticipated that makeHilarious is a method used more than once; even if it is not, once you look at its definition, you understand the intent behind "xor -> rofl -> lol".
It is one thing to quickly be able to understand that the person is doing a xor, then a rofl, and then a lol on each element of an array, and a whole other thing to understand what the combination of these three actions over an array means. The Python school of "code is read more than it is written" heavily stresses how easily understandable code should be the first time someone reads it, but not whether it's easy to reason about.
The beauty of declarative-style programming isn't that you get more readable code immediately, but rather that once you understand the vocabulary, it becomes easy to understand and reason about the code.
For instance, imagine reading a novel which is written like this:
"After Jack was done from the place where he went to do things for money everyday, he entered an establishment which served drinks that get you inebriated for money. This establishment was one he frequented regularly and preferred it over the others. He asked the man behind the counter for a wheat fermented brewed drink. After putting the drink to his lips and pouring it in to his mouth, he felt a sense of calmness enter his mind. It pushed all the thoughts which occupied his mind away, as he earlier desired before entering this establishment."
As opposed to:
"Jack really needed a drink after a hard day at work. He went to his favorite pub and ordered his favorite beer. After finishing the pint, he finally felt relaxed."
The Python philosophy (which permeates the imperative world) is to describe everything in the simplest possible terms, just in case there are people who may not understand what work, pub, beer, bartender, and relaxed mean. But this just gets in the way of understanding the actual purpose of the code.
This is at least the basic philosophy behind not using for loops everywhere.
The nice thing is that you can easily split that line up as much as makes sense, should you decide you need to access some intermediate form of the data. And you can use the variable names as a comment that explains what that chunk of transformations represents.

So someone reading over it can kind of skim down the left side and follow what's happening, and scan to the right if they need to understand some part in detail.
If only you could still recognize it as a transformation when it is inline.
These operations (map/reduce/compose/flatMap) are universal: they're in Java, Python, C#, OCaml, Haskell, Lisp, Ruby, JavaScript...
Complaining about learning them is like complaining about for loops. They just exist.
Just because some people are more familiar with for loops than with map doesn't mean that the more universal, immutable, expression-based solutions are not widely familiar and easy to understand for programmers coming from other languages.
Readability is subjective.
I've been working with JavaScript since February and I want to give you a hug.
I can't quite understand the difference between endomorphism ("input and output of the transformer must be from the same category") and homomorphism ("structure preserving transformation. We always stay in the same category"). Can someone help?
Homomorphisms are structure-preserving mappings between different types (called “category” in the post, in usual terminology, these would be »objects« in a »category«, though). Endomorphisms are special homomorphisms, mapping from a single type (»object«) to the same type (»object«).
It is indeed very unfortunate that the article conflates terminology.
I believe homomorphism is a subset of endomorphism.
So a function that turns an array into another array of different length would be endomorphic (since it maintains the same type), but not homomorphic since it has a different structure (a different set of keys).
The other way around. A homomorphism is a structure-preserving map between two arbitrary objects, whereas an endomorphism is a homomorphism where the source and target objects coincide.
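A concrete pair of examples in the article's JavaScript setting: string length is a monoid homomorphism from (strings, concatenation, '') to (numbers, +, 0), while doubling is an endomorphism on numbers.

```javascript
// Homomorphism between two *different* monoids: it preserves the structure,
// i.e. length(s + t) === length(s) + length(t), and length('') === 0.
const length = s => s.length;

// Endomorphism: source and target coincide (numbers to numbers).
const double = n => n * 2;
```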
Endomorphism has less implied structure. Lots of dumb things are endomorphisms. Homomorphism implies "structure preservation" which can make it more specific.
I'm surprised to read this coming from you.
It seems "endomorphism" is used both ways (in, presumably, different contexts).
https://ncatlab.org/nlab/show/endomorphism
I think "endomorphism is a homomorphism ..." is more common, but notably is not the usage in Haskell (https://hackage.haskell.org/package/base/docs/Data-Monoid.ht...)
Of course, Haskell's `Endo` is a type constructor for `Hask`-endomorphisms, but more interesting categories exist. Tell me with a straight face ring and field endomorphisms aren't interesting.
With async/await this can become:
const reduceP = async (fn, identity, listP) => {
  const values = await Promise.all(listP)
  return values.reduce(fn, identity)
}
The whole thing feels like a synthetic and overcomplicated example, though. In practice I'm sure I'd just write:

let total = 0
while (listP.length > 0) {
  total += await listP.pop()
}

That code does the same thing as https://news.ycombinator.com/item?id=15302465, but not the same thing as the code in the article.
I don't know much about these concepts but isn't `const objToArray = ({ a }) => [a];` losing data, that being the key of the value in the object? I'm asking because it says that "Isomorphism is a pair of transformations between two categories with no data loss".
In any case, this is very helpful, thanks for writing/sharing.
It's a pair of transformations between [A] and { a: A }, not between arbitrary arrays and objects.
As long as you know what the transformation is, you can convert between them without data loss.
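Spelled out, the pair discussed above round-trips values of exactly those shapes:

```javascript
// An isomorphism between values of shape [A] and { a: A }.
const arrayToObj = ([a]) => ({ a });
const objToArray = ({ a }) => [a];

// Both compositions are identities on values of those shapes:
const roundTrippedArray = objToArray(arrayToObj([42]));    // [42]
const roundTrippedObj = arrayToObj(objToArray({ a: 42 })); // { a: 42 }
```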
EDIT: see paavohtl's comment: I hadn't paid attention to the types, dumb me.
You're right, because for a pair of functions f and g, you have an isomorphism if:

f(g(x)) == x
g(f(x)) == x

for every x. However, here of course

(([a]) => ({ a }))( (({ a }) => [a])({ key: 'data' }) )

is equal to

{ a: 'data' }

The OP doesn't quite master what he's talking about…

The pair of functions form an isomorphism. You have these two laws:

forall x. objToArray(arrayToObj(x)) == x
forall x. arrayToObj(objToArray(x)) == x

Yeah, that was my thought as well. It seems like what you need is something like:

const objToArray = Object.entries
const arrayToObj = (a) => a.reduce((a, [k, v]) => ((a[k] = v), a), {})

arrayToObj(objToArray({ foo: 'bar' })) // { foo: 'bar' }
"Programs must be written for people to read, and only incidentally for machines to execute." - Harold Abelson
The author mentions the library Bluebird, which I think is a fantastic library. The 'mapSeries' method it offers is also very useful when iterating over an array of values that need to be 'promisified' and mapped in the given order. You can even set 'concurrency' as an option, which puts a limit on the concurrent promises that can run (great for reducing API load).
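Bluebird ships this behaviour out of the box; if the dependency is unwanted, a rough vanilla sketch of a concurrency-limited map might look like this (mapWithConcurrency is a made-up name, not Bluebird's implementation):

```javascript
const mapWithConcurrency = async (items, fn, limit) => {
  const results = new Array(items.length);
  let next = 0;
  // Each worker repeatedly claims the next unprocessed index until
  // the input is exhausted, so at most `limit` calls run at once.
  const worker = async () => {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i], i);
    }
  };
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    worker
  );
  await Promise.all(workers);
  return results;
};
```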
I've written a JavaScript library to deal with folding and mapping recurring promises (i.e. promises that resolve to a value, part of which contains a clue to the "next" promise).
With async (it’s just monads!):
listOfPromises.reduce(
  async (m, n) => await m + await n,
  0,
)