Integration with Firebase Realtime Database
Note:
- If you’ve read the previous article, you may notice that the actions structure is a bit different; that’s because we’ve normalized it to a standard structure.
- The attached code is type-less for readability; in the actual project we use Flow as a static type checker.
Update and remove data
Updating and removing data is relatively simple.
Let’s have a look at the update example:
Actions:
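The original gist isn’t embedded here, so here’s a minimal sketch of what the three generic actions might look like (the action-type constants follow the text; the payload and meta field names are assumptions):

```javascript
// Three pure, generic update action creators (hypothetical sketch).
// payload carries the request parameters; meta.type tells the saga
// which getter/handler to use.
const firebaseUpdateRequested = (payload, metaType) => ({
  type: 'FIREBASE_UPDATE_REQUESTED',
  payload,
  meta: { type: metaType },
});

const firebaseUpdateFulfilled = (payload, metaType) => ({
  type: 'FIREBASE_UPDATE_FULFILLED',
  payload,
  meta: { type: metaType },
});

const firebaseUpdateRejected = (error, metaType) => ({
  type: 'FIREBASE_UPDATE_REJECTED',
  payload: error,
  error: true,
  meta: { type: metaType },
});
```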
Specific wrapper function:
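A sketch of the specific wrapper described in the text; the generic action creator is inlined so the snippet stands on its own, and the parameter names (uid, contactId, contact) are assumptions:

```javascript
// Generic creator, inlined here for self-containment.
const firebaseUpdateRequested = (payload, metaType) => ({
  type: 'FIREBASE_UPDATE_REQUESTED',
  payload,
  meta: { type: metaType },
});

// The wrapper just packs its parameters into the payload
// and fixes the meta type for the saga.
const updateUserContactRequested = (uid, contactId, contact) =>
  firebaseUpdateRequested({ uid, contactId, contact }, 'userContacts');
```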
As before, we have the same 3 pure general actions.
The wrapper function updateUserContactRequested in this example receives the relevant parameters and simply calls the action creator of FIREBASE_UPDATE_REQUESTED with a payload object (containing the parameters) and the meta type for the saga processing.
As one can tell, no async operations were employed!
The reducer stays the same as well:
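Since the gist isn’t embedded here, a hedged sketch of what such a reducer could look like; the state shape (inProgress/error flags) is an assumption, not the article’s exact code:

```javascript
// Hypothetical reducer for the update flow.
const initialState = { inProgress: false, error: null };

const updateReducer = (state = initialState, action = {}) => {
  switch (action.type) {
    case 'FIREBASE_UPDATE_REQUESTED':
      return { ...state, inProgress: true, error: null };
    case 'FIREBASE_UPDATE_FULFILLED':
      return { ...state, inProgress: false };
    case 'FIREBASE_UPDATE_REJECTED':
      return { ...state, inProgress: false, error: action.payload };
    default:
      return state;
  }
};
```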
And now to the saga, where the magic happens:
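A self-contained sketch of the saga; the effect creators are simplified inline stand-ins (in a real app they are imported from redux-saga/effects), and the getter map and firebaseUpdate stub are hypothetical:

```javascript
// Simplified stand-ins for redux-saga effect creators.
const take = (pattern) => ({ TAKE: pattern });
const fork = (fn, ...args) => ({ FORK: { fn, args } });
const call = (fn, ...args) => ({ CALL: { fn, args } });
const put = (action) => ({ PUT: action });

// Hypothetical stand-in for the real Firebase write (e.g. ref.update(updates)).
const firebaseUpdate = (updates) => Promise.resolve(updates);

// Hypothetical getter map: meta type -> function building the
// multi-path update object from the action payload.
const updateGetters = {
  userContacts: ({ uid, contactId, contact }) => ({
    [`users/${uid}/contacts/${contactId}`]: contact,
  }),
};

// Runs in its own task: performs the async write, then reports the result.
function* updateItems(updates, metaType) {
  try {
    yield call(firebaseUpdate, updates);
    yield put({ type: 'FIREBASE_UPDATE_FULFILLED', meta: { type: metaType } });
  } catch (error) {
    yield put({
      type: 'FIREBASE_UPDATE_REJECTED',
      payload: error,
      error: true,
      meta: { type: metaType },
    });
  }
}

// Entry point: waits forever for update requests, forks a worker per request.
function* watchUpdateRequested() {
  while (true) {
    const action = yield take('FIREBASE_UPDATE_REQUESTED');
    const getUpdates = updateGetters[action.meta.type];
    if (getUpdates) {
      // fork is non-blocking, so the watcher keeps listening immediately
      yield fork(updateItems, getUpdates(action.payload), action.meta.type);
    }
  }
}
```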
The generator function watchUpdateRequested is the entry point. It continuously waits for FIREBASE_UPDATE_REQUESTED actions. When it gets one, it calls the appropriate getter (selected by the meta type) to retrieve the relevant updates, and forks the updateItems generator with those updates. updateItems, which runs in a different task, performs the asynchronous update operation and dispatches a FULFILLED or REJECTED action for the reducer.
Pay attention that we used the fork effect to perform the asynchronous operations in a different task (since fork is a non-blocking effect), in order to let watchUpdateRequested keep listening for other requests.
If we didn’t do so (and used the call effect, for example), we would lose update requests.
The principle behind remove is more of the same. You can take a look at the remove patterns in our sample repository.
Tests
An awesome benefit of employing redux-saga is how straightforward and easy the tests are! Let’s see an example of a test:
The action’s wrapper function test is really straightforward:
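A sketch of what such a test might look like; the wrapper is inlined so the snippet runs on its own (in the project, the test would use a test framework and imports):

```javascript
// Inlined wrapper under test (hypothetical parameter names).
const updateUserContactRequested = (uid, contactId, contact) => ({
  type: 'FIREBASE_UPDATE_REQUESTED',
  payload: { uid, contactId, contact },
  meta: { type: 'userContacts' },
});

// The test: just check the wrapper builds the right action object.
const testUpdateUserContactRequested = () => {
  const action = updateUserContactRequested('u1', 'c1', { name: 'Dana' });
  if (action.type !== 'FIREBASE_UPDATE_REQUESTED') throw new Error('wrong type');
  if (action.meta.type !== 'userContacts') throw new Error('wrong meta type');
  if (action.payload.contactId !== 'c1') throw new Error('wrong payload');
};
testUpdateUserContactRequested();
```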
We only make sure that the wrapper creates the appropriate action with the appropriate payload.
Now, let’s take a look at the sagas tests:
We can see how the use of Generators and Effects simplifies the tests.
Let’s take watchUpdateRequested test as an example:
Every time we perform generator.next() we get to the next yield statement and verify what we’ve got.
As you can see, the generator lets us push values into it, which makes the tests really easy.
In addition, the separation between Effect creation and Effect execution makes it possible to examine which Effect object was returned, without executing the effect itself.
For example, when we want to ensure that a fork of updateItems was called, all we have to check is that fork(sagas.updateItems, updates, action.meta.type) was returned from the generator; updateItems itself is not run. In order to test updateItems, we’ll write a dedicated test for it.
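To make this concrete, here is a self-contained sketch of stepping a watcher generator; the effect stand-ins, saga, and getter are all hypothetical and inlined, but the pattern (push an action in via next() and compare the returned Effect object) is the one described above:

```javascript
// Stand-ins: effects are plain objects, so equality checks are enough.
const take = (pattern) => ({ TAKE: pattern });
const fork = (fn, ...args) => ({ FORK: { fn, args } });

function* updateItems(updates, metaType) { /* tested separately */ }

const getUpdates = (payload) => ({ [`users/${payload.uid}`]: payload.data });

function* watchUpdateRequested() {
  while (true) {
    const action = yield take('FIREBASE_UPDATE_REQUESTED');
    yield fork(updateItems, getUpdates(action.payload), action.meta.type);
  }
}

// The "test": each next() advances the generator to its next yield.
const generator = watchUpdateRequested();

// 1st yield: the watcher waits for the request action.
const takeEffect = generator.next().value;
if (takeEffect.TAKE !== 'FIREBASE_UPDATE_REQUESTED') throw new Error('expected take');

// Push a fake action in; the 2nd yield must be a fork of updateItems.
const action = { payload: { uid: 'u1', data: 1 }, meta: { type: 'userContacts' } };
const forkEffect = generator.next(action).value;
if (forkEffect.FORK.fn !== updateItems) throw new Error('expected fork of updateItems');
// Note: updateItems itself was never executed here.
```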
Fetch and listen to data
Let’s recall the problems we’ve dealt with in the previous article:
- Where do you set the listener?
- Where do you unset the listener?
- Where do we maintain all these open listeners, and how do we prevent duplicate listeners?
- Doing .on('value', callback) gets all messages on every change (even a change to a single message), which is wasteful.
- How do we report progress while fetching the data?
In this article we’re going to tackle problems 1–4 in a different way than we previously did.
Regarding problem number 5: it stays almost the same, since the reducers were barely affected.
The only exception is the Firebase Database reference: in the previous solution it was saved in the store, but it has been taken out of there and is now handled in a different way.
Let’s take a look at the listener actions:
Notice that all the action creators are “pure”, which means that they return plain JavaScript action objects rather than functions.
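A hedged sketch of what these pure listener action creators might look like (the action-type names and payload fields are assumptions based on the text):

```javascript
// Generic listener action creators (hypothetical sketch).
const firebaseListenRequested = (ref, metaType) => ({
  type: 'FIREBASE_LISTEN_REQUESTED',
  payload: { ref },
  meta: { type: metaType },
});

const firebaseListenFulfilled = (items, metaType) => ({
  type: 'FIREBASE_LISTEN_FULFILLED',
  payload: { items },
  meta: { type: metaType },
});

const firebaseListenChildAdded = (id, value, metaType) => ({
  type: 'FIREBASE_LISTEN_CHILD_ADDED',
  payload: { id, value },
  meta: { type: metaType },
});

const firebaseListenRemoved = (clearItems, metaType) => ({
  type: 'FIREBASE_LISTEN_REMOVED',
  payload: { clearItems },
  meta: { type: metaType },
});

const firebaseRemoveListenerRequested = (clearItems, metaType) => ({
  type: 'FIREBASE_REMOVE_LISTENER_REQUESTED',
  payload: { clearItems },
  meta: { type: metaType },
});
```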
As before, all the actions are generic, and in case we want to listen to a specific path in the database, we simply wrap it with a function. This time we also specify the relevant path in the wrapper function, and pass it to the generic action creator as a Firebase Database reference:
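A sketch of such a wrapper; firebase here is a tiny stub standing in for the initialized Firebase app so the snippet is self-contained, and the users/${uid}/contacts path is a hypothetical example:

```javascript
// Stub for the real Firebase app object (firebase.database().ref(path)).
const firebase = {
  database: () => ({ ref: (path) => ({ path }) }),
};

// Generic creator, inlined for self-containment.
const firebaseListenRequested = (ref, metaType) => ({
  type: 'FIREBASE_LISTEN_REQUESTED',
  payload: { ref },
  meta: { type: metaType },
});

// The wrapper fixes both the database path and the meta type.
const listenToUserContacts = (uid) =>
  firebaseListenRequested(
    firebase.database().ref(`users/${uid}/contacts`),
    'userContacts'
  );
```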
The solution to problems 1–4 lies within the sagas.
Before showing our solution, I’d like to mention that it uses a redux-saga abstraction called Event channels. Event channels allow communication with external event sources, queuing their external events and translating them into objects.
You can read more about saga channels here.
Listener sagas:
Similar to what we’ve seen in the update watcher example, everything starts with the generator function watchListener.
For each path that we want to listen to (and is represented by a meta type) we initialize an instance of watchListener with the appropriate meta type.
For example: watchListener is initialized with userContacts meta type in the rootSaga Generator function (where all the listeners should be initialized):
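A minimal sketch of such a rootSaga, with fork as an inline stand-in for the redux-saga effect creator and the watcher sagas stubbed out:

```javascript
// Stand-in for the redux-saga fork effect creator.
const fork = (fn, ...args) => ({ FORK: { fn, args } });

// Stubs for the watchers (their bodies are shown elsewhere in the article).
function* watchUpdateRequested() { /* the update watcher */ }
function* watchListener(metaType) { /* the listener state machine */ }

function* rootSaga() {
  yield fork(watchUpdateRequested);
  // one watchListener instance per listened path / meta type:
  yield fork(watchListener, 'userContacts');
}
```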
The function rootSaga is registered as the “run list” of the saga middleware.
The watchListener generator function is in fact a small state machine. This guarantees a clear flow and a consistent state of the listener. In addition, we can be sure that there are no duplicate instances of the same listener.
Here is watchListener state machine diagram, where M is the meta type which watchListener was initialized with:
When watchListener receives a Listen Request with meta type M, it runs getDataAndListenToChannel asynchronously (using the fork effect).
Then it waits for a listener-removal request or a new listen request of type M.
When it gets such a request, it cancels the task (using a cancel effect) and dispatches a Listener Removed action (using a put effect) to update the state (via the reducer).
Now, say it got a new LISTEN request: watchListener(M) will fork the getDataAndListenToChannel saga once again.
This way, we avoid duplicate listeners, and in case of a new listener request while listening, we’d just restart the listener.
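The state machine above can be sketched as follows; the effect creators are inline stand-ins for redux-saga/effects, and the action-type names are assumptions consistent with the text:

```javascript
// Stand-ins for redux-saga effect creators.
const take = (pattern) => ({ TAKE: pattern });
const fork = (fn, ...args) => ({ FORK: { fn, args } });
const cancel = (task) => ({ CANCEL: task });
const put = (action) => ({ PUT: action });

function* getDataAndListenToChannel(ref, metaType) { /* fetch once, then listen */ }

// One instance per meta type M: never more than one live listener for M.
function* watchListener(metaType) {
  while (true) {
    // state: waiting for a listen request of type M
    const listenRequest = yield take('FIREBASE_LISTEN_REQUESTED');
    if (listenRequest.meta.type !== metaType) continue;

    let task = yield fork(getDataAndListenToChannel, listenRequest.payload.ref, metaType);

    let listening = true;
    while (listening) {
      // state: listening; wait for a removal request or a *new* listen request
      const action = yield take([
        'FIREBASE_REMOVE_LISTENER_REQUESTED',
        'FIREBASE_LISTEN_REQUESTED',
      ]);
      if (action.meta.type !== metaType) continue;

      yield cancel(task);
      yield put({
        type: 'FIREBASE_LISTEN_REMOVED',
        payload: { clearItems: action.payload.clearItems === true },
        meta: { type: metaType },
      });

      if (action.type === 'FIREBASE_LISTEN_REQUESTED') {
        // new listen request while listening: restart the listener
        task = yield fork(getDataAndListenToChannel, action.payload.ref, metaType);
      } else {
        listening = false; // removal: go back to waiting for a listen request
      }
    }
  }
}
```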
Until now, we took care of problems 1, 2 and 3.
getDataAndListenToChannel retrieves the data in 2 stages:
- Reading the data from the path once.
- Listening to the path using a saga channel, in order to get updates on its children.
Notice that the channel is created first, then the data is read once, and then, before it starts to listen to the channel, it flushes the channel (via a flush effect) to avoid duplicate data.
In case of an error, or a task cancellation, it will get to the finally block and will close the channel.
This solves problem 4.
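A self-contained sketch of getDataAndListenToChannel following the steps above; the effect creators, the channel object, and getDataOnce (a stand-in for a one-time ref.once('value') read) are all simplified stubs:

```javascript
// Stand-ins for redux-saga effect creators.
const call = (fn, ...args) => ({ CALL: { fn, args } });
const put = (action) => ({ PUT: action });
const flush = (chan) => ({ FLUSH: chan });
const take = (chan) => ({ TAKE: chan });

// Stubs for the channel factory and the one-time read.
const createEventChannel = (ref) => ({
  ref,
  closed: false,
  close() { this.closed = true; },
});
const getDataOnce = (ref) => Promise.resolve({});

function* getDataAndListenToChannel(ref, metaType) {
  // 1. open the channel first, so no child event slips by unobserved
  const chan = createEventChannel(ref);
  try {
    // 2. read the whole path once
    const value = yield call(getDataOnce, ref);
    yield put({ type: 'FIREBASE_LISTEN_FULFILLED', payload: { value }, meta: { type: metaType } });
    // 3. drop events queued while reading, to avoid duplicate data
    yield flush(chan);
    // 4. from now on, process incremental child events
    while (true) {
      const data = yield take(chan);
      yield put({ type: 'FIREBASE_LISTEN_CHILD_ADDED', payload: data, meta: { type: metaType } });
    }
  } finally {
    // on error or task cancellation: close the channel
    chan.close();
  }
}
```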
The channel itself is created using the createEventChannel function. We configured its buffer to expand in case of multiple messages in the queue, to avoid losing events.
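Here is a hedged, self-contained sketch of the idea: a hand-rolled channel with a growing queue stands in for redux-saga’s eventChannel with buffers.expanding(), and a stub mimics a Firebase reference’s .on()/.off():

```javascript
// Hand-rolled channel: the queue grows as needed, so no event is lost.
const createEventChannel = (ref) => {
  const queue = [];
  const listener = (snapshot) => {
    queue.push({ id: snapshot.key, value: snapshot.val() });
  };
  ref.on('child_added', listener);
  return {
    take: () => queue.shift(),     // next queued event (undefined if empty)
    flush: () => queue.splice(0),  // drain and return everything queued
    close: () => ref.off('child_added', listener),
  };
};

// Hypothetical stub mimicking a Firebase Database reference.
const makeStubRef = () => {
  const listeners = [];
  return {
    on: (event, cb) => listeners.push(cb),
    off: (event, cb) => listeners.splice(listeners.indexOf(cb), 1),
    emit: (snapshot) => listeners.forEach((cb) => cb(snapshot)),
  };
};
```

The expanding-buffer behavior is what matters here: events that arrive while the saga is busy (e.g. during the one-time read) are queued rather than dropped.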