One personal project to add a callout component to Obsidian that quickly spiraled out of control. No regrets though.
Around a year ago I switched from Notion to Obsidian for my note-taking. I did miss quite a few features from Notion, but fortunately Obsidian is extensible, and with what? With good old JavaScript! After installing a bunch of plugins to make it usable for me, I still felt that some parts were missing. Not a problem, I can add them myself, I thought. One thing led to another, and soon I found myself writing a plugin (I called it Emera) to allow using React components inside Obsidian. A bit later I extended it to support executing arbitrary JS, not just React components.
With that, what I was building was basically a notebook inside Obsidian (somewhat similar to, for example, Jupyter). I stopped using Obsidian shortly after and forgot about this project. But I revisited it recently and thought that since it was fun to work on, it would be a shame not to share some more details about its development.
Even though I work for a notebook company now (and I suspect this project did help me get this job 😛), I’m far from the most knowledgeable person on the topic, and my implementation is far from ideal. Moreover, while preparing this post, I found quite a few new ridiculous bugs. I want this to be more of an experience-sharing piece than a polished tutorial on how to make notebooks, so I intentionally didn’t polish the code before writing this article. I still think there is value in sharing this experience: I had a lot of fun making this project, and I hope to share at least some of it with you.
Oh, and here is the GitHub repository if you want to check the code or try the plugin yourself. Because the repository was archived, the plugin was pulled from the Obsidian store, so the only option is to install it from source.
How it works
Emera allows users to embed JavaScript or React components directly into notes using Markdown code blocks with special language identifiers. For example, adding a block like this will render red text with the name of the current vault.
```emera
<div style="color: red;">This is a note inside the {app.vault.getName()} vault!</div>
```

You can also render a React component inline by using inline code surrounded by backticks:
Current vault name is `emera:<span>{app.vault.getName()}</span>`.

Or you can evaluate JS directly using the emjs prefix. The result will be displayed on the page instead of the code.
Current vault name is `emjs:app.vault.getName()`.

And lastly, you can use a multi-line emjs block. Unlike emera blocks and inline code, this type of block can export values that will be available to subsequent code on the page.
```emjs
export const username = 'Adam';
export const age = 42;
// You can now use {username} and {age} in other blocks.
// You can also use await
export const userDetails = await fetchUserDetails(username);
```

To avoid repeating the same code over and over, reusable code and components can be stored in normal JavaScript (or TypeScript) files, and Emera will load them on start and make any exported variable from these files available to code on pages. In Emera, this is called a “user module”.
Sounds complicated? Just watch the video where I show how everything works from a user perspective.
The plugin works in both reading and live preview modes, as well as on mobile. There are also a couple of quality-of-life things, like built-in storage that can be used by React components or code blocks, and a special shorthand for components that render Markdown content. But today we’ll mostly be talking about core features.
Before we go into technical details, I also wanted to mention some of the limitations Emera has. We’ll go deeper into each of them in relevant sections of the article, but in short:
- You can’t use external modules from NPM (you can import from a URL though).
- You can’t use built-in Node modules like `fs`.
- Emera doesn’t provide a proper reactive API wrapper for Obsidian. E.g., you can’t refresh your component when a certain frontmatter property of the page changes.
What happens under the hood
All the work happens in one of two stages: either on plugin load or when rendering the page. On plugin load, Emera will register the Markdown post processor and CodeMirror extension, populate the root scope with global variables, load the user module, bundle and transpile it, and store its exports in the root scope.
Depending on the mode (reading or live preview), Obsidian will invoke either the Markdown post processor or the CodeMirror extension, which will handle execution and rendering of Emera blocks on the page. Processing is similar for both modes. Emera will create a scope for the page (a descendant of the root scope), collect all blocks from the page, and then process them sequentially. Each block will receive its own scope, which is a descendant of the previous JS block’s scope (or the page scope if there are no preceding JS blocks), and its code will be transpiled and executed. If there are preceding async JS blocks that haven’t finished executing yet, the current block’s execution will be delayed until they are done. Depending on the block type, the result will either be rendered directly or the exported variables will be added to the scope and made available to the following blocks.
That’s how Emera works in very broad strokes. I know it’s very packed and might be hard to understand, but don’t worry. We’ll go into each part of this process in much more detail now.
Bundling and transpilation
Emera is not the first plugin to allow using React in Obsidian. There is obsidian-react-components, which was an early inspiration for Emera. However, that plugin has a major inconvenience: you have to write your components directly in Markdown files. I wanted to be able to write components (and any other reusable code) in proper JS/TS files, using my favorite editor, with the ability to organize code across multiple files.
There are probably multiple ways to achieve that, but a noticeable limitation Emera had to work around is… browser. Obsidian for desktop works on top of Electron, which gives you access to both a browser-like environment and Node APIs. But on mobile, it’s only a browser. And Emera absolutely had to work on mobile.
These days browsers support native ESM imports, but because of how Obsidian plugins are structured, Emera couldn’t use them. Instead, all user code files had to be combined into one big JavaScript string and then dynamically executed inside the plugin. This is exactly what bundling does: it starts with an entry point file, collects all the files it depends on, and bundles them together. There is no shortage of bundlers that can do this: there is, of course, Webpack, as well as esbuild, Parcel, Rspack, and many others. But don’t forget, our plugin should work inside a browser, and bundling JS inside a browser environment isn’t that common of a use case, so most bundlers don’t support it. Fortunately for me, there is Rollup, which works in a browser. All I had to do to make it work for Emera was to write a small plugin to glue the Obsidian vault filesystem and Rollup together. There were also two more small plugins, to glue Babel into Rollup and to allow importing CSS files directly, but they are quite simple.
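To illustrate the idea, here is a rough sketch of what such a “virtual FS” Rollup plugin could look like: it resolves relative imports against the vault and reads files through Obsidian’s vault adapter. This is a simplified illustration under my own assumptions (extension handling, index files, CSS, etc. are all elided), not Emera’s actual plugin:

```ts
import type { Plugin } from "rollup";
import { App, normalizePath } from "obsidian";

export function obsidianVaultPlugin(app: App, userModuleRoot: string): Plugin {
  return {
    name: "obsidian-vault-fs",
    resolveId(source, importer) {
      if (/^https?:\/\//.test(source)) return false; // remote imports stay external
      if (!source.startsWith(".")) return null;      // bare imports are handled elsewhere
      const dir = importer ? importer.split("/").slice(0, -1).join("/") : userModuleRoot;
      const resolved = normalizePath(`${dir}/${source}`);
      // Naively assume TypeScript when no extension is given.
      return /\.[jt]sx?$/.test(resolved) ? resolved : `${resolved}.ts`;
    },
    async load(id) {
      // Read the module's source through Obsidian's vault adapter
      // (works the same on desktop and mobile).
      return await app.vault.adapter.read(id);
    },
  };
}
```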
Bundling would be enough if we worked only with JavaScript files, but since Emera has to handle JSX and TypeScript, we need an extra step during bundling to transform them into plain JS code that can be executed in a browser. For this I used Babel. It’s not the most popular choice these days, but it works in a browser (unlike many other similar tools!) and gets the job done; that’s all I care about.
JSX and TypeScript are handled by Babel’s built-in plugins, but Emera also adds two custom Babel plugins: one rewrites access to unknown variables to go through the scope abstraction, and the other rewrites imports. We’ll cover scopes soon; for now, let’s take a look at the import rewriter.
When you use import in code, you could be importing three different kinds of modules, all of which work very differently in Emera. You could be importing local files:
```js
import { helperFunc } from "./helpers";
```

In this case, Emera has to load that file and bundle it together with the code that imports it. This is handled by Rollup and our custom virtual FS plugin.
You could be importing a remote module:
```js
import Confetti from "https://cdn.skypack.dev/canvas-confetti";
```

In this case the browser can handle the import, so we don’t need to do anything.
While writing this article, I realized that Rollup in Emera is misconfigured (it’s missing the `external` property that excludes HTTP imports from bundling) and can’t handle remote imports in the user module. But you can still use them in code blocks in Obsidian.
And finally, you could be importing an installed module:
```js
import Confetti from "canvas-confetti";
```

This one is troublesome. The browser can’t handle it, so it’s on us: we have to bundle it. However, we can’t do that. I mean, technically we could, but that would require installing the module directly into your vault (hello, 999 GB node_modules folder!), which will quickly become a mess. Instead, in Emera, I decided to provide only a couple of built-in modules, which are bundled with the plugin itself and then made available to user code. More specifically, these are:

- `emera` – utility functions and APIs provided by Emera.
- `react` and `react-dom` – critical to make the components idea work at all.
- `obsidian` – the Obsidian module available to plugins; Emera just makes it available to user code as well.
- `jotai` and `jotai/utils` – state management library, see Jotai docs.
- `framer-motion` – animation library, see Motion docs.
But we still have to do something with imports (remember, the browser can’t handle them on its own). That’s where a custom Babel plugin comes in. Babel parses JavaScript into an abstract syntax tree (AST), passes it through a bunch of plugins that can modify it, and then compiles the resulting AST back into code. By implementing a custom plugin, we can rewrite those imports from
```js
import { motion } from "framer-motion";
```

into something like
```js
const { motion } = window._emeraModules["framer-motion"];
```

The `_emeraModules` global variable is provided by the Emera plugin; it’s just a simple object with the exposed modules, something like this:

```ts
import * as obsidian from "obsidian";
import * as react from "react";
import * as fm from "framer-motion";
import * as jotai from "jotai";
import * as jotaiUtils from "jotai/utils";
import * as reactDom from "react-dom";
import * as jsxRuntime from "react/jsx-runtime";
import * as emera from "./emera-module";

window["_emeraModules"] = new Proxy(
  {
    emera,
    react,
    obsidian,
    jotai,
    "jotai/utils": jotaiUtils,
    "react/jsx-runtime": jsxRuntime,
    "react-dom": reactDom,
    "framer-motion": fm,
  },
  {
    get(target, p: string, receiver) {
      const module = Reflect.get(target, p, receiver);
      if (!module) {
        // Handle import of unknown module
        throw new Error(
          `You're trying to import module ${p}, but it isn't available. ` +
            "You can only use a small number of pre-selected modules with Emera, refer to " +
            "the documentation to see which modules are available",
        );
      }
      return module;
    },
  },
);
```

I won’t go into details about working with the AST (it’s not that interesting, really); all I want to say is that it was tough. I couldn’t find proper docs for it, and AI wasn’t good enough at that time to handle it for me either, so I had to go in small steps, through trial and error, inspecting what AST different code produces and updating the Babel plugin to handle it all properly. But I still think it’s very cool to be able to juggle code as you want, to both provide better DX for users (they can use a familiar import instead of this window._emeraModules contraption) and make the code compatible with the limitations of the platform at the same time.
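Still, to give a taste of what this kind of AST juggling looks like, here is a rough sketch of an import-rewriting Babel plugin. It is an illustration built on my own assumptions (and it skips plenty of edge cases), not Emera’s actual plugin:

```ts
// Sketch: rewrite `import { motion } from "framer-motion"` into
// `const { motion } = window._emeraModules["framer-motion"]`.
export function rewriteImportsPlugin({ types: t }) {
  return {
    visitor: {
      ImportDeclaration(path) {
        const source = path.node.source.value;
        // Remote (URL) imports are left for the browser to handle.
        if (/^https?:\/\//.test(source)) return;

        // window._emeraModules["<module name>"]
        const moduleExpr = t.memberExpression(
          t.memberExpression(t.identifier("window"), t.identifier("_emeraModules")),
          t.stringLiteral(source),
          true,
        );

        const declarations = [];
        const destructured = [];
        for (const spec of path.node.specifiers) {
          if (t.isImportDefaultSpecifier(spec)) {
            // import Foo from "..." -> const Foo = window._emeraModules["..."].default
            declarations.push(t.variableDeclarator(
              spec.local,
              t.memberExpression(t.cloneNode(moduleExpr), t.identifier("default")),
            ));
          } else if (t.isImportNamespaceSpecifier(spec)) {
            // import * as Foo from "..." -> const Foo = window._emeraModules["..."]
            declarations.push(t.variableDeclarator(spec.local, t.cloneNode(moduleExpr)));
          } else {
            // import { foo } from "..." -> destructure `foo`
            destructured.push(t.objectProperty(spec.imported, spec.local));
          }
        }
        if (destructured.length) {
          declarations.push(t.variableDeclarator(t.objectPattern(destructured), t.cloneNode(moduleExpr)));
        }
        if (!declarations.length) {
          path.remove(); // bare `import "..."` (side effects only): drop it in this sketch
          return;
        }
        path.replaceWith(t.variableDeclaration("const", declarations));
      },
    },
  };
}
```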
Executing the code
Once code is transpiled and bundled, it has to be executed. There are at least a couple of ways to execute arbitrary code in a browser environment and get results back:
- Old, but not good `eval`.
- `Function` constructor.
- `import` from Data URL.
- And probably a bunch of other more obscure ways.
Please note that executing arbitrary code safely is a totally different task, and I didn’t even try to solve it. Emera is targeted at developers (i.e., people who can read the code and know not to run random untrusted code), so it will happily execute anything you throw at it.
For our use case, both eval and the Function constructor are not ideal because of how they handle scope. When you execute code with them, it gets access to the parent scope, which might lead to weird bugs.
```js
const x = 42;
const result = eval(`x * 2`); // = 84
const result2 = new Function("return x * 2")(); // = 84
```

Instead of failing, this code will execute successfully because x is present in the parent scope. In the context of Emera, the parent scope would be the internals of the plugin, and it doesn’t make much sense for user code to have access to those internals. It might lead to weird bugs that will be hard to debug. eval can also pollute the parent scope by, for example, defining functions. In that case, user code has the potential to mess with plugin internals and cause even more obscure bugs.
```js
eval("function yolo() { return 42 }");
yolo(); // This will actually return 42
```

Moreover, both of those approaches don’t work with ESM exports and imports.
```js
const moduleCode = `
export const answer = 42;
`;
eval(moduleCode); // Uncaught SyntaxError: Unexpected token 'export'
```

And ESM exports are a key mechanism in both loading the user module and sharing variables between blocks on the page. To support them, we have to use the import function. It’s intended for importing modules (remote or local), but if we encode our code in a data URL, the browser will happily treat it as a module and import it.
```js
const code = `
const privateVar = 'foo';
export const publicVar = 'bar';
`;
const encodedCode = `data:text/javascript;charset=utf-8,${encodeURIComponent(code)}`;
const module = await import(encodedCode);
module.publicVar; // => 'bar'
module.privateVar; // => undefined
```

With this we also get better scoping (the module doesn’t have access to the parent scope and can’t pollute it), support for importing remote code, and top-level await.
A minor caveat is that the browser caches modules, so if you try to execute the same code twice, the browser won’t do it but will return the cached module instead. But we can easily work around it by adding some (meaningless) randomness to the code.
```ts
export const importFromString = (code: string, ignoreCache = true) => {
  if (ignoreCache) {
    code = `// Cache buster: ${Math.random()}\n\n` + code;
  }
  const encodedCode = `data:text/javascript;charset=utf-8,${encodeURIComponent(code)}`;
  return import(encodedCode);
};
```

Scopes
In this section (and pretty much the whole article, except the previous section), “scope” refers to an abstraction inside Emera. While conceptually it’s very similar to scope in programming languages (and JS specifically), which I mentioned in the previous section, it’s technically a separate thing.
In Emera, a scope is an abstraction used to share variables between blocks. Scopes are implemented as a ScopeNode class and can be arranged in a tree structure. At the top there is always the root scope, into which Emera puts global variables (like the app object that gives access to the Obsidian API) and the exported members of the user module. Then there is the page scope, which contains objects like frontmatter and file. Each page has its own scope, but they all descend from the root scope and thus have access to everything in it. Then each block on a page gets its own scope, which descends either from the page scope or from the previous JS block’s scope. This lets us keep track of exported variables and give each block access only to variables defined in blocks before it.
We could omit this system and just dump everything into window. But doing so risks polluting the global namespace, which might cause conflicts with other plugins or Obsidian itself. It also makes the code less idempotent. Consider a situation where you have two blocks: the first one tries to use the variable foo, but it’s defined only in the second block. On the first execution, the first block will fail with an error. But on the second execution (this is mainly the case for live preview mode), it will work, since foo would now be defined. In my opinion this shouldn’t happen (and that’s one of the things I really dislike in Jupyter, tbh), so I wanted to engineer Emera to have very explicit sharing of variables between blocks.
But what exactly is ScopeNode? Nothing special, really. It’s just an object with some extra whistles for convenience. Here is a simplified implementation:

```ts
export class ScopeNode {
  public parent: ScopeNode | null = null;
  public children: ScopeNode[] = [];
  public scope: Record<string, any>;

  constructor(public id: string) {
    this.reset();
  }

  get(prop: string): any {
    return this.scope[prop];
  }

  has(prop: string): boolean {
    if (Object.hasOwn(this.scope, prop)) return true;
    if (this.parent) return this.parent.has(prop);
    return false;
  }

  set(prop: string, val: any) {
    this.scope[prop] = val;
  }

  reset() {
    this.scope = new Proxy({}, {
      get: (target, prop: string, receiver) => {
        if (Object.hasOwn(target, prop)) {
          return Reflect.get(target, prop, receiver);
        }
        if (this.parent) {
          return this.parent.scope[prop];
        }
        throw new Error(`'${prop}' is not in scope`);
      },
    });
  }

  addChild(child: ScopeNode) {
    if (child.parent) {
      throw new Error('scope is already in tree');
    }
    this.children.push(child);
    child.parent = this;
  }

  dispose() {
    if (this.parent) {
      this.parent.children.splice(this.parent.children.indexOf(this), 1);
      this.parent = null;
    }
    this.children.forEach(child => child.dispose());
    this.children = [];
  }
}
```

As you can see, this is a lightweight wrapper around a plain scope object that allows building a tree structure (the parent and children props and related methods) and retrieving properties from the current scope or, if a variable is not found there, probing the parent scopes.
The key part of this system is how exactly the user’s code gets access to variables stored in the scope. To do this, we use the previously mentioned custom Babel plugin. As you might remember, Babel plugins modify JavaScript AST before it’s converted back into code. This specific plugin finds all identifiers, checks if they should be replaced, and replaces them with scope getters. So, for example, if a user has a code block like this
```js
export const userFirstName = user.firstName;
```

Our plugin will rewrite it into something like this:
```js
export const userFirstName = (blockScope.has("user") ? blockScope.get("user") : user).firstName;
```

The plugin has quite a lot of checks for which identifiers should be replaced (and as I said, it was a pain in the ass to write all the checks without proper docs or an understanding of how it should work 😭). For example, Emera replaces only the first identifier in a chain (e.g., firstName in the example above remains untouched), and it skips identifiers that are part of an import, export, re-export, or object destructuring. It also skips identifiers that are already part of a scope.has('x') ? scope.get('x') : x construction (or else we risk getting into an infinite loop 😛). Emera also tries not to touch known global variables (like window), but since there are too many global variables to check (some of which might be defined by other plugins), we can’t handle them all with a whitelist. Because of this, Emera uses the scope.has('x') ? scope.get('x') : x construction instead of just calling scope.get('x'). If the identifier lookup fails, we just fall back to the original identifier and let the browser deal with it.
Not the code I’m most proud of, but you can check it out if you’re curious about details.
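For a rough flavor of the idea, here is a heavily simplified sketch of such an identifier-rewriting plugin. It is my own illustration with only a couple of the checks; the real plugin’s logic is much more involved:

```ts
export function rewriteIdentifiersPlugin({ types: t }) {
  return {
    visitor: {
      Identifier(path) {
        const name = path.node.name;
        // Only rewrite identifiers that are actually read as variables
        // (not property names, declarations, labels, etc.).
        if (!path.isReferencedIdentifier()) return;
        // Skip variables declared in the code itself and a few known globals.
        if (path.scope.hasBinding(name)) return;
        if (["window", "document", "globalThis"].includes(name)) return;

        // x -> (blockScope.has("x") ? blockScope.get("x") : x)
        path.replaceWith(
          t.conditionalExpression(
            t.callExpression(
              t.memberExpression(t.identifier("blockScope"), t.identifier("has")),
              [t.stringLiteral(name)],
            ),
            t.callExpression(
              t.memberExpression(t.identifier("blockScope"), t.identifier("get")),
              [t.stringLiteral(name)],
            ),
            t.identifier(name),
          ),
        );
        // Don't traverse into the replacement, or we'd rewrite it forever.
        path.skip();
      },
    },
  };
}
```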
Processing blocks on page
Finally, we’re getting to the glue that holds everything together.
In Obsidian, there are two modes: reading and editing. In editing mode you can optionally enable live preview, which will render Markdown directly in the editor. Emera works only in reading mode and editing mode with live preview enabled (without it, the user just sees the source code of Emera blocks).
And from a plugin perspective, those are two totally separate modes. In reading mode, we work with Markdown rendered into HTML elements. We register a Markdown post processor, which will be called by Obsidian every time any Markdown content is rendered, so we can modify it. In this post processor, Emera goes over every <code> element in the rendered content, checks its language (or prefix, for inline code), and puts it, along with the relevant context, into a queue for processing. We use a queue and delay processing for a couple of milliseconds to avoid rendering (potentially complex) components too often if the post processor is called several times in a row. We’ll cover how processing happens in a minute.
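The reading-mode entry point, registered in the plugin’s onload, could look roughly like this (a simplified sketch; helper names like queueBlockForProcessing are placeholders, not Emera’s actual API):

```ts
this.registerMarkdownPostProcessor((element, context) => {
  element.querySelectorAll("code").forEach((code) => {
    // Fenced code blocks get a `language-xxx` class; inline code has no <pre> parent.
    const language = [...code.classList]
      .find((cls) => cls.startsWith("language-"))
      ?.slice("language-".length);
    const isInline = code.parentElement?.tagName !== "PRE";
    const inlinePrefix = isInline ? code.textContent?.match(/^(emera|emjs):/)?.[1] : undefined;

    const isEmeraBlock = language === "emera" || language === "emjs" || language?.startsWith("emmd:");
    if (isEmeraBlock || inlinePrefix) {
      // Queue the block; processing is debounced for a few milliseconds so
      // repeated post-processor calls don't re-render heavy components too often.
      queueBlockForProcessing({ code, language, inlinePrefix, context });
    }
  });
});
```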
For live preview, instead of modifying HTML elements, we have to use the CodeMirror API. CodeMirror is the editor used in Obsidian, and it’s very customizable. It has a very flexible plugin and widget system, which allows you to modify the editor’s behavior and what’s being rendered to the user. Tbh, its plugin system is still somewhat above my head. I somehow got it working and am very happy with that 😇. What’s impressive is that CodeMirror was developed and is maintained largely by a single person, Marijn Haverbeke (also the author of the ProseMirror editor and the Acorn library!).
For CodeMirror, Emera registers a custom state field. It’s an entity that can hold some state and update it in response to changes to editor state. It can also provide decorations to the editor, which allows us to render our components directly in the editor in place of original code blocks.
Every time the editor state updates, our state field checks if the editor is in live preview mode and whether we can get details about the Markdown file associated with this editor, and then collects which blocks need to be processed. Remember the AST from the section about transpilation? Here we work with an AST once again (this time it’s not a JavaScript AST but CodeMirror’s). We iterate over it to find all inline code and code blocks that should be processed, extract the code from them, and create a widget for each of them.
Widgets are another CodeMirror entity. They allow us to render arbitrary content inside the editor. Emera creates 4 different types of widgets (inline JS, inline JSX, block JSX, and block JS). Each widget receives code and other relevant context (e.g., scope, current file, etc.) and works as a final piece of glue code between CodeMirror and our code for processing blocks (which is shared for both reading and live preview modes).
At this point both code paths merge into one. Both the Markdown post processor and CodeMirror widgets invoke the processInlineJs function (or a similar one, depending on the block type), passing it the source code, the HTML element to render the result in, and the aforementioned context.
In this function, depending on block type, code can be slightly modified. For example, inline JS is transformed into a module with a default export:
```js
// So code like this
app.vault.getName();
// becomes
export default () => app.vault.getName();
```

Since we use import to execute the code, we need to make sure the code exports its results, or else we won’t be able to get them.
For JSX, Emera creates a new component from provided JSX:
```jsx
// This
<Greeting name={user.name} />;
// Is transformed into
export default () => {
  return (
    <>
      <Greeting name={user.name} />
    </>
  );
};
```

Finally, JS blocks are processed as-is, as Emera expects the user to manually export the variables that should be shared with other blocks.
Then this code is transpiled (but not bundled!) and executed, and, depending on the type of block, its result is either rendered directly or, in the case of a JS block, stored in the block’s scope while a generic placeholder is rendered. If there was an error while executing the code, Emera will render a placeholder with details and a stack trace.
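Put together, processing an inline emjs snippet could look roughly like this. This is a condensed sketch with hypothetical helper names; the real function also deals with scope wiring, placeholders, and async states:

```ts
async function processInlineJs(code: string, el: HTMLElement) {
  try {
    // Wrap the expression so its value becomes the default export,
    // then run it through Babel (identifier and import rewriting included).
    const moduleSource = transpileWithBabel(`export default () => (${code});`);
    const module = await importFromString(moduleSource);
    el.textContent = String(await module.default());
    el.classList.add("emera-inline-result");
  } catch (error) {
    el.textContent = `Error: ${error}`;
    el.classList.add("emera-inline-error");
  }
}
```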
Async blocks
Originally, Emera could only render JSX blocks, and since they can’t affect each other, they all could be rendered in parallel. But with the introduction of JS blocks, this was no longer the case. A JSX block could depend on a variable exported by one of the previous blocks. These blocks could be async, so before rendering JSX blocks, we need to wait for the preceding JS block(s) to finish execution.
My first thought was to orchestrate it like this:
```js
async function processBlocks(blocks) {
  for (const block of blocks) {
    if (block.type === "js-block") {
      // Wait for block to be processed before switching to next one
      await processIndividualBlock(block);
    } else {
      // Fire and forget
      processIndividualBlock(block);
    }
  }
}
```

This would work in simpler cases, but it’s prone to race conditions if processBlocks is called again before the previous call has had a chance to process all blocks. In that case, the earlier call could overwrite the results of the more recent one if the latter finished first.
Ideally, for each pass we would build a dependency graph where each node represents a block. JS blocks (which can produce values to be used by other code) would be internal nodes from which more nodes can stem. Other block types would be leaf nodes, i.e., the end of a branch (as no blocks depend on them). We don’t need to be very granular here and try to deduce which block provides the particular variable required to execute the current block. We can just assume that if code has JS blocks preceding it on the page, it depends on them. And then we would execute a block only once all its parents in this graph have finished execution.
And if you recall the content of one of the previous sections, you’ll notice that we already have a tree structure following this pattern. I’m, of course, speaking about scope trees. So my natural choice was to reuse it and extend it to support blocking.
When a JS block starts execution, it receives a scope to read values from and a scope to write exported values into. The idea was to extend the ScopeNode class with methods that block and unblock the current scope (it’s just setting a boolean flag on the instance), check whether the scope is blocked, and wait for the scope to unblock. Then a JS code block could mark its scope as blocked at the start of execution and unblock it after putting the exported variables in there. Any child block would wait for the scope chain to unblock before executing, ensuring all values required by the current code are already available.
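A sketch of how that could be bolted onto the ScopeNode from earlier (simplified and hypothetical, not the actual implementation):

```ts
export class BlockableScopeNode extends ScopeNode {
  private blocked = false;
  private waiters: (() => void)[] = [];

  block() {
    this.blocked = true;
  }

  unblock() {
    this.blocked = false;
    this.waiters.forEach((resolve) => resolve());
    this.waiters = [];
  }

  isBlocked(): boolean {
    return this.blocked;
  }

  // Resolves once this scope and all of its ancestors are unblocked.
  async waitForUnblock(): Promise<void> {
    if (this.parent instanceof BlockableScopeNode) {
      await this.parent.waitForUnblock();
    }
    if (!this.blocked) return;
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }
}
```

A JS block would then call block() on its scope before execution and unblock() after writing its exports, while every block first awaits waitForUnblock() on its parent scope.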
Now, do I think this is a good solution? Ehhh, well, somewhat. It does the trick, but it’s bad separation of concerns. Yes, both the scope system and execution orchestration use the same tree structure, but that’s too weak an argument to lump execution orchestration under the ScopeNode tree. A better option would be to have a separate DependencyGraph abstraction that keeps track of dependencies between blocks, where each node of the graph has separate scope and executionManager properties.
Sharing the graph implementation between the scope system and orchestration would also allow unifying utility functions. E.g., when you call scope.has('someKey') and the variable is not found in the current scope, it has to walk up the tree until it finds it. Similarly, when you want to check if a node can be executed, you need to check that all its parents finished execution, which also involves walking up the tree. Sharing those utility functions would let us reduce the total amount of code.
Rendering
And we’re finally on the last step of the pipeline. This is probably the shortest part of it (and not really interesting, tbh). But for the sake of a complete picture, I’m including it. Once code is executed, we have to render its results. This could be a React component or a simple string value.
In the case of inline JS, the resulting value is stringified and wrapped in a <span> with simple styles to denote that this content is dynamic. There is a simple indication while the block is executing, and in case of an error, it’s stringified and used as the content of the <span>.
For JSX, the process is a bit more complex. As you might remember, we transform JSX code into a component before transpiling and executing it. This component is not rendered directly; it’s wrapped in a context provider and an error boundary.
The context provides access to the Emera API (e.g., storage). Currently, this API doesn’t depend on the block, so there is no real need to have it as a context, as it just mirrors the API available in the emera module. Yes, this is a case of optimizing for the future (which never happened 🫢).
At least the error boundary is here for a reason! By default, if an error happens while rendering a React component, it will be propagated up the component tree, and if it reaches the root, it just makes everything disappear. Yes, there is an error message in the console, but do you know how to open the console in Obsidian? Even if you do, it’s bad UX. So Emera wraps the user component in an error boundary, a special type of component that can catch those propagating errors and render a proper alert with a stack trace to make debugging a bit easier.
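A minimal error boundary of that kind could look like this (an illustrative sketch, not Emera’s exact markup or class names):

```tsx
import React from "react";

class BlockErrorBoundary extends React.Component<
  { children: React.ReactNode },
  { error: Error | null }
> {
  state = { error: null as Error | null };

  static getDerivedStateFromError(error: Error) {
    // Switch to the error state instead of letting the error unmount the whole tree.
    return { error };
  }

  render() {
    if (this.state.error) {
      return (
        <div className="emera-block-error">
          <strong>Component failed to render</strong>
          <pre>{this.state.error.stack}</pre>
        </div>
      );
    }
    return this.props.children;
  }
}
```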
Small tricks
There are some more interesting bits that didn’t fit well in other sections, but I still wanted to brag a little.
Syntax highlight
Emera works with JS and JSX, which are both widely supported languages. If you create a block with JS or JSX code in Obsidian, you’ll get nice syntax highlighting. But since Emera uses its own language identifiers like emera and emjs, the content of those blocks won’t have any syntax highlighting.
Fortunately, CodeMirror (which does the syntax highlighting) is very extensible, so we can register support for custom languages (which in CodeMirror terms are called "modes"). Moreover, since our custom languages are essentially JS/JSX in a trench coat, we can reuse existing CodeMirror modes like this:
```js
window.CodeMirror.defineMode("emera", (config) => window.CodeMirror.getMode(config, "jsx"));
window.CodeMirror.defineMode("emjs", (config) => window.CodeMirror.getMode(config, "js"));
```

Shorthand syntax for Markdown blocks
One of the original motivations for making Emera was the lack of “typography” components in Obsidian. If I recall correctly, I really wanted a “callout” component. Well, as you already saw, Emera has grown far past a simple “callout” component, but I still wanted to accommodate this use case as much as possible.
For components like Callout I don’t need support for all JSX features, and I don’t need the component to accept any props. Instead, I want to write Markdown as usual (or as close to usual as possible) and let the machine figure out the rest. To cover this, I added another block “language” to Emera: in addition to emera (JSX) and emjs (JS blocks), there is emmd, which stands for “Emera Markdown.” When using this language, you need to specify one of the components like this (Callout should be exported from your user module).
```emmd:Callout
Content of callout goes **here**.
```

In this case, Emera will parse the content of the block as plain text (and not as JSX) and pass it into the specified component (which then can render it as Markdown). Fortunately, Obsidian provides an API to render Markdown into HTML, which means all your other plugins that alter rendered content will work here too.
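As an illustration, a component receiving that content could hand it back to Obsidian’s renderer roughly like this (a sketch with assumed prop names, not the helper Emera actually ships):

```tsx
import { useEffect, useRef } from "react";
import { Component, MarkdownRenderer } from "obsidian";

export function Markdown({ text, sourcePath }: { text: string; sourcePath: string }) {
  const ref = useRef<HTMLDivElement>(null);

  useEffect(() => {
    if (!ref.current) return;
    ref.current.replaceChildren();
    const lifecycle = new Component();
    // Let Obsidian (and other plugins' post processors) turn the Markdown into HTML.
    MarkdownRenderer.renderMarkdown(text, ref.current, sourcePath, lifecycle);
    return () => lifecycle.unload();
  }, [text, sourcePath]);

  return <div ref={ref} />;
}
```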
The tricky part here was to make Obsidian treat the language specifier the same way for both reading and live preview modes. I tried different options for shorthand syntax, but in the end only the colon did the trick.
Originally, I wanted to use syntax like this:
```emmd Callout type="info"
Content of callout goes **here**.
```

This would even allow us to pass props to the component. However, my grand plans were quickly shattered by the Obsidian API implementation. When rendering Markdown, Obsidian discards everything in the code block language specifier after the first space, so there was no way to get the “arguments”.
But why a colon? Why not, for example, use the component name directly? Or prefix it with Em?
```EmCallout
Content of callout goes **here**.
```

That all has to do with syntax highlighting. You see, for CodeMirror, EmCallout is a separate language. So Emera would need to register each possible Em+Component combination to get Markdown syntax highlighted inside. The problem is, we can’t even know which exports of the user module are React components and which are just functions or classes, so we’d need to register an Em+Component mode for every exported function or class, which is messy. EmCallout also doesn’t feel good to use, in my opinion.
emmd:Callout, on the other hand, is parsed as the emmd language by CodeMirror (so we need to register a new mode only once), it’s preserved by the Obsidian Markdown renderer (and thus Emera can handle it properly in reading mode), and it just looks better than the other options.
Storage
Emera provides a simple API to store data. It’s intended to persist some component state between app launches. Its content is backed up into a JSON file in the user module folder. This way, if the user has any sort of sync enabled, it will be copied to other devices as well.
Storage is implemented as a simple store with functions like .get(key) and .set(key, value). Reactive bindings for React are implemented with the Jotai library. I quite like the atom state model; in my opinion, atoms fit Emera much better than one big centralized store like Redux or Zustand.
This implementation is far from good, though (let alone ideal). I’m mostly covering it to highlight the issues. First of all, there is no need for Jotai at all. It provides a nice atom primitive, but a minimal version of it can easily be implemented from scratch and then connected to React using useSyncExternalStore.
Self-made atoms, of course, will have fewer features, as Jotai includes quite a few batteries. However, Emera makes little to no use of them anyway, so currently it’s just bundle bloat. Using useSyncExternalStore forces React to opt out of concurrent rendering, but Emera doesn’t use that anyway. It could be useful in more complex React apps, but for Emera’s use case, concurrent rendering (inside a single root) didn’t make much sense.
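That “atoms from scratch” idea could look roughly like this (a sketch of the approach described above, not code from Emera):

```ts
import { useSyncExternalStore } from "react";

type Atom<T> = {
  get: () => T;
  set: (next: T) => void;
  subscribe: (listener: () => void) => () => void;
};

function createAtom<T>(initial: T): Atom<T> {
  let value = initial;
  const listeners = new Set<() => void>();
  return {
    get: () => value,
    set: (next) => {
      value = next;
      listeners.forEach((listener) => listener());
    },
    subscribe: (listener) => {
      listeners.add(listener);
      return () => { listeners.delete(listener); };
    },
  };
}

// React binding: the component re-renders whenever the atom changes,
// so the imperative `.set()` and the hook stay in sync.
function useAtomValue<T>(atom: Atom<T>): T {
  return useSyncExternalStore(atom.subscribe, atom.get);
}
```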
The second, and somewhat embarrassing, issue is that storage is not synced properly between the imperative (storage.set) and reactive (atoms) APIs. If you call storage.set('key', someValue), it won’t trigger a re-render of the component that consumes this storage item through the useStorage('key') hook. This is simply an oversight during implementation, but nevertheless a big one.
If I were to remake/update Emera, this definitely would be the part with the most changes.
What could be done better
Throughout this article I’ve already commented on some of the questionable decisions. In this section I’d like to go over broader issues.
Performance
The biggest one, in my opinion, is the size and speed of Emera. It works fine on an average laptop, but I can’t say the same about mobile. Due to its big bundle size, Emera significantly slows down the start of the Obsidian mobile app. The app needs to parse and execute the whole bundle before it can start loading the next plugin. The major contributors to Emera’s bundle size are Babel and Rollup. Those are HUGE. And even though Emera uses only a fraction of their features, we still have to bundle them whole.
Moreover, transpiling JS (with Babel) and bundling it (with Rollup) are relatively slow operations too. On lower-spec devices (i.e., phones), it can introduce small but noticeable delays, which might make the app look laggy.
Because of how critical transpilation and bundling are for Emera, there is little space for optimization. We can’t abandon them (without a big hit to user/developer experience), but we can replace them with something more performant. Babel and Rollup are written in JS, which is excellent if you want to run them in a browser. But that also significantly limits their efficiency. Fortunately, WebAssembly support is very strong these days, and some bundlers/transpilers can be compiled to WASM, for example esbuild or swc. These projects can cover for both Babel and Rollup, theoretically giving us a nice performance boost.
But this approach is not without trade-offs either. Obsidian has a very rigid structure for plugins. Each plugin should be bundled into a single main.js file. You can’t distribute a .wasm file alongside your code and then load it at runtime. The only option I found (credit to the Templater plugin) is to embed the WebAssembly directly into the main.js file as a base64 string. This string then needs to be decoded on plugin load. The WASM build of esbuild takes more than 15 MB in base64, which on its own can significantly slow down plugin load.
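In code, that approach could look something like this (an untested sketch of the idea; the inlined wasmBase64 module is hypothetical and would be generated at build time):

```ts
import * as esbuild from "esbuild-wasm";
import { wasmBase64 } from "./esbuild-wasm-inlined"; // hypothetical generated file

// Decode the inlined binary and hand it to esbuild instead of fetching a .wasm file.
const bytes = Uint8Array.from(atob(wasmBase64), (char) => char.charCodeAt(0));
await esbuild.initialize({
  wasmModule: await WebAssembly.compile(bytes),
  worker: false, // keep it on the main thread for the sake of the sketch
});

// From here on, esbuild could stand in for both Babel (transpilation) and Rollup (bundling).
const userSource = `export const Hello = () => <b>Hello!</b>;`;
const { code } = await esbuild.transform(userSource, { loader: "tsx", format: "esm" });
```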
Poor developer experience with TypeScript
I love TypeScript! And because of this, I wanted to make it possible to use TS/TSX in Emera. I had a grand picture of how it would be a breeze to write components exactly how I’m used to and then easily use them in Obsidian. However, without type declarations (supplied with modules or as separate @types/ packages), TypeScript isn’t much better than JS. The opposite, actually: it will yell at you that you’re using an unknown function or prop just because it can’t find type declarations for it.
And that was the exact problem with Emera. Since the user module is located inside the vault, we can’t do a normal npm install @types/react. That would download a lot of files, which would bloat the vault, potentially slow down Obsidian, and mess up syncing. The only workaround I can think of to get type checking and autocomplete working in the editor is to install the required modules globally. This is an OK solution, a bit messy in my opinion (you have to hardcode the path to the global node_modules in your tsconfig.json), but it works. To make it complete, I’d have to publish the emera package to NPM with proper types so TypeScript can use them (the module itself is provided by Emera).
Not reactive enough
Emera is centered around React components. But at the same time, it provides a very limited reactive API. This is not an architectural problem, as everything for it is already in place. It’s rather something I didn’t have enough time to implement (before I switched away from Obsidian and thus no longer needed Emera).
Emera makes some effort to re-render components when, for example, the current page’s frontmatter values change. But it lacks granularity, and other Obsidian APIs don’t have reactive counterparts at all. For example, when implementing components it could be useful to get a certain setting’s value and re-render automatically when it changes. Or to use appearance preferences like colors, fonts, or zoom level in components (you can use the provided CSS variables, but it’s still not reactive and a bit limiting).
***
While writing this article, and especially while preparing the demo video, I was once again fascinated by how many opportunities end-user programming can unlock in the apps we use daily. Maybe it’s because I’m already quite familiar with programming, but being able to just make custom components to cover my specific needs felt so empowering. I wish more apps allowed the degree of extensibility Obsidian provides.
Even though writing this article felt embarrassing sometimes, I powered through it. Just so you can read how I royally fucked up storage implementation. Joking-joking. It was a really fun project to work on, and it was almost as fun to revisit it while writing this article. I hope you found it interesting, or at least entertaining. Now go and build something fun yourself 😛.
***
How was this post? Did you find it funny, interesting, or maybe useful? If you have a burning desire to share your thoughts with me, please do! Send me a letter or find me on socials. I love receiving positive feedback (negative one I'm very well capable of producing myself). You can also subscribe to my RSS feed to know about new posts.
I write just because I enjoy the process. But if you have a burning desire to support me with some money, who am I to decline? I'll probably spend it on coffee to write even more. Maybe. Anyway, you can find the details here.