Chris Nicholas | Developer experience at Liveblocks

Building an AI toolbar for text editors

I've been experimenting with a floating AI toolbar, designed for use in text editors. When text is selected, an AI button will appear, which opens a dialog for modifying your selection and generating new text. Try a demo of the UI below.

An interactive example of the animated AI-powered toolbar: a popover hovers below a highlighted word, and you can click on it to open the toolbar and give the AI different prompts.

I’ve set up the toolbar animations using Framer Motion, which allows you to easily animate React components.
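For the popover's entrance, one simple approach is a variants object driving opacity and position. Here's a sketch with illustrative values (toolbarVariants and its numbers are assumptions, not the article's exact animation):

```typescript
// Hypothetical enter/exit variants for the floating toolbar popover.
// With Framer Motion, these would be applied to a motion.div:
//   <motion.div variants={toolbarVariants} initial="hidden" animate="visible" />
const toolbarVariants = {
  hidden: { opacity: 0, y: 6, scale: 0.96 }, // start slightly below, faded out
  visible: { opacity: 1, y: 0, scale: 1 }, // settle into place at full opacity
};
```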

Text distortion

When the height of a motion component changes, every element inside is temporarily stretched into its new position. For example, this component will animate, and slightly distort, when the content of the <p> grows taller.

<motion.div layout={true} className="border bg-gray-500">
  <p>{/* Content temporarily stretches when the height changes */}</p>
</motion.div>

This generally isn’t a problem; however, when text is stretched during these animations, it creates a poor reading experience. And because we’re generating text word-by-word with AI, this distortion will occur. Try moving the sliders below to see the effect.

An interactive element that lets you change how many lines of text are displayed. It's not a very natural text animation.
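The word-by-word growth behaves like the streamed output later in this article: each update is the entire string so far, one word longer than the last. A small sketch of that shape (cumulativeChunks is a hypothetical helper, not part of the demo):

```typescript
// Build the sequence of strings a streaming response produces,
// growing one word at a time. Every chunk that wraps onto a new
// line changes the element's height and re-triggers the layout animation.
function cumulativeChunks(text: string): string[] {
  const words = text.split(" ");
  return words.map((_, i) => words.slice(0, i + 1).join(" "));
}

cumulativeChunks("Tomatoes are a fruit");
// ["Tomatoes", "Tomatoes are", "Tomatoes are a", "Tomatoes are a fruit"]
```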

Avoiding the distortion

To avoid this distortion, we can make the child a motion component, setting its layout value to "position". This tells it to only animate its position, not its size.

<motion.div layout={true} className="border bg-gray-500">
  <motion.p layout="position">
    {/* Content does not stretch on height changes */}
  </motion.p>
</motion.div>

By doing this we can keep the animation without stretching the text—here’s what it looks like.

An interactive element that lets you change how many lines of text are displayed. It's a much more natural text animation.

This experience is preferable because the user is most likely reading this text as it animates, watching a response being streamed in from AI. Without the distortion it’s far easier to read.

I’ve set up AI generation using the Vercel AI SDK. It’s very easy to get started: once your OpenAI API key is configured, we can create a server action named continueConversation in Next.js, exactly like the following snippet.

"use server";
 
import { CoreMessage, streamText } from "ai";
import { createStreamableValue } from "ai/rsc";
import { openai } from "@ai-sdk/openai";
 
// Send messages to AI and stream a result back
export async function continueConversation(messages: CoreMessage[]) {
  const result = await streamText({
    model: openai("gpt-4o"),
    messages,
  });
 
  const stream = createStreamableValue(result.textStream);
  return stream.value;
}

Calling this action from the client, passing our prompt in a message’s content, will return a stream of results. Use the provided readStreamableValue function with for await to read them. You can see the expected console messages below the snippet.

import { readStreamableValue } from "ai/rsc";
import { continueConversation } from "../actions/ai";
 
// Send to AI and stream in results
const result = await continueConversation([
  { role: "user", content: "Is a tomato a fruit or a vegetable?" },
]);
 
// `content` is the entire string so far
for await (const content of readStreamableValue(result)) {
  console.log(content);
}
Tomatoes
Tomatoes are a
Tomatoes are a fruit despite
Tomatoes are a fruit despite their culinary
Tomatoes are a fruit despite their culinary usage.

Giving AI a memory

We can make this more useful by tracking every message received on the client, and passing it back to the AI on every prompt. In this way the AI will have memory of your previous prompts and its replies.

import { CoreMessage } from "ai";
import { readStreamableValue } from "ai/rsc";
import { continueConversation } from "../actions/ai";
 
let messages: CoreMessage[] = [];
 
async function queryAi(prompt: string) {
  // Add the new prompt to the existing messages, so it remembers
  const newMessages: CoreMessage[] = [
    ...messages,
    { content: prompt, role: "user" },
  ];
 
  // Send to AI and stream in results
  const result = await continueConversation(newMessages);
 
  // `content` is the entire string so far
  for await (const content of readStreamableValue(result)) {
    messages = [...newMessages, { role: "assistant", content }];
  }
}
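To see why reassigning messages inside the loop works, here's a framework-free simulation of one conversation turn (appendPrompt and applyChunk are hypothetical helpers that mirror the logic above):

```typescript
type Message = { role: "user" | "assistant"; content: string };

// Add the user's new prompt to the existing history.
function appendPrompt(history: Message[], prompt: string): Message[] {
  return [...history, { role: "user", content: prompt }];
}

// Each streamed chunk is the full reply so far, so we rebuild the history
// from the turn's messages plus ONE trailing assistant message, rather
// than appending a new message per chunk.
function applyChunk(turnMessages: Message[], content: string): Message[] {
  return [...turnMessages, { role: "assistant", content }];
}

let history: Message[] = [];
const turn = appendPrompt(history, "Is a tomato a fruit?");
history = applyChunk(turn, "Tomatoes");
history = applyChunk(turn, "Tomatoes are a fruit.");
// history now holds exactly two messages: the prompt and the final reply
```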

Creating a React component

We can convert our AI setup into React code and place it inside a component, creating an input for submitting the user’s prompt and rendering the previous result above it.

"use client";
 
import { useState } from "react";
import { CoreMessage } from "ai";
import { readStreamableValue } from "ai/rsc";
import { continueConversation } from "../actions/ai";
 
function AiToolbar() {
  const [messages, setMessages] = useState<CoreMessage[]>([]);
  const [input, setInput] = useState("");
 
  async function queryAi(prompt: string) {
    // Add the new prompt to the existing messages, so it remembers
    const newMessages: CoreMessage[] = [
      ...messages,
      { content: prompt, role: "user" },
    ];
 
    // Send to AI and stream in results
    const result = await continueConversation(newMessages);
 
    // Add each chunk as it's received
    for await (const content of readStreamableValue(result)) {
      setMessages([
        ...newMessages,
        {
          role: "assistant",
          content: content,
        },
      ]);
    }
  }
 
  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        queryAi(input);
      }}
    >
      <p>{messages[messages.length - 1]?.content || "No messages yet"}</p>
      <input
        type="text"
        placeholder="Enter prompt…"
        value={input}
        onChange={(e) => setInput(e.target.value)}
      />
    </form>
  );
}

Text editor

I’m using Liveblocks Text Editor in my real application. The editor is based on Lexical, and has a number of features such as real-time collaboration, comments, mentions, and notifications.

An interactive demo of the collaborative text editor: two users, Natalie and Rachel, type together in real time, with live cursors and an inline @Rachel mention.

Getting started is simple, and because it’s a Lexical extension, you can extend your existing editor with collaboration.

import { LexicalComposer } from "@lexical/react/LexicalComposer";
import { RichTextPlugin } from "@lexical/react/LexicalRichTextPlugin";
import { ContentEditable } from "@lexical/react/LexicalContentEditable";
import { liveblocksConfig, LiveblocksPlugin } from "@liveblocks/react-lexical";
 
export function Editor() {
  const initialConfig = liveblocksConfig({
    namespace: "Demo",
    onError: (error: unknown) => {
      console.error(error);
      throw error;
    },
  });
 
  return (
    <LexicalComposer initialConfig={initialConfig}>
      <RichTextPlugin contentEditable={<ContentEditable />} />
      <LiveblocksPlugin />
    </LexicalComposer>
  );
}

Integrating AI

To integrate our AI solution into the text editor, we can modify our earlier snippet, creating a button that submits a prompt to our queryAi function. In this example we take the currently selected text and ask the AI to simplify it.

import { useState } from "react";
import { CoreMessage } from "ai";
import { readStreamableValue } from "ai/rsc";
import { continueConversation } from "../actions/ai";
import { $getSelection } from "lexical";
import { useLexicalComposerContext } from "@lexical/react/LexicalComposerContext";
 
function AiToolbar() {
  const [editor] = useLexicalComposerContext();
  const [messages, setMessages] = useState<CoreMessage[]>([]);
 
  async function queryAi(prompt: string) {
    // Add the new prompt to the existing messages, so it remembers
    const newMessages: CoreMessage[] = [
      ...messages,
      { content: prompt, role: "user" },
    ];
 
    // Send to AI and stream in results
    const result = await continueConversation(newMessages);
 
    // Add each chunk as it's received
    for await (const content of readStreamableValue(result)) {
      setMessages([
        ...newMessages,
        {
          role: "assistant",
          content: content,
        },
      ]);
    }
  }
 
  return (
    <div>
      <button
        onClick={() => {
          editor.update(() => {
            const selection = $getSelection();
            const textContent = selection?.getTextContent();
 
            queryAi(`
              The user is selecting this text:
 
              """
              ${textContent}
              """
 
              Simplify the text.
            `);
          });
        }}
      >
        🧹 Simplify
      </button>
    </div>
  );
}

After the button’s been pressed, messages will update with the AI result. However, we still haven’t added the result to the text editor. To do this, we can create a button that replaces the current selection with the last result.

import { useState } from "react";
import { CoreMessage } from "ai";
import { readStreamableValue } from "ai/rsc";
import { continueConversation } from "../actions/ai";
import { $getSelection } from "lexical";
import { useLexicalComposerContext } from "@lexical/react/LexicalComposerContext";
 
function AiToolbar() {
  const [editor] = useLexicalComposerContext();
  const [messages, setMessages] = useState<CoreMessage[]>([]);
 
  async function queryAi(prompt: string) {
    // Add the new prompt to the existing messages, so it remembers
    const newMessages: CoreMessage[] = [
      ...messages,
      { content: prompt, role: "user" },
    ];
 
    // Send to AI and stream in results
    const result = await continueConversation(newMessages);
 
    // Add each chunk as it's received
    for await (const content of readStreamableValue(result)) {
      setMessages([
        ...newMessages,
        {
          role: "assistant",
          content: content,
        },
      ]);
    }
  }
 
  return (
    <div>
      <button
        onClick={() => {
          // Replace currently selected text
          editor.update(() => {
            const selection = $getSelection();
            const lastMessage = messages[messages.length - 1];
            if (selection && typeof lastMessage?.content === "string") {
              selection.insertRawText(lastMessage.content);
            }
          });
        }}
      >
        🔁 Replace selection
      </button>
      <button
        onClick={() => {
          editor.update(() => {
            const selection = $getSelection();
            const textContent = selection?.getTextContent();
 
            queryAi(`
              The user is selecting this text:
 
              """
              ${textContent}
              """
 
              Simplify the text.
            `);
          });
        }}
      >
        🧹 Simplify
      </button>
    </div>
  );
}

There’s still more work to be done, for example showing a loading spinner while queryAi runs, but you get the idea! Liveblocks Text Editor also enables a number of other features out of the box, which are worth adding.
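A loading state could be wired in by reporting a status around the stream; here's a framework-agnostic sketch (runWithLoading is a hypothetical helper; in the React component, onStatus would be a setState call that shows or hides the spinner):

```typescript
type Status = "idle" | "loading" | "done";

// Wrap a streaming response with status callbacks so the UI can show
// a spinner from the moment the prompt is sent until streaming finishes.
async function runWithLoading(
  stream: AsyncIterable<string>,
  onStatus: (status: Status) => void,
): Promise<string> {
  onStatus("loading"); // spinner on, before the first chunk arrives
  let latest = "";
  for await (const chunk of stream) {
    latest = chunk; // each chunk is the entire string so far
  }
  onStatus("done"); // spinner off once the stream ends
  return latest;
}
```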

Being able to highlight text in your editor and leave a comment is a necessary feature for modern editors. Liveblocks Text Editor supports this; the following snippet lets users open a floating comment composer below their highlighted text.

import { useLexicalComposerContext } from "@lexical/react/LexicalComposerContext";
import { OPEN_FLOATING_COMPOSER_COMMAND } from "@liveblocks/react-lexical";
 
export function OpenComposer() {
  const [editor] = useLexicalComposerContext();
 
  return (
    <button
      onClick={() =>
        editor.dispatchCommand(OPEN_FLOATING_COMPOSER_COMMAND, undefined)
      }
    >
      💬 New comment
    </button>
  );
}

Selecting text in the editor, and pressing the button, will highlight the text and create an attached thread and comment. To visually render each thread next to its highlight, we can use two Liveblocks components which handle all positioning for you.

  • AnchoredThreads anchors threads vertically alongside the text—best for desktop.
  • FloatingThreads floats popover threads under highlights—ideal for mobile.

import { LexicalComposer } from "@lexical/react/LexicalComposer";
import { RichTextPlugin } from "@lexical/react/LexicalRichTextPlugin";
import { ContentEditable } from "@lexical/react/LexicalContentEditable";
import {
  liveblocksConfig,
  LiveblocksPlugin,
  FloatingThreads,
  AnchoredThreads,
} from "@liveblocks/react-lexical";
import { useThreads } from "@liveblocks/react/suspense";
 
export function Editor() {
  const initialConfig = liveblocksConfig(/* ... */);
  const { threads } = useThreads();
 
  return (
    <LexicalComposer initialConfig={initialConfig}>
      <RichTextPlugin contentEditable={<ContentEditable />} />
      <LiveblocksPlugin>
        <FloatingThreads threads={threads} className="block md:hidden" />
        <AnchoredThreads threads={threads} className="hidden md:block" />
      </LiveblocksPlugin>
    </LexicalComposer>
  );
}

These components are really polished and handle a whole host of edge cases for you.

Notifications

Similarly, we can render a list of notifications using the <InboxNotification> component. Each notification is triggered when a user mentions you in a comment, or when you’ve been mentioned inline in the editor.

import { useInboxNotifications } from "@liveblocks/react/suspense";
import { InboxNotification, InboxNotificationList } from "@liveblocks/react-ui";
 
export function CollaborativeApp() {
  const { inboxNotifications } = useInboxNotifications();
 
  return (
    <InboxNotificationList>
      {inboxNotifications.map((inboxNotification) => (
        <InboxNotification
          key={inboxNotification.id}
          inboxNotification={inboxNotification}
        />
      ))}
    </InboxNotificationList>
  );
}

You can also create fully custom notifications, and display those in your inbox too, though we don’t need this now.

Try out a live demo that contains all the features listed above, and more. It’s open-source, and free for you to use for any purpose. Make sure to follow me on Twitter if you’d like to hear about similar demos!