As research engineers, our reading lists are always exciting… and also way too long to finish. While we’d love to go through every research paper at a leisurely, note-taking pace, the reality of the work week means that we end up skimming the abstract, scrolling quickly through sections and diagrams, and, more often than not, missing interesting points along the way.
An obvious solution is to ask AI to explain each research paper. But when your goal is to enhance your reading experience, the results can be less than satisfying:
- AI summaries tend to paraphrase or over-explain (when an excerpt would be way more efficient)
- AI references are difficult to trace back to their location in the source material (if you’re looking to fact check or learn more)
- AI chat conversations happen separately, which adds cognitive load as you toggle between your chat window and the original text (for instance, if you’re trying to remember what part of the content your question is related to)
As we reflected on these pain points, we realized the key problem: current AI workflows foreground the generated content and treat the source material as an occasional backup.
We wanted the reverse: a reading experience that showcases the actual source and uses AI-generated notes as the supplement.
So we decided to build it.
Our prototype, Lumi, is an interface for arXiv papers that incorporates AI as a lightweight layer on top of the source material.
1. Lumi directly highlights the main points of the paper so you can read exactly what the authors wrote. Generated summaries are extremely short, collapsible, and shown in the side margin, so it’s quick to defer to the original content.
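For a concrete sense of the mechanics, here’s a minimal sketch of a collapsible margin note in a web reader. The `MarginSummary` shape and `attachSummary` helper are hypothetical names for illustration, not Lumi’s actual code:

```typescript
// A sketch of anchoring a short AI summary to a highlighted passage.
// Hypothetical structure; assumes each passage has a known DOM id.

interface MarginSummary {
  passageId: string;   // id of the highlighted span in the paper's DOM
  text: string;        // the short AI-generated summary
  collapsed: boolean;  // summaries start collapsed to keep the margin quiet
}

function attachSummary(summary: MarginSummary): HTMLElement {
  const note = document.createElement("aside");
  note.className = "margin-note";
  note.textContent = summary.collapsed ? "…" : summary.text;

  // Clicking the note toggles between the collapsed marker and the full
  // summary, so the reader defers to the original content by default.
  note.addEventListener("click", () => {
    summary.collapsed = !summary.collapsed;
    note.textContent = summary.collapsed ? "…" : summary.text;
  });

  // Place the note right next to the passage it summarizes.
  document
    .getElementById(summary.passageId)
    ?.insertAdjacentElement("afterend", note);
  return note;
}
```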
2. Clicking an in-paper reference scrolls directly to the specific sentence or phrase, which is highlighted for easy visual distinction. We also added one-line section summaries to the table of contents to help users quickly jump to relevant parts of the document.
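Under the hood, this kind of jump-and-highlight behavior only needs standard DOM APIs. Here’s a minimal sketch, assuming each referenced sentence is wrapped in an element with a known id; the `data-ref-target` attribute and `jumpToReference` function are illustrative, not Lumi’s actual implementation:

```typescript
// A sketch of click-to-scroll reference navigation with a transient highlight.

function jumpToReference(sentenceId: string): void {
  const target = document.getElementById(sentenceId);
  if (!target) return;

  // Scroll the referenced sentence into view...
  target.scrollIntoView({ behavior: "smooth", block: "center" });

  // ...and flash a highlight class for easy visual distinction.
  target.classList.add("reference-highlight");
  setTimeout(() => target.classList.remove("reference-highlight"), 2000);
}

// Wire every in-paper reference link to the jump behavior.
document.querySelectorAll<HTMLElement>("[data-ref-target]").forEach((link) => {
  link.addEventListener("click", () =>
    jumpToReference(link.dataset.refTarget!),
  );
});
```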
3. For Q&A, you can select any text or image and ask for a custom explanation; the AI-generated answer is then tied to that specific selection. This keeps you grounded in the paper without having to jump to a separate chat window or remember the context behind your questions.
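A minimal sketch of this selection-anchored flow might look like the following; `askModel` is a hypothetical stand-in for whatever LLM call an app like this would make:

```typescript
// A sketch of selection-anchored Q&A: capture the reader's selection, send it
// alongside the question, and keep the answer tied to that anchor.

declare function askModel(prompt: string): Promise<string>; // hypothetical LLM call

interface AnchoredAnswer {
  selection: string; // the exact text the reader selected
  question: string;
  answer: string;
}

async function askAboutSelection(
  question: string,
): Promise<AnchoredAnswer | null> {
  const selection = window.getSelection()?.toString().trim();
  if (!selection) return null;

  // The selected passage travels with the question, so the model answers
  // about this exact excerpt rather than the paper in general.
  const answer = await askModel(
    `Passage: "${selection}"\n\nQuestion: ${question}`,
  );

  // Returning the selection with the answer lets the UI render the response
  // next to the highlighted text instead of in a separate chat window.
  return { selection, question, answer };
}
```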
When reading in Lumi, the source material is nicely formatted and immediately visible; from there, you can quickly home in on the AI-generated annotations and Q&A options. This feels much more transparent and controllable than a separate chat window, where the generated content acts as an alternate, competing version of the original paper.
We see enormous potential in AI-augmented interfaces that prioritize primary sources and help readers clearly distinguish between AI assistance and original material. While the current version of Lumi only visualizes research papers, we’d love to see something like this for other long-form content. Imagine using Lumi to actually review a product’s terms of service, remember the obscure characters in a novel, or spend less time on board game instructions and more time playing.
In the meantime, we’d love for you to try reading an arXiv paper with Lumi, check out the source code, and share any feedback or suggestions.
Acknowledgments
Lumi was designed and built by Ellen Jiang, Vivian Tsai, and Nada Hussein. Special thanks to Andy Coenen, James Wexler, Tianchang He, Mahima Pushkarna, Michael Xieyang Liu, Alejandra Molina, Aaron Donsbach, Martin Wattenberg, Fernanda Viégas, Michael Terry, and Lucas Dixon for making this experiment possible!