Part 4 of a series on AI-assisted development. Parts 1, 2, and 3 covered architectural drift, guardrails, and deterministic specs. This one’s different. This is the thing itself.
Three posts about architecture, benchmarks, and specs. Fair question: what is the 2300-file application I kept talking about?
NimoNova is a research workspace. You save things (articles, papers, PDFs, images) and it turns that pile into a knowledge system. Not a bookmark graveyard. Not “starred for later” purgatory. A living, queryable, structured body of knowledge you can search by meaning, explore as a graph, and write from directly.
I built it over 1500 hours of evenings and weekends, almost entirely with LLM coding assistants. 2300 files. €1200 in API costs. It’s the codebase behind everything I’ve written about in this series.
Let me show you what it does. I made this clip rather quickly, and it shows :)
A quickly made YouTube clip showing bits of NimoNova in action
The Three Layers
NimoNova organizes knowledge in three progressively focused layers:
Library — Everything you’ve ever saved. Every bookmark, PDF, image, document. Searchable by meaning, not just keywords. Ask for “that article about how sleep affects memory” and find it, even if you never tagged it that way.
Collections — Themed groups of resources. Create them manually, or define rules and let them populate automatically. This is where things get interesting — each collection unlocks intelligence features I’ll get to in a moment.
Projects — Where knowledge becomes output. Link your collections for research. Keep a project journal. Write documents that can cite any resource with a keystroke. Collaborate with your team.
Library stores. Collections understand. Projects apply.
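The Library’s “search by meaning” is embedding-based retrieval: each saved resource gets a vector at capture time, and queries are ranked by similarity rather than keyword overlap. Here is a minimal sketch of that idea; the `Resource` shape and the assumption that embeddings already exist are illustrative, not NimoNova’s actual API.

```typescript
// Sketch: rank saved resources by cosine similarity between a query
// embedding and each resource's stored embedding (computed at save time).

interface Resource {
  title: string;
  embedding: number[]; // produced by an embedding model when the item is saved
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function searchByMeaning(query: number[], library: Resource[], topK = 5): Resource[] {
  return [...library]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, topK);
}
```

This is why the untagged “article about how sleep affects memory” is still findable: similarity is computed over meaning vectors, so no tag ever had to exist.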
Capture Without Friction
The best knowledge system is one you actually use. So capture had to be effortless.
Browser extension: click the icon, title and URL are pre-filled, pick a collection, save. Three seconds. There’s a context menu for saving links or selected text, and a sidebar mode for batch research sessions where you’re saving 20 things in a row.
On mobile, it’s the share sheet. Find something in any app, tap share, tap NimoNova.
When you save, NimoNova asks one extra question: why did you save this? Not to be nosy — because intent helps you find things later. Seven intents: Reference, Example, Learning, Inspiration, Tool, Read Later, Archive. Searching for “design system” while filtering by “Example” surfaces only the things you saved specifically to imitate. Small metadata, big payoff.
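The intent filter described above is just metadata narrowing a search. A small sketch of how that combination might look, with an illustrative `SavedItem` shape (the seven intent names come from the text; everything else is assumed):

```typescript
// Sketch: filter saved items by intent, then match the query against
// title and tags. Data model is illustrative.

type Intent =
  | "Reference" | "Example" | "Learning" | "Inspiration"
  | "Tool" | "Read Later" | "Archive";

interface SavedItem {
  title: string;
  intent: Intent;
  tags: string[];
}

function findByIntent(items: SavedItem[], query: string, intent: Intent): SavedItem[] {
  const q = query.toLowerCase();
  return items.filter(
    (it) =>
      it.intent === intent &&
      (it.title.toLowerCase().includes(q) ||
        it.tags.some((t) => t.toLowerCase().includes(q)))
  );
}
```

Filtering by “Example” before matching is what makes the result set “only the things you saved specifically to imitate.”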
AI That Works Quietly
Here’s where building 2300 files with AI assistants taught me something about AI in the product.
NimoNova’s AI doesn’t interrupt. It doesn’t ask you to confirm every action. It operates on a confidence spectrum: high-confidence actions happen on their own, lower-confidence ones surface as suggestions you can accept or dismiss, and anything below that stays out of your way.
Every accept, dismiss, and correction trains the system. It starts conservative with new collections: mostly suggestions, minimal automation. By the time you have 50+ items, it’s calibrated to your patterns.
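One way to picture that calibration: a per-collection gate that demands near-certainty before automating, and relaxes only once accepted suggestions pile up. The class, thresholds, and numbers below are all assumptions for illustration, not NimoNova’s internals.

```typescript
// Sketch: a confidence gate that starts conservative and relaxes
// as accept/dismiss feedback accumulates. All thresholds illustrative.

type Action = "automate" | "suggest" | "stay-quiet";

class ConfidenceGate {
  private accepts = 0;
  private dismissals = 0;

  record(accepted: boolean): void {
    if (accepted) this.accepts++;
    else this.dismissals++;
  }

  // With little feedback, require near-certainty to automate;
  // once suggestions are reliably accepted, relax the bar.
  private automateThreshold(): number {
    const total = this.accepts + this.dismissals;
    if (total < 10) return 0.95;
    const acceptRate = this.accepts / total;
    return acceptRate > 0.8 ? 0.8 : 0.95;
  }

  decide(confidence: number): Action {
    if (confidence >= this.automateThreshold()) return "automate";
    if (confidence >= 0.5) return "suggest";
    return "stay-quiet";
  }
}
```

A new collection with zero feedback treats a 0.9-confidence action as a suggestion; the same score automates once the gate has seen a run of accepts.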
And if you don’t want any of it? Settings let you turn off auto-filing, tag suggestions, and collection suggestions, either individually or all at once.
Collection Intelligence: Seven Lenses
A collection of 50 bookmarks is just a list. Collection Intelligence transforms it into something you can interrogate. Seven lenses, each answering a different question:
Graph — How do concepts connect? Your resources become a network visualization. Nodes are resources and entities (people, companies, concepts). Edges show relationships: causal, structural, contradictory, temporal. Click a “contradicts” edge and see exactly where two sources disagree. Trace the path between two concepts across dozens of articles.
Chronicle — What happened when? AI extracts every date, event, and milestone from your content and lays them on a navigable timeline. Zoom from year view to week view. Spot clusters — four product launches in three weeks? That’s a pattern worth investigating.
Facts — What structured data is hiding in your content? Prices, ratings, specifications, metrics — extracted into a queryable grid. Compare five products side by side. Sort by rating. Spot anomalies. The AI even suggests values for gaps based on patterns across your collection.
Ask — Natural language queries against your collection. “Which articles mention both React and performance?” Answer in two seconds with links ready to click. No query language to learn.
Knowledge Gaps — What are you missing? This one’s subtle and powerful. The AI analyzes your collection and identifies blind spots: topics you haven’t covered, perspectives you’re missing, areas where your research has holes. One user discovered their 18-month accessibility collection had nothing on cognitive disabilities — a blind spot they hadn’t noticed.
Bias Detection — Who are you listening to? The AI analyzes source diversity within your collection or project, flagging over-representation of specific domains, authors, or viewpoints to surface potential blind spots in your research.
For You — Collection-specific suggestions. Resources from your library that might belong here. Merge suggestions when collections overlap. Quality alerts when content goes stale.
Seven lenses, one goal: turn a list of links into a living knowledge base.
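To make the Graph lens concrete, here is a sketch of the shape it implies: typed edges between resources and entities, a query for “contradicts” pairs, and a breadth-first trace between two concepts. The type and field names are illustrative assumptions, not NimoNova’s data model.

```typescript
// Sketch: typed knowledge-graph edges, extracting contradictions,
// and tracing a path between two nodes with BFS.

type Relation = "causal" | "structural" | "contradicts" | "temporal";

interface Edge {
  from: string;      // resource or entity id
  to: string;
  relation: Relation;
  excerpt?: string;  // e.g. where two sources disagree
}

function contradictions(edges: Edge[]): Edge[] {
  return edges.filter((e) => e.relation === "contradicts");
}

function tracePath(edges: Edge[], start: string, goal: string): string[] | null {
  // Build an undirected adjacency map, then BFS from start to goal.
  const adj = new Map<string, string[]>();
  for (const e of edges) {
    adj.set(e.from, [...(adj.get(e.from) ?? []), e.to]);
    adj.set(e.to, [...(adj.get(e.to) ?? []), e.from]);
  }
  const prev = new Map<string, string | null>([[start, null]]);
  const queue = [start];
  while (queue.length) {
    const node = queue.shift()!;
    if (node === goal) {
      const path: string[] = [];
      for (let n: string | null = node; n !== null; n = prev.get(n) ?? null) {
        path.unshift(n);
      }
      return path;
    }
    for (const next of adj.get(node) ?? []) {
      if (!prev.has(next)) {
        prev.set(next, node);
        queue.push(next);
      }
    }
  }
  return null;
}
```

The same edge list drives both views: clicking a “contradicts” edge reads its excerpt, and tracing concepts is a shortest-path walk over the map.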
Projects: Where Knowledge Becomes Work
Collections organize. Projects do.
A project is a living workspace with four connected elements:
Linked Collections provide the research. A product launch project might link Competitive Analysis, User Research, Marketing Materials, and Technical Specs. All those resources become searchable within the project, without copying anything.
The Journal is where you take notes, track tasks, and look back, and it’s more than a task list. Five entry types: Tasks, Notes, Decisions, Milestones, and Events. Slash commands (/task Fix the login bug) create entries inline from a document. Smart sections (Today, Upcoming, Future, Pipeline, Chronicle) organize entries by time automatically.
The Decision entries are the ones I find most valuable. Title: what was decided. Rationale: why. Alternatives: what was considered. Evidence: linked resources. Six months later, when someone asks “why aren’t we using MongoDB?”, the answer is one click away.
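The slash-command flow above reduces to a small parser: pull the command word, check it against the five entry types, keep the rest as the entry text. A minimal sketch, with an illustrative `JournalEntry` shape:

```typescript
// Sketch: parse an inline slash command like "/task Fix the login bug"
// into a typed journal entry. Unknown commands and plain text yield null.

type EntryType = "task" | "note" | "decision" | "milestone" | "event";

interface JournalEntry {
  type: EntryType;
  text: string;
}

const ENTRY_TYPES: EntryType[] = ["task", "note", "decision", "milestone", "event"];

function parseSlashCommand(input: string): JournalEntry | null {
  const match = input.match(/^\/(\w+)\s+(.+)$/);
  if (!match) return null;
  const [, cmd, text] = match;
  const type = ENTRY_TYPES.find((t) => t === cmd.toLowerCase());
  return type ? { type, text: text.trim() } : null;
}
```

Returning null for anything unrecognized is what lets the editor treat non-commands as ordinary prose.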
Documents: Research-Powered Writing
Most tools separate research and writing. You collect in one app, write in another, and manually copy links back and forth. NimoNova brings them together.
The document editor has four modes — Research, Draft, Focus, and Split Panel — and the one you’ll live in during early drafting is Research mode. It splits your screen: research panel on the left, editor on the right.
The research panel has smart tabs: All shows everything in scope. Cited shows what you’ve already referenced. Uncited shows relevant research you haven’t used yet — a reminder of sources you might forget. Has Facts surfaces resources with extracted structured data. Conflicts flags sources that contradict each other. Each resource card shows relationship badges — supports or contradicts other sources — so you can see tensions in your research at a glance.
The AI Writing Assistant
This is where it gets interesting. NimoNova doesn’t just help you find research while writing; it helps you write from it.
Insert from Context is the core feature. Trigger it via slash command, give it a natural language prompt (“Compare the pricing models across my sources” or “Summarize the key findings”), and choose your desired length: a sentence, a paragraph, or a custom word count. The AI loads your entire project or collection context (facts, resources, relationships, timeline entries) and generates prose with inline citations referencing your actual saved sources. Before it touches your document, you get a preview where you can regenerate, adjust length, review detected conflicts between sources, and inspect every citation. Then insert with one click.
Contextual Completion works inline as you write. Trigger it mid-sentence and it searches your project’s research semantically, then streams a completion that continues your thought, citing relevant sources as it goes.
Citation Suggestions work passively. Select any passage longer than about 20 characters, and the system automatically surfaces up to three semantically relevant bookmarks from your research. No searching required.
Writing Prompts appear in the research panel, generated from your actual research. They’re categorized by intent: conflict prompts help you address contradictions between sources, comparison prompts suggest contrasting similar resources, integration prompts encourage incorporating uncited material, and analysis prompts propose thematic exploration. Click one and Insert from Context opens with it pre-filled.
Contextual Ask lets you ask questions about your entire body of research and get AI-generated answers with citations and confidence levels, without leaving the editor.
The citation system supports four types: source citations (a bookmark or document), fact citations (a specific extracted data point), relationship citations (a connection between two resources), and timeline citations (project decisions, milestones, tasks). They appear as clickable inline cards with hover context.
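The four citation types map naturally onto a discriminated union, which is how I’d sketch them in TypeScript. The variant names mirror the text; the field names are illustrative assumptions:

```typescript
// Sketch: the four citation kinds as a discriminated union, with a
// renderer that narrows on the "kind" tag. Field names are illustrative.

type Citation =
  | { kind: "source"; resourceId: string }                 // a bookmark or document
  | { kind: "fact"; resourceId: string; factId: string }   // an extracted data point
  | { kind: "relationship"; fromId: string; toId: string } // a link between resources
  | { kind: "timeline"; entryId: string };                 // a decision, milestone, task

function citationLabel(c: Citation): string {
  switch (c.kind) {
    case "source":
      return `Source ${c.resourceId}`;
    case "fact":
      return `Fact ${c.factId} (from ${c.resourceId})`;
    case "relationship":
      return `Link ${c.fromId} to ${c.toId}`;
    case "timeline":
      return `Timeline entry ${c.entryId}`;
  }
}
```

The appeal of a tagged union here is exhaustiveness: the compiler forces every renderer, hover card, and click handler to handle all four kinds.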
The result: writing that’s connected to research from the first sentence. No tab-switching. No lost references. No “where did I read that?”
The Architecture Behind It
If you’ve read the earlier posts, you might be curious how Collection Intelligence — seven different analysis modes, each with AI extraction, graph rendering, timeline visualization, and natural language querying — stays maintainable across 2300 files.
This is exactly the kind of feature where architectural drift compounds. Each lens has a backend pipeline (extraction, embedding, analysis), a data layer (graphs, timelines, structured facts), and a frontend (interactive visualizations, query interfaces). Multiply by seven. That’s a lot of surface area for an LLM to get creative in the wrong way.
ArchCodex kept each lens following the same patterns: consistent mutation conventions, shared permission checks, proper event-driven communication between modules. When I asked an LLM to add the Knowledge Gaps feature, it didn’t invent its own permission system, because the constraints surfaced the existing one.
The Journal’s five entry types were a similar challenge. Five archetypes with different fields, different statuses, different visual treatments — but sharing infrastructure for linked resources, comments, reactions, and activity logging. The SpecCodex work from Part 3 was born directly from getting those entry types right.
What’s Next
NimoNova is a side project built in evenings and weekends. It’s not launched yet. But it’s real, it works, and it’s the testbed where ArchCodex and SpecCodex were born.
The series so far:
- Part 1: LLMs write code that works but doesn’t fit. ArchCodex is the jig.
- Part 2: The research behind guardrails for coding agents.
- Part 3: Specs that make features deterministic, not emergent.
- Part 4: The thing itself. You’re here.
If you just got here: start with Part 1. It’ll make the whole thing make sense.