Show HN: Tangents – Non-linear LLM chat with hands-on context control
tangents.chat

LLM chat breaks when you're learning. Side questions either clutter the main chat, or you copy/paste into a new chat and lose the context.
Tangents makes the workflow explicit. You can branch off any message to chase a rabbit hole, then merge the findings back without polluting the main thread.
Key Features:
- Branching: Select text in any message to fork a new chat.
- The Collector: Highlight snippets across different branches to build a "shopping cart" of context.
- Compose: Send a prompt using only what is in your Collector (explicit context control).
- Console: Inspect exactly what context is being sent to the LLM before you hit enter.
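If it helps to picture the mechanics, here's a simplified TypeScript sketch of the shapes involved. The names are illustrative, not the real schema:

    // Illustrative types only -- simplified, not the actual Tangents schema.

    interface Message {
      id: string;
      role: "user" | "assistant";
      content: string;
    }

    // A branch hangs off a source message plus the exact span you selected.
    interface SpanAnchor {
      messageId: string;
      start: number; // character offsets into the source message's content
      end: number;
    }

    interface Branch {
      id: string;
      parent: SpanAnchor | null; // null for the root thread
      messages: Message[];
    }

    // Collector items reference spans rather than copying strings,
    // so Compose can rebuild the exact quoted context at prompt time.
    interface CollectorItem {
      id: string;
      anchor: SpanAnchor;
    }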
How it's different: It is not a node-graph canvas; it keeps the linear chat UI but allows inline branching. It is not an agent framework; it is a tool for humans who want manual control over the LLM's context window.
-----
View it here: https://tangents.chat/hn
Note: the `/hn` page is a no-login, no-API-key interactive demo (autoplay is just the default walkthrough; you can pause and click around). The full app currently supports OpenAI with BYOK, which is only needed when you want real model calls. More detail on what's different under the hood:
- Branches are anchored to a source message + selected span (not freeform threads).
- Collector items are references back to those spans, so Compose can build a prompt from explicit citations rather than chat-history drift.
- The context compiler shows the exact prompt stack + token budget, and lets you exclude/pin items to control what survives truncation (sketched below).

Feedback I'd love: does Branch + Collector + Compose feel faster than "open a second chat window + copy/paste", or does it feel like extra steps?