Is anyone else drowning in terminal tabs running AI coding agents?
I work on a 300k line monorepo, just me and my co-founder. At any given moment I have 3-6 CLI agents (Claude Code, Codex, Aider) running simultaneously across git worktrees. The throughput is great. Managing it is not.
Every tool I found was either another agent, an IDE plugin, an abstraction over my CLIs, or didn't understand worktrees. Conductor is Mac-only and getting buggier. Warp and Ghostty were interesting but not opinionated enough for my worktree-to-PR workflow.
So I built Pane (https://runpane.com). It's a keyboard-driven desktop app that gives you one interface to monitor and control CLI agents across worktrees. It ships with a command palette and simple shortcuts (Ctrl + Up/Down to switch between worktrees, VS Code shortcuts for other basics). Each worktree gets a run button that auto-generates a script (via Claude Code on first run) to spin up on isolated ports, so I can have every branch hot-reloading in its own tab.
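The worktree-per-branch setup underneath this can be sketched in a few git commands. This is a minimal illustration only: the temp-repo layout, branch names, and hand-picked ports are assumptions for the sketch, not Pane's actual generated script.

```shell
#!/bin/sh
# Sketch: one worktree per feature branch, each with its own dev-server port.
set -e

cd "$(mktemp -d)"
git init -q -b main monorepo
cd monorepo
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "init"

# One worktree per feature branch, each a full checkout next to the main one
git worktree add -q ../wt-feature-auth    -b feature-auth
git worktree add -q ../wt-feature-billing -b feature-billing

# Give each worktree its own dev-server port so every branch hot-reloads
# independently, e.g. with a Vite-style app:
#   (cd ../wt-feature-auth    && npm run dev -- --port 5174)
#   (cd ../wt-feature-billing && npm run dev -- --port 5175)

git worktree list
```

Because each worktree is a full checkout on its own branch, agents (and dev servers) never step on each other's working directories.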
Been using it daily since last week. Hard to go back. I fully open-sourced it, so you can ship your own features to Pane, using Pane: https://github.com/Dcouple-Inc/Pane
How are others handling multi-agent workflows?

A 300k-line monorepo with 3-6 CLI agents running simultaneously across git worktrees is an interesting production deployment of multi-agent coding. Most discussions are about single-agent workflows; the coordination problems at multi-agent scale are different and less explored. The key insight in your description is the worktree-per-agent isolation pattern. Without that isolation, agents would interfere with each other's changes unpredictably. With it, you get parallel throughput, but the coordination overhead shifts to you: when do worktrees get merged? How do you handle two agents modifying overlapping code? The Agile Vibe Coding Manifesto's concerns about accountability and traceability become especially important at multi-agent scale. "Every change has traceable intent" is harder to maintain when six agents are generating changes in parallel. The manifesto's argument would be that the investment in architectural constraints and explicit context for each agent is what prevents the throughput gains from creating an unmaintainable diff explosion. Curious what your PR/merge workflow looks like; that seems like where the real complexity lives: https://agilevibecoding.org

the way i handle it in pane: every pane is one worktree, and the diff viewer + keyboard shortcuts (squash, rebase, merge) are right there before you ever push. so the review step isn't a context switch -- it's just... the next thing in the same window. the harder problem you're pointing at is agent-to-agent coordination. i don't try to solve that at the orchestration layer. i keep it simple: one agent per feature, and they don't touch each other's worktrees. when feature A needs something from feature B, i merge feature B first. boring, but it doesn't blow up. the ADR / architectural governance point from agilevibecoding.org is interesting though -- have you seen teams actually ship something like that in practice? or is it mostly theoretical at this scale?

How much are you spending before you even see $1 of revenue? Nice tool, but the agentic workflow doesn't sound cost-efficient.

Fair enough, I spend a maximum of $200 a month given I use the Max plan from CC. I don't find myself ever hitting the weekly limits -- but recently I've gotten close!

Seconding the "Gee, must be nice to be able to set money on fire" sentiment. Personally, I haven't quite converged on a config where I can get the tooling successfully talking to my locally hosted models, and running, albeit slowly. Yet. Damn well not using other people's money to subsidize some cloud provider's build-out. So no, I suppose I don't have your problem. I also don't know if I'll ultimately stick with the tooling. It's so damn different from how I'm used to dealing with things that I'm not entirely sure I'll ever use most LLMs as anything but boilerplate generators, summarizers, or delvers into the latent space of humanity that bring back breadcrumbs and validate understanding/analyze/cross-reference, etc. For everything else, there's grep.

Fair enough -- with that said, be mindful of the times. I am pretty deeply involved in Seattle's startup scene, and a notable startup recently laid off 35% of its engineers, and may lay off more, specifically because the engineering team was hesitant to adopt AI. In another case, an ex-IPO'd Seattle founder just hired two of my 21-22-year-old friends who are very AI-native; they use Pane every day for their dev work. The world is changing.

Aye, understood. Not like I'm not trying it. I'm also being mindful of its limitations, though. GIGO applies, and we're going to be making a lot more garbage a lot faster. Bullshit Asymmetry Principle and all.
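The one-agent-per-feature, merge-B-first rule described above can be sketched as plain git. The throwaway repo, branch names, and shared file here are illustrative assumptions, not the actual workflow script:

```shell
#!/bin/sh
# Sketch: feature-b is the dependency; it lands on main before feature-a
# starts from (or rebases onto) main, so the agents never share a worktree.
set -e
g() { git -c user.name=dev -c user.email=dev@example.com "$@"; }

cd "$(mktemp -d)"
g init -q -b main repo
cd repo
g commit -q --allow-empty -m "init"

# feature-b: the dependency, built by its own agent in its own worktree
g worktree add -q ../wt-b -b feature-b
echo "shared helper" > ../wt-b/helper.txt
(cd ../wt-b && g add helper.txt && g commit -q -m "feature-b: add helper")

# Land feature-b on main first...
g merge -q --no-edit feature-b

# ...then feature-a branches off main and already sees the helper
g worktree add -q ../wt-a -b feature-a
[ -f ../wt-a/helper.txt ] && echo "feature-a can use the helper"
```

The merge order is the whole trick: because B is on main before A depends on it, there is never a cross-worktree edit and conflicts stay confined to ordinary branch merges.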