pandō IDE for agents
Your agent edits text. Let it refactor code.
add tracing to async handlers
The Limitation
Today's tools are fast, flexible, and great at finding text.
But code isn't text.
It has structure, syntax, relationships, and meaning.
Text tools see none of it — so agents loop, re-check, and hope it compiles.
The Category
The transactional layer for code. Agents are probabilistic; pandō is the deterministic system underneath.
Imagine a 1,000-page novel. Your protagonist is George.
Halfway through, you decide he's now Camellia.
You need to update every reference — and change nothing else.
George → Camellia
The Guarantee
Structure, not Strings.
This gives you
Correct Syntax
Code the agent generates is syntactically valid (compiler-checked before commit).
Deterministic AST
Transformations validate against the Abstract Syntax Tree before commit.
# Rename UserService → AuthService
rename UserService → AuthService
├ indexing symbol graph...
├ 47 references across 12 files
├ running compiler preflight...
└ preflight passed
done 0 syntax errors · 0 collateral changes
▌
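To make the principle concrete, here is a toy sketch of an AST-validated rename using Python's `ast` module. It is an illustration of the idea, not pandō's engine: the `rename_symbol` helper and the sample snippet are invented for this example. The key move is the preflight re-parse before anything is committed.

```python
import ast

def rename_symbol(source: str, old: str, new: str) -> str:
    """Rename every true reference to `old`, then verify the result parses."""
    tree = ast.parse(source)

    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id == old:
                node.id = new          # a real reference, not a string match
            return node

        def visit_ClassDef(self, node):
            if node.name == old:
                node.name = new        # the definition site itself
            self.generic_visit(node)
            return node

    new_source = ast.unparse(Renamer().visit(tree))
    ast.parse(new_source)              # preflight: fail loudly, commit nothing
    return new_source

code = "class UserService: pass\nsvc = UserService()"
print(rename_symbol(code, "UserService", "AuthService"))
```

String literals and comments are untouched because the transformer only visits name and definition nodes, which is exactly why "0 collateral changes" is checkable rather than hoped for.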
This gives you
Semantic Undo
Every edit gets a snapshot. Rollback from anything.
Atomic Transactions
Edits apply atomically. Either the whole graph updates, or nothing does.
# Atomic rename with rollback
rename db → database
├ snapshot S_20260223_091541 created
├ updated 23 true references
├ skipped 4 string literals (intentional)
└ skipped 2 comments (intentional)
applied atomically rollback available
▌
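The snapshot-then-rollback contract above can be sketched in a few lines of Python. This is a minimal model, not pandō's implementation: `apply_atomically`, the in-memory `files` dict, and `bad_edit` are all invented for illustration.

```python
from copy import deepcopy

def apply_atomically(files: dict[str, str], edits) -> None:
    """Apply every edit or none: snapshot first, restore on any failure."""
    snapshot = deepcopy(files)              # the semantic-undo point
    try:
        for path, transform in edits:
            files[path] = transform(files[path])
    except Exception:
        files.clear()
        files.update(snapshot)              # rollback: pre-edit state restored
        raise

files = {"a.py": "x = 1", "b.py": "y = 2"}

def bad_edit(_source):
    raise ValueError("edit failed")

try:
    apply_atomically(files, [("a.py", lambda s: s + "\nz = 3"),
                             ("b.py", bad_edit)])
except ValueError:
    pass

print(files)  # {'a.py': 'x = 1', 'b.py': 'y = 2'}, first edit rolled back too
```

The point of the shape: the partial success on `a.py` never survives the failure on `b.py`. Either the whole graph updates, or nothing does.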
This gives you
10–100×
Text tools require LLM verification for every match. pandō edits the reference graph directly. No uploads, no inference.
Local Execution
pandō talks to your local disk, not the LLM. Most searches take milliseconds — just a database query.
# Rename across large monorepo
rename processPayment → handlePayment
├ symbol lookup 0.3ms (local index)
├ references found 785 across 171 files
├ network calls 0
└ applying changes...
done in 14.2s all local
text-tool equivalent: ~25 min (785 LLM round-trips)
▌
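Why "milliseconds, just a database query"? Because a reference lookup against a prebuilt local index is a dict hit, not an LLM round-trip. Here is a deliberately naive sketch; the `SymbolIndex` class and its whitespace tokenizer are invented for illustration and are far cruder than a real symbol graph.

```python
from collections import defaultdict

class SymbolIndex:
    """Toy local index: symbol -> list of (file, line) references."""

    def __init__(self):
        self.refs = defaultdict(list)

    def index_file(self, path: str, source: str):
        # Crude tokenization, enough to show the lookup shape.
        for lineno, line in enumerate(source.splitlines(), 1):
            for token in line.replace("(", " ").split():
                self.refs[token].append((path, lineno))

    def lookup(self, symbol: str):
        return self.refs[symbol]   # local dict hit: microseconds, zero network

idx = SymbolIndex()
idx.index_file("billing.py", "def processPayment():\n    pass")
idx.index_file("checkout.py", "processPayment()")
print(idx.lookup("processPayment"))
# [('billing.py', 1), ('checkout.py', 1)]
```

Indexing is paid once, up front; after that, every rename starts from a lookup like this instead of shipping 171 files anywhere.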
This gives you
Token Savings
Achieves >100× token compression on some operations: a rename costs the same whether it touches one reference or a thousand.
O(n) → O(1) Cost
With text tools, token cost for some operations is O(n), where n is the number of references. With pandō it is O(1): just the operation parameters plus the symbol name.
# Token cost: rename across 64 files
text tools (search & replace via LLM):
files sent to model 64
avg tokens / file ~800
total ~51,200 tokens O(n)
pandō:
tokens used ~40 O(1) — same for 1 or 1,000 refs
1,280× fewer tokens cost scales with intent, not codebase
▌
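The arithmetic behind the table, as a tiny Python sketch. The ~800 tokens/file and ~40-token figures are the example's assumptions, not measurements, and `pando_style_tokens` is an invented name.

```python
def text_tool_tokens(files: int, avg_tokens_per_file: int = 800) -> int:
    """Text tools ship every matching file through the model: cost is O(n)."""
    return files * avg_tokens_per_file

def pando_style_tokens() -> int:
    """A structured rename sends only the intent ('rename a -> b'): O(1)."""
    return 40  # rough figure from the table above, same for 1 or 1,000 refs

print(text_tool_tokens(64))                          # 51200
print(text_tool_tokens(64) // pando_style_tokens())  # 1280
```

The constant on the O(1) side barely matters; what matters is that it stays flat while the O(n) side grows with the codebase.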
This gives you
Reduced Exposure
Some operations send no code at all to the LLM — just the intent of the transform.
Scalable Codemods
Operations like FMR take O(1) input, produce O(1) output, and can modify arbitrarily large volumes of code.
# FMR: add structured logging
fmr Logger.info(*) → Logger.info(*, {structured: true})
├ scanning AST nodes...
├ 312 matches across 89 files
├ bytes sent to model 0
└ applying changes...
done in 2.1s 312 replacements · 0 bytes exposed
▌
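A codemod with the same shape as the FMR session above, sketched with Python's `ast` module. The `AddStructuredFlag` transformer and the `Logger.info` target are illustrative; this is not pandō's pattern language, just a demonstration that one O(1) rule can rewrite any number of call sites without a model seeing the code.

```python
import ast

class AddStructuredFlag(ast.NodeTransformer):
    """Rewrite Logger.info(...) -> Logger.info(..., {'structured': True})."""

    def visit_Call(self, node):
        self.generic_visit(node)
        f = node.func
        if (isinstance(f, ast.Attribute) and f.attr == "info"
                and isinstance(f.value, ast.Name) and f.value.id == "Logger"):
            node.args.append(ast.Dict(
                keys=[ast.Constant("structured")],
                values=[ast.Constant(True)]))
        return node

src = 'Logger.info("user created")'
print(ast.unparse(AddStructuredFlag().visit(ast.parse(src))))
# Logger.info('user created', {'structured': True})
```

The rule is a few lines whether it fires 3 times or 312 times; the code volume it touches never flows through the model.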
Supported: Clojure · TypeScript · JavaScript · Python · Java · Dart · C# · C/C++
Expanding to: Rust · Go · Perl
LLMs treat code as text.
pandō lets them grok it.
You wouldn’t code in Notepad. Why does your agent?
Free for personal use · Team? Let’s talk