Show HN: I got tired of syncing Claude/Gemini/AGENTS.md and .cursorrules
I use Claude, Codex, Cursor, and Gemini on different projects. Each one has its own md file in its own format: CLAUDE.md, AGENTS.md, .cursorrules, GEMINI.md. Four files saying roughly the same thing, four chances to get out of sync.
I kept forgetting to update one, then wondering why Cursor was hallucinating my project structure while Claude had it right.
So I built an MCP server that reads a single YAML file (project.faf) and generates all four formats. Seven bundled parsers handle the differences between them. You edit one file, and bi-sync keeps everything current.
It's an MCP server, so Claude Desktop can use it directly. 61 tools, 351 tests, no CLI dependency.
Try it: npx claude-faf-mcp
Source: https://github.com/Wolfe-Jam/claude-faf-mcp
The .faf format itself is IANA-registered (application/vnd.faf+yaml).
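For anyone curious what "one source, four native outputs" means in practice, here's a minimal sketch. This is not the actual claude-faf-mcp code: the `ProjectContext` shape and the per-tool renderers are made up for illustration, and a real implementation would parse the YAML first.

```typescript
// Hypothetical shape of a parsed project.faf (the real schema may differ).
interface ProjectContext {
  name: string;
  stack: string[];
  rules: string[];
}

// Each target gets its own renderer so the output is idiomatic for that
// tool, rather than one blob symlinked four ways.
const renderers: Record<string, (ctx: ProjectContext) => string> = {
  "CLAUDE.md": (ctx) =>
    `# ${ctx.name}\n\n## Stack\n${ctx.stack.map((s) => `- ${s}`).join("\n")}\n\n## Rules\n${ctx.rules.map((r) => `- ${r}`).join("\n")}\n`,
  "AGENTS.md": (ctx) =>
    `# Agent instructions for ${ctx.name}\n\n${ctx.rules.map((r) => `* ${r}`).join("\n")}\n`,
  ".cursorrules": (ctx) => ctx.rules.join("\n") + "\n",
  "GEMINI.md": (ctx) =>
    `# ${ctx.name}\n\nStack: ${ctx.stack.join(", ")}\n\n${ctx.rules.join("\n")}\n`,
};

function generateAll(ctx: ProjectContext): Map<string, string> {
  return new Map(
    Object.entries(renderers).map(
      ([file, render]) => [file, render(ctx)] as [string, string]
    )
  );
}

const ctx: ProjectContext = {
  name: "demo",
  stack: ["TypeScript", "Node 20"],
  rules: ["Prefer async/await", "No default exports"],
};
const outputs = generateAll(ctx);
console.log([...outputs.keys()].join(", "));
// → CLAUDE.md, AGENTS.md, .cursorrules, GEMINI.md
```

The point of the per-tool renderers (versus a symlink) is that the same rule lands as a markdown bullet in CLAUDE.md but as a bare line in .cursorrules.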
Curious if others are dealing with this multi-AI config problem, or if there's a simpler approach I'm not seeing.

I ran into a related problem: before syncing my context files, I wanted to know which ones were even worth keeping.
Turns out half my .cursor/rules were vague stuff like "follow best practices" that just wasted tokens. And my CLAUDE.md overlapped with AGENTS.md on half the instructions.
So I built a small tool that measures the token cost per file and lints for conflicts/dead weight: https://github.com/ofershap/ai-context-kit
Run `npx ai-context-kit measure` on any project and you get a per-file token breakdown. Helped me cut my context budget by ~40% without losing anything useful.
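The measurement step is easy to sketch. This is not how ai-context-kit actually counts: the ~4 characters/token heuristic and the sample file contents below are assumptions, and a real tool would use a proper tokenizer.

```typescript
// Crude per-file token estimate: ~4 characters per token is a common
// rule of thumb for English text and code.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Return [filename, estimatedTokens] pairs, biggest spenders first.
function measure(files: Record<string, string>): [string, number][] {
  return Object.entries(files)
    .map(([name, text]): [string, number] => [name, estimateTokens(text)])
    .sort((a, b) => b[1] - a[1]);
}

// Hypothetical file contents for the demo:
const files = {
  "CLAUDE.md": "# Project\nPrefer async/await.\n".repeat(10),
  ".cursorrules": "follow best practices\n", // vague rules still cost tokens
};
for (const [name, tokens] of measure(files)) {
  console.log(`${name}: ~${tokens} tokens`);
}
```

Once you see the per-file numbers, the "follow best practices" lines stand out as pure overhead.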
This is the exact problem I kept running into. I'm working on tanagram.ai: it learns your architectural patterns and codebase logic, then auto-detects violations specific to your codebase, so no more editing config files and hoping they get read. It also runs as an agent skill, so Claude/Cursor generate better code before you even review it.
Use git with a repo, like so many people do for their dotfiles. If your agent can't be populated from there as well, you're using the wrong framework/setup.
Totally, git handles syncing files. The problem is that these four files have different formats and conventions: same project context, four dialects. That's why I wrote bi-sync --all, one YAML source, four native outputs.
That's not my experience; LLMs are flexible enough that ln -s is sufficient.
ln -s makes all four files identical. Whichever format you write it in, the other three get the wrong structure. This generates each in its native format.
show me good evals that it actually makes a difference
that is the opposite of what I see
ETH Zurich tested this: LLM-generated prose context = -3% performance, +20% cost. Even human-written = +4% at +19% cost. The problem is prose bloat. Structured formats avoid that by design. https://arxiv.org/abs/2602.11988
There is research that shows the opposite. A literature survey will show you something different from a single paper.
ETH Zurich studied 5,694 PRs across 12 diverse repos.