We played with a version of the protocol that did try to stub out the entire filesystem; the idea was that you could then run the agent on a different machine from the codebase.
The problem is twofold:
- a lot of the magic of agents is in the tools they have and the prompts they're configured with; if you stub out the filesystem, you're also forced to rewrite all of those tools.
- performance is a challenge; it's actually really nice that the agent has direct access to the filesystem (sandboxing concerns notwithstanding), so it can run ripgrep instead of having to load each file over the protocol.
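As a concrete illustration of that second point, here's a minimal sketch of the kind of search tool that only works because the agent and the codebase share a filesystem. The `searchProject` helper is hypothetical, not part of the protocol:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Hypothetical tool: because the agent shares a filesystem with the
// editor, it can shell out to ripgrep directly instead of streaming
// every candidate file over the protocol.
async function searchProject(pattern: string, root: string): Promise<string[]> {
  try {
    const { stdout } = await run("rg", ["--files-with-matches", pattern], {
      cwd: root,
    });
    return stdout.split("\n").filter(Boolean);
  } catch (err: any) {
    // ripgrep exits with status 1 when there are no matches at all.
    if (err.code === 1) return [];
    throw err;
  }
}
```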
So, although the current approach introduces some race conditions, the pragmatic solution is to accept them. Proxying reads and edits through the editor is important because the agent is typically editing files you're also editing, and conflicts are hard to resolve if you go via filesystem writes. Searching is much less sensitive to the stale-context problem, and if your search results are missing a file you've just edited, you can always tell the model where to look.
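To make the proxying concrete, here's a sketch of what reads and edits routed through the editor could look like, assuming JSON-RPC methods along the lines of `fs/read_text_file` and `fs/write_text_file`; the `Connection` wrapper and parameter shapes here are assumptions for illustration:

```typescript
// Hypothetical JSON-RPC wrapper; a real client/agent library would
// provide something equivalent.
interface Connection {
  request<T>(method: string, params: unknown): Promise<T>;
}

// Reads go through the editor, so the agent sees unsaved buffer
// contents rather than stale state on disk.
async function readTextFile(
  conn: Connection,
  sessionId: string,
  path: string,
): Promise<string> {
  const { content } = await conn.request<{ content: string }>(
    "fs/read_text_file",
    { sessionId, path },
  );
  return content;
}

// Writes land in the editor's buffer too, so a conflict with the
// user's in-flight edits surfaces in the editor instead of as a
// disk-level race.
async function writeTextFile(
  conn: Connection,
  sessionId: string,
  path: string,
  content: string,
): Promise<void> {
  await conn.request("fs/write_text_file", { sessionId, path, content });
}
```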
We also used to notify the Zed native agent whenever files changed, but this had the unexpected downside of potentially sending your model a lot of confusing and irrelevant data.
In summary, a general-purpose filesystem-sharing API would look very different from something focused on making agents work well. There are probably areas where we can improve, but we'd love to see actual reports of problems the current approach causes in real use.