Version Control for AI Coding
branching.app

Baffling. Not at all clear from the site or video what this does, what problem it is solving, and what about LLM coding is different such that it needs new ideas in version control. Is it just that there are more commits and more conflicts because people are pushing more garbage without regard for consistency and stability? I would suggest solving that by pushing less garbage, or at least having fewer people pushing garbage to the same place at the same time.
How does it resolve conflicts? If you want to resolve conflicts automatically, try the excellent Mergiraf, which works by looking at the AST rather than the line-by-line diff: https://mergiraf.org/
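For anyone wanting to wire Mergiraf (or any AST-aware tool) into plain git, git's custom merge driver mechanism is the hook. A sketch follows; the `mergiraf merge %O %A %B` command line is an assumption for illustration, not Mergiraf's documented invocation, so check its docs for the exact flags:

```shell
# Route source files through a custom merge driver via .gitattributes:
echo '*.rs merge=mergiraf' >> .gitattributes

# Register the driver. %O/%A/%B are git's documented placeholders for the
# base, current, and other versions of the conflicted file; the mergiraf
# command line itself is an ASSUMPTION -- consult Mergiraf's docs.
git config merge.mergiraf.name "AST-aware structural merge"
git config merge.mergiraf.driver "mergiraf merge %O %A %B"
```

With that in place, `git merge` invokes the driver for matching files and only falls back to normal conflict markers when the driver reports failure.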
Branching continuously synchronises your Git repository with GitHub and automatically resolves conflicts on rebase - removing manual pull/push, branch management, and conflict resolution. The exact merge strategy is evolving - we are starting with a deterministic "take incoming" strategy and testing structural and AI-assisted options. The guiding principle: automate the routine merges, leave full control in the developer's hands when it matters.
You are right that some situations do require careful inspection of changes to avoid "garbage". In other cases you might not care about internals if behaviour looks correct, e.g. for a prototype.
Our "progressive depth" approach in Branching aims to serve both cases - default automatic behaviour, and the option to do Git operations manually when you need to - including editing conflicts manually or with tools like Mergiraf. That way the busy path stays fast, and the careful path is still just plain Git.
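For readers wondering what a deterministic "take incoming" resolution looks like in plain git (Branching's internals aren't public, so this is only an analogy), `git rebase -X theirs` auto-resolves conflicts in favour of the commits being replayed. Caveat: during a rebase, "theirs" means the incoming branch being rebased, not the branch you rebase onto.

```shell
# Self-contained demo in a throwaway repo: a conflicting rebase that is
# auto-resolved in favour of the incoming (feature) branch.
set -e
cd "$(mktemp -d)"
git init -qb main repo && cd repo
git config user.email you@example.com
git config user.name you
echo base > file.txt
git add file.txt && git commit -qm "base"
git switch -qc feature
echo feature > file.txt && git commit -qam "feature change"
git switch -q main
echo mainline > file.txt && git commit -qam "mainline change"
git switch -q feature
git rebase -q -X theirs main   # conflict auto-resolved, feature's side wins
cat file.txt                   # prints: feature
```

The point is that "automatic" here is just a fixed policy; the interesting (and risky) part is deciding when such a policy is safe to apply.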
I don't know about this product but I think we need to version control the prompts together with the code.
That won't help much unless the LLM model version remains constant, is recorded alongside the prompt, and can be invoked again. Not impossible for locally hosted models, but nigh impossible for the ever-changing online ones.
You do! I personally collect them into Markdown playbooks, organized in a way that makes sense for the project, and which live in the repo’s .claude folder (or .ai, or whatever).
I’m confused on what this is based on the landing page.
Version control isn’t specific to AI workflows, so what does this add on top of git?
Is this a worktree type solution so you could make parallel edits?
Not sure calling the product Branching is a good idea. May cause confusion.
It also makes it impossible to find via searching.
I watched the video and I don't really understand how this maps to the underlying Git operations and what it can do. What happens if I make changes locally while Cursor is doing something? Is this detected properly? (That might be useful.) Can I use it with Claude Code too in some way? Is it primarily for syncing with external tools like Lovable?
Also, the ChatGPT generated copy for the landing page is somewhat off-putting.
We need new version control workflows, or at least a usability layer on top of git, given the proliferation of agentic coding. But this is not it: I'm not sure what it's actually doing, and it's opaque.
I've been using Jujutsu for the past 6 months and I love its CLI. It uses git as the backend, and remains compatible with all your existing scripts if you add the --colocate flag. You should give it a try.
JJ is great - we actually build on it! Branching uses JJ for commit-graph storage, then layers live-sync and automatic conflict resolution on top. It started life as the VisualJJ extension for VS Code and Cursor and is now available as a standalone app as well, so any editor can benefit.
Take a look at my thoughts on version control and vibecoding
https://github.com/TZubiri/keyboard-transpositions-checker
My idea is that we should not commit LLM written code, but rather we should commit the prompts. The LLM prompts are source code, the LLM code is target code. If you use typescript and scss, you would commit that, not the generated js and css.
That LLMs are typically non-deterministic introduces some issues, but surely you can achieve determinism with seeds, specific model revisions, and a temperature of 0.
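To make the "prompts are source, generated code is a build artifact" idea concrete, here is a minimal sketch of a build step. Everything here is hypothetical: `generate_code` is a deterministic stub standing in for a pinned LLM call (fixed model revision, fixed seed, temperature 0), and the lockfile layout is invented for illustration.

```python
import hashlib
import json
import pathlib

MODEL = "example-model-2024-01-01"   # hypothetical pinned model revision
SEED = 42

def generate_code(prompt: str, model: str, seed: int) -> str:
    """Stand-in for a real LLM call; deterministic by construction here."""
    digest = hashlib.sha256(f"{model}:{seed}:{prompt}".encode()).hexdigest()[:8]
    return f"# generated from prompt {digest}\nprint('hello')\n"

def build(prompt_dir: str, out_dir: str) -> dict:
    """Compile every prompts/*.md into out_dir/*.py, and record a lockfile
    tying each artifact to its (prompt hash, model, seed) triple."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    lock = {}
    for prompt_file in sorted(pathlib.Path(prompt_dir).glob("*.md")):
        prompt = prompt_file.read_text()
        code = generate_code(prompt, MODEL, SEED)
        (out / (prompt_file.stem + ".py")).write_text(code)
        lock[prompt_file.name] = {
            "model": MODEL,
            "seed": SEED,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        }
    (out / "build.lock.json").write_text(json.dumps(lock, indent=2))
    return lock
```

Under this model you would commit prompts/ and the lockfile (for provenance), while the generated .py files live in an untracked artifact directory or, as suggested above, in a release.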
I have been thinking similarly, but you need to store the prompt AND the source code. We are far away from deterministic LLMs. I don't even know if this makes sense at all.
I get what you mean, but the term "source code" is starting to get ambiguous. My argument is that the prompt is the source code; the Python code I call the target code, or generated code.
The generated code is stored as a release artifact though; you can find it on the Releases tab on GitHub. It's just not part of the repo, as it's not strictly source code under the Stallman definition, which has technical and legal implications.
You can't as far as I'm aware unless you control the entire batch during inference, or don't use batching which would require you to run your own inference.
"Surely"? Have you tried it?
Yes. I didn't achieve it, but got pretty consistent.
The claim is that surely we can achieve it, not now but, say, 3 months down the road, or 2 years. Good models are cloud only now, but when they get more power (by Moore's law) they will run on device. At that point (and probably way before) we will have determinism.
And don't call me Shirley.
Your example shows everything that is wrong with SWE nowadays. Can you right now generate anything without an internet connection? I guess not. Also what the fuck can I do with that repository? There is no code, nothing useful to me, no automated way to build the application, and the source code can change every time you generate it which means no specifications, no tests, no way to improve this application, and no way to create regression tests. If I ever saw that thing at work, it would go to the trash can.
Have you read the README?
You write as if I were a vibecoder, I'm not.
"There is no code, nothing useful to me,"
There is code. It's in the release tab as a build artifact, not in the diffable repo as source code.
You are expecting this to be a free tool, and for it to clear a thousand hurdles to fit your needs.
"what the fuck can I do with that repository? There is no code, nothing useful to me, no automated way to build the application, and the source code can change every time you generate it which means no specifications, no tests, no way to improve this application, and no way to create regression tests. If I ever saw that thing at work, it would go to the trash can."
I think you may be approaching it as a gratis tool that you can build, consume, and use, and expecting it to be useful to you, easy to build, and to clear 100 hurdles to reach the quality level of other open-source tools like Linux, Git, or whatever. This is not that; think of it more like an article with code. It's designed to be read, which I think you didn't do; the semantics of GitHub made you expect a free, useful thing.
It's an interesting idea
I was today years old when I read that version control has changed because of LLMs.
I’m just gonna keep typing ‘hg commit’ and plow ahead.
Been posted a few times recently, no indication of what changed
And no indication of what it does. Looks like a landing page to evaluate interest before working on it.