EdenSCM – A cross-platform, scalable source control management system (github.com)
Facebook rewrote Mercurial, while Microsoft has essentially expanded git with their own virtual file system, VFSforGit [0], and a bunch of performance improvements.
And Google has Piper and Srcfs http://google-engtools.blogspot.com/2011/06/build-in-cloud-a...
I found this piece to have all the interesting details: https://cacm.acm.org/magazines/2016/7/204032-why-google-stor...
Curious why facebook went for mercurial?
They explain it here: https://engineering.fb.com/core-data/scaling-mercurial-at-fa...
The official reason is that the "internals of Git" weren't conducive to the kinds of invasive changes they needed/wanted. But I think the truth is closer to being that it was going to be too hard/slow to get those invasive changes past the Git mailing list.
Here's an example of a FB eng reaching out to the mailing list: http://git.661346.n2.nabble.com/Git-performance-results-on-a...
> The official reason is that the "internals of Git" weren't conducive to the kinds of invasive changes they needed/wanted. But I think the truth is closer to being that it was going to be too hard/slow to get those invasive changes past the Git mailing list.
Which is about the same thing, mercurial was built to be at least somewhat pluggable, so facebook could build their extensions independently, and work to get a subset of them integrated into mainline. Git is designed so it can be built on top of, but not really under or within.
Of the five largest tech companies, only one has created a monorepo in git after Google and Facebook chose mercurial and started proving it out. I think it's less about what the project was designed for and more about the community's ability to appreciate the challenges facing these large companies. You can tell from the post, because all the feedback is "have you tried doing something totally different?" rather than "that's an interesting scaling challenge: how can we make git perform better at scale? What are the properties we'd have to trade off? How do we manage that tension, or is this really fundamentally incompatible, and if so, how do we clearly communicate why it's incompatible with the project's goals?"
At a technical level I believe both Google and Facebook engineers make core changes to mercurial when they need to so I don’t think that’s the root philosophical difference.
This was a super fun read, thanks for sharing.
The mailing list piece is from 2012 and describes how git is very slow on a synthetic repo with millions of files and commits. Today, my current place of work has a monorepo that's approaching the size described in that mailing list post, but git seems to be holding up just fine. If you check out a branch that's far enough away from master it takes a minute, but add, rebase, commit, status and blame are all negligibly impacted speed-wise. The only issue we run into is rejected non-conflicting pushes to master during peak hours, with maybe several dozen engineers trying to merge and push to master simultaneously.
Does anybody have any insight into what’s changed in git internally since 2012 to support bigger repos?
I don't think there is one single change that made a huge difference. I follow the changelogs posted on the mailing list, and of the performance-related changes, it's often "we got a 3-5% speedup on this benchmark on this fs without making things worse on others".
Over 8 years and tens of those changes, it adds up to a significant performance improvement.
Git works nicely on Linux for Chromium, with over 540K files spread across a few modules. On Mac and Windows it is kind of tolerable, but with git status taking 5 or more seconds I started to use "git status <directory>" to get faster feedback. And git blame can take more than a minute, so it is often better to look at the log and guess the changes from it.
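In case it helps anyone else, the workaround is really just passing a pathspec; the paths below are made-up examples:

    # limit status to the subtree you actually touch
    git status -- src/ui/
    # skim a file's recent history instead of waiting on a full blame
    git log -p --follow -- src/ui/widget.cc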
Free RAM has a huge influence on how much of the filesystem tree is cached by the kernel. This is visible from just `time find`. It could just be a case of developer workstations going from e.g. 4 GB to 16 GB.
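A rough way to see it for yourself, assuming the tree fits in the page cache, is to run the same traversal twice and compare the cold and warm timings:

    # first run is mostly disk-bound; the immediate repeat is served from the page cache
    time find . -type f | wc -l
    time find . -type f | wc -l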
Some responses are funny.
> Have you considered upgrading all of engineering to SSDs? 200+GB SSDs are under $400USD nowadays.
> The official reason is that the "internals of Git" weren't conducive to the kinds of invasive changes they needed/wanted. But I think the truth is closer to being that it was going to be too hard/slow to get those invasive changes past the Git mailing list.
Funny, but I'm starting to wonder if there's an affect-based complement to Conway's Law.
(googler, opinions are my own)
Google interestingly also still contributes to mercurial [0], but I don't think they've officially said why externally.
But yes, my understanding is that with mercurial it is much easier to replace parts of the flow without entirely hacking the codebase to bits. Microsoft's solution for git required them to fork the codebase, which I still don't think has been fully merged upstream yet? And as others said, the mercurial community was more open to enterprises contributing things than git's is.
BTW: with so many smart and connected people interested in source control management in one place, does anyone know what happened to Veracity SCM (http://veracity-scm.com)?
It was very promising, but then it suddenly stopped getting updates and (more or less intentionally, it seemed) links stopped working. The site is still up 7 years later...
The developers went on to work on other things: https://web.archive.org/web/20130915093113/veracity-scm.com/...
ah, thanks, I'd been trying to figure that out for years because I felt they got so much right UX-wise.
There is more tooling needed in general - just one recursive grep will populate the entire EdenFS.
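e.g. even something as innocent as a plain recursive search over the checkout,

    # the pattern doesn't matter; the walk itself touches every file
    grep -rn TODO .

forces every file to be fetched and materialized, which defeats the point of the lazy filesystem.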
I suppose FB has better tools. But I won’t touch this until the ecosystem is sufficient (and also, because git and hg are perfectly sufficient for the monorepo I oversee)
The file system, the VCS, and the ability to index/grep the repo are just the tip of the iceberg. Pretty much everything that needs to access the filesystem or which depends on the contents/structure of the repo in some way needs to be re-built from scratch or otherwise dramatically customized to operate in this new landscape. That means a long catalog of tools and many years of effort.
This was already starting to become true even before FB switched to Eden, and before it switched to Mercurial (grepping, for example, was already being served by a custom grep service back in the Git days).
Interesting. This might be a step in the direction of being able to store big files in repositories without hassle.
However, perhaps OS level support would be preferred. Imagine you have a type of symbolic link that is not just followed, but executed when you access it. That would be really powerful and would allow this kind of optimization. And you wouldn't even need to install or run anything.
Sounds a lot like FUSE?
Does this work together with a build cache? My dream setup for dealing with building huge code bases is a file system integrated with the version control system to only download files when they are accessed (which sounds like what this does) but also employing a build cache and module system, so it doesn't even need to download and compile any module that has not been touched, it just downloads the result from the build cache instead.
Seems like they can be separate systems. For example, Google's Bazel supports build caches.
https://docs.bazel.build/versions/master/remote-caching.html
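Wiring it up is mostly a single flag; a sketch along these lines, where the target and cache endpoint are placeholders:

    # reuse action outputs from a shared cache instead of rebuilding them locally
    bazel build //app:server --remote_cache=https://cache.example.com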
EdenFS makes it possible to query for the hashes of files, so any cached outputs can be looked up from the build cache without having to download the sources from source control. There's more to do here, but that's a big benefit of having a custom filesystem.
This is exactly what Google's internal system does: https://cacm.acm.org/magazines/2016/7/204032-why-google-stor...
The SCM, the CI, the merge request system, the build system: they're all tied together into a single thing. Google has open sourced parts of it, but nothing exists as a whole unit the way Google does it.
Nix has a build cache which is keyed by a hash of the build inputs, so a package will only be rebuilt if sources or dependencies change. The cache is immutable and is populated on demand from the network. https://nixos.org/nix/
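You can see the hit/miss behaviour with any package, e.g.:

    # if the hash of hello's inputs is already in the binary cache,
    # the result is downloaded instead of being rebuilt locally
    nix-build '<nixpkgs>' -A hello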
> A virtual filesystem for speeding up the performance of source control checkouts.
To describe it as a filesystem matches my thinking [0]: "It already resembles a network file system, so it should provide an interface nearly as easy to use."
If it really takes off as an Open Source project, we might be able to "mount" repositories eventually.
> we might be able to "mount" repositories eventually.
If I were going to try something other than git, it would be fossil => https://fossil-scm.org/home/doc/trunk/www/index.wiki
Fossil is awesome, but (currently) would not scale to Facebook-size repos. It seems to be fine with long histories (lots of commits), but bogs down on lots of files (based on discussions surrounding a large repo considering moving to fossil).
For personal projects, it's absolutely the bee's knees. I can have multiple checkouts of the same repo and ask fossil about every single repo I have; it has a really pleasant CLI, a web interface (which I personally hardly use anymore), ticket management, ... sort of github-in-a-box, but more pleasant.
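For anyone who hasn't tried it, the day-to-day looks roughly like this (the repo path is just an example):

    # one repository file, as many working checkouts as you like
    fossil open ~/repos/project.fossil
    # list every repository fossil knows about on this machine
    fossil all ls
    # launch the built-in web interface
    fossil ui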
I think the one big differentiator for this is:
"EdenSCM is not a distributed source control system. In order to support massive repositories, not all repository data is downloaded to the client system when checking out a repository"