Testing Sync at Dropbox (dropbox.tech)
hey all, author of the previous post (https://news.ycombinator.com/item?id=22595782) here, and I'm happy to answer any questions about the system! I think isaac, the author of this post, is here too.
What are some of the trickiest edge cases that you try to account for? I can only assume that this kind of system is enormously complicated, given the high expectations from users and the complexity of the underlying environments.
here are some of the greatest hits from memory:
1) when we upload a file, we first ask the server which blocks we need to upload. this allows us to deduplicate blocks with other files or previous revisions of files you have. I had written some code like...
    let blocks_needed: HashSet<BlockHash> = ...;
    let file_blocks: Vec<BlockHash> = ...;
    let to_upload: Vec<_> = file_blocks
        .into_iter()
        .filter(|b| blocks_needed.contains(b))
        .enumerate();
    for (offset, block_hash) in to_upload { ... }

the bug here is that we're computing the offset into the file after filtering, so we'd be uploading incorrect contents to the server. we have protections elsewhere that would prevent file corruption, but finding this bug pre-commit with trinity was pretty awesome. (a corrected sketch follows after (3) below.)

2) nontrivial interactions between different components in the system are hard to test manually and come for free with trinity. trinity will simulate cancelling file transfers, crashing the system, and even dropping the local databases (to simulate disk corruption). these are all hard enough to test in isolation, but knowing how to combine them is almost impossible. so, having "automated test generation" for these cases is really useful.
3) the "theory of trees" seems like it'd be pretty simple, but canopycheck has found some really interesting cases, especially around moves. when we started, we hadn't really thought about how to handle move cycles, concurrent moves that cross ancestor chains, ...
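to make the fix in (1) concrete, here's a minimal sketch with the offset assigned against the original file before filtering. the `BlockHash` alias and function shape are simplified stand-ins for illustration, not the real types:

    use std::collections::HashSet;

    // simplified stand-in for the real content-hash type
    type BlockHash = [u8; 32];

    fn blocks_to_upload(
        file_blocks: Vec<BlockHash>,
        blocks_needed: &HashSet<BlockHash>,
    ) -> Vec<(usize, BlockHash)> {
        file_blocks
            .into_iter()
            .enumerate() // offset into the original file, computed first...
            .filter(|(_, b)| blocks_needed.contains(b)) // ...then filter
            .collect()
    }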
Whenever I start Dropbox (boot my laptop), it takes ~30 minutes of maxed-out single-core work to catch up since the last update. It's just really annoying, as it makes my laptop quite loud.
Why is that? Google Drive doesn't seem to struggle this much.
So... that's weird. I have a MacBook Air - not the most powerful machine on the planet - and a nearly full 1 TB Dropbox with ~3 million files. Initial resync on boot takes under a minute.
How much data (total size/count of files) do you have?
According to Windows: 371,790 files and 15.6 GB.
hey, can you file a bug from the app? it will collect a report that we can use to diagnose. there's a "Report Bug" option adjacent to "Preferences..."
I'll do that.
if you can email me (my HN username @dropbox.com) the email address associated with your account once you've sent the bug, I can take a look!
Thanks for sharing your experience on this rewrite. I have a few questions:
1. I'm a bit surprised that you don't persist the test scenarios after a failure, only their seeds. Does that mean that when you want to replay one, you have to redo the minimization phase? Or do you have a way to find a seed for the minimized scenario? (A generic sketch of the shrinking idea follows this list.)
2. Do you have tests where you generate a set of operations and play them twice: once with the mocks and once against the real servers, to check that they produce the same results?
3. The article says "Note also the importance of the commit hash, as another type of “test input” alongside the seed: if the code changes, the course of execution may change too!". How do you ensure that a commit really fixes a bug, rather than just changing the execution path to a happy path where the conditions of the bug are never met? By replaying a lot of tests, or by writing a new unit test that exhibits the bug to ensure reproducibility?
4. Do you think it's fair to say that CanopyCheck applies randomized testing at the unit-test level and Trinity applies it at the integration-test level?
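For context on the "minimization phase" in question 1, here's a generic sketch of how randomized testers typically shrink a failing operation sequence: greedily drop operations while the failure still reproduces. This is the general technique, not Nucleus's actual code:

    // greedy shrinking: try removing each operation; keep the removal if
    // the failure still reproduces, otherwise the operation is essential.
    fn minimize<Op: Clone>(ops: Vec<Op>, still_fails: impl Fn(&[Op]) -> bool) -> Vec<Op> {
        let mut ops = ops;
        let mut i = 0;
        while i < ops.len() {
            let mut candidate = ops.clone();
            candidate.remove(i);
            if still_fails(&candidate) {
                ops = candidate; // op `i` wasn't needed; index now holds the next op
            } else {
                i += 1; // op `i` is required to trigger the failure
            }
        }
        ops
    }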
I'm not Sujay, but I worked on this system once upon a time too! (not anymore)
1. It does redo the minimization phase, but the actual execution is extremely fast, so this cost is minimal. Storing test outputs gets pretty expensive when you're running millions of tests, and since there are very few failures, recomputing them is worthwhile.
2. Yes and no! The article talks about this a little, but the "heirloom" system does essentially this, and the "native" filesystem variant of Trinity runs the same Trinity tester code against a real filesystem. The "no" is due to the issues with randomized testing -- since any operation you do can affect the RNG, the exact operation that runs for a particular seed can change if you swap out any part of the system (see the small illustration after this list). For regression tests, the operation sequence can be put into a separate, non-randomized test.
3. both
4. Testing in Nucleus is a sliding scale from "unit-test-like" to "integration-test-like" -- Trinity is mocking plenty of functionality; CanopyCheck is simultaneously testing many different components. It would probably be more accurate to say that CanopyCheck is testing a smaller subset of components with much greater fidelity, and Trinity is testing as much of the sync engine as is practical.
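To illustrate the RNG fragility in answer 2: a minimal sketch (using Rust's rand crate, 0.8-style API, not Trinity's actual RNG plumbing) of why a seed only reproduces a run for as long as the code draws random values in exactly the same order:

    use rand::{rngs::StdRng, Rng, SeedableRng};

    fn main() {
        // same seed => same draws: this is what makes a (seed, commit hash)
        // pair replayable in the first place
        let mut a = StdRng::seed_from_u64(22);
        let mut b = StdRng::seed_from_u64(22);
        assert_eq!(a.gen_range(0..100), b.gen_range(0..100));

        // one extra draw on `a` -- think: a code change that consumes one
        // more random value -- shifts every draw after it
        let _extra: u32 = a.gen();
        println!("{} vs {}", a.gen_range(0..100), b.gen_range(0..100)); // now (almost certainly) different
    }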
Thanks for the answers, I appreciate that.
If you have time, I have another round of questions:
1. Did you try formal methods like TLA+ on the client? I think that the logic covered by CanopyCheck may be a nice target.
2. Do you have tests with several clients running at the same time on a shared directory? In particular, I'm thinking of the termination invariant, where the clients fight because several users have reorganized the directory by moving a lot of stuff, and each client tries to converge in a different direction (i.e., each makes operations that cancel out the ones made by the other clients).
3. The article says "In the Nucleus data model, nodes are represented by a unique identifier". Does it happen that a node has to change its identifier? For example, in a scenario like this one:
    (ada)   $ offline
    (grace) $ offline
    (grace) $ mv ~/Dropbox/shared/TODO.txt ~/Dropbox/private-grace/
    (ada)   $ mv ~/Dropbox/shared/TODO.txt ~/Dropbox/private-ada/
    (ada)   $ echo 'foo' >> ~/Dropbox/private-ada/TODO.txt
    (grace) $ echo 'bar' >> ~/Dropbox/private-grace/TODO.txt
    (ada)   $ online
    (grace) $ online

1. We thought about this at one point, IIRC, and various parts of Dropbox sync have been formally modeled. But CanopyCheck and Trinity are useful in part because they test the real, production code -- the hash-indexing bug that Sujay mentioned elsewhere on this page is an implementation error, not a design error.
2. Kind of. There are tests for interactions between instances (in particular, between different Dropbox folders on the same machine), but running Dropbox twice on the same folder is explicitly not supported.
3. Yes, this happens. There's logic to handle these changes, since some applications expect to use in-place edits and others swap in a new file -- you don't actually need grace to have an interesting situation. You can experiment with this yourself in a Dropbox folder :)
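For context on the "in-place edits" vs. "swap in a new file" distinction: a hypothetical sketch of the two save patterns applications use (not Dropbox code; the helper names are made up). The swap pattern puts the content in a brand-new file, which is why a sync engine may see a delete plus an add rather than an edit of the same node:

    use std::fs::{self, OpenOptions};
    use std::io::Write;

    fn edit_in_place(path: &str) -> std::io::Result<()> {
        // appends to the existing file: same underlying file, so a sync
        // engine can keep treating it as the same node
        let mut f = OpenOptions::new().append(true).open(path)?;
        writeln!(f, "new line")
    }

    fn edit_by_swap(path: &str) -> std::io::Result<()> {
        // "atomic save": write a fresh file, then rename it over the old
        // one; the old file's identity is gone
        let tmp = format!("{path}.tmp");
        fs::write(&tmp, "entire new contents\n")?;
        fs::rename(&tmp, path)
    }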
No question, just wanted to say that these blog posts are fantastic! Thank you for writing them.
How smart is the sync engine's "merge" system? Can it cleverly merge files, or if it sees a single file that has diverged, does it just create a conflict copy and let the user sort it out?
Like, if both Alice and Bob have changed somefile.txt, but Alice has added a few lines at the beginning and Bob a few lines at the end (i.e. something that "git merge" would handle cleanly), will the sync engine merge that into one file?
I'm very curious about what thoughts you guys had on this issue, because I can see arguments for both sides.
yeah, we don't merge file contents. it's really hard to do this well, since you'd need to implement something like operational transforms for every different file type, and the cost of getting it wrong is really high, since we'd be corrupting users' files.
I can see us perhaps doing this in the future for more targeted file types where the "rebase" behavior is well-defined, though.
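Since merging is off the table, the safe fallback is the conflicted-copy behavior the question mentions. A hypothetical sketch of such a naming scheme (the exact format Dropbox uses may differ; this is for illustration only):

    // derive a conflicted-copy name instead of merging the contents
    fn conflicted_copy_name(file_name: &str, host: &str, date: &str) -> String {
        match file_name.rsplit_once('.') {
            Some((stem, ext)) => format!("{stem} ({host}'s conflicted copy {date}).{ext}"),
            None => format!("{file_name} ({host}'s conflicted copy {date})"),
        }
    }

    // conflicted_copy_name("somefile.txt", "alice-laptop", "2020-03-21")
    //   => "somefile (alice-laptop's conflicted copy 2020-03-21).txt"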
I don't think Dropbox does merging at all, does it?
The file object model wouldn't work for merging Word documents and such.
Thanks for sharing! I've worked on similar applications - it's always cool to see other successful architectures.
Interesting. I learned this technique of keeping a third synchronization intermediary from offlineimap and this helpful blog post: http://blog.ezyang.com/2012/08/how-offlineimap-works/
Thanks for sharing. This is indeed a very helpful blog post.
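For anyone curious, a hypothetical sketch of the "third intermediary" idea from the linked post: track the last-synced state alongside local and remote, and compare both sides against it to decide which one changed. All names here are made up for illustration (None = absent, Some(h) = a content hash):

    #[derive(Debug)]
    enum Action { Nothing, Upload, Download, Conflict }

    fn decide(base: Option<u64>, local: Option<u64>, remote: Option<u64>) -> Action {
        if local == remote {
            Action::Nothing       // both sides already agree
        } else if remote == base {
            Action::Upload        // only the local side changed
        } else if local == base {
            Action::Download      // only the remote side changed
        } else {
            Action::Conflict      // both sides changed differently
        }
    }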
> To cover this layer of our codebase, we also run Trinity in a “native” mode, targeting the platform’s actual filesystem. However, running against the native filesystem incurs a huge performance penalty (roughly 10x), which in turn means Trinity Native can’t test as many different seeds.
What platforms do you test in “native” mode? What hardware backs it?
we just use our regular CI infrastructure for running linux, macOS, and windows. we have infrastructure for managing VMs for our different supported platforms, setting up filesystems, and so on.
here's a talk from one of our engineers on our macOS CI infrastructure: https://blog.macstadium.com/blog/virtualizing-mac-infrastruc...
I'm sure Dropbox is first-class behind the scenes, but the desktop app has become simply awful over time, to the point that I almost feel sorry for them, and embarrassed in front of other people, when I use Dropbox.
I find the capitalization of the title here on HN to be rather misleading.
I thought Dropbox was actually testing Sync (https://www.sync.com/) in-house and doing a public comparison.