Marmot – A distributed SQLite server with MySQL wire compatible interface (github.com)

169 points by zX41ZdbW 18 hours ago · 37 comments

_a9 16 hours ago

I used this a while back while running a Wayback Machine-style site for a large social media platform. I wanted to keep it simple with SQLite, but when the site got popular that started to become a problem. Marmot was the only thing I was able to get working with the amount of data I was pulling in. It would sync the master DB from the main archiver server to all the HA servers, so users could access it immediately no matter which HA server they got. The dev team was nice to talk to when I had some issues setting it up.

It was definitely a weird backend setup I had made, but it just worked once set up, so I didn't have to touch any of the frontend code.

maxpert 16 hours ago

Author here! Every time I post my own stuff here it seems to sink, so hopefully this actually reaches some of you.

Marmot started as a sidecar project using triggers and polling to replicate changes over NATS. It worked, but I hit a wall pretty fast. Most people really want full ACID compliance and DDL replication across the cluster. I realized the only clean way to do that was to expose SQLite over a standard protocol.
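The trigger-and-poll approach described above can be sketched in plain SQLite. This is a hedged illustration: the table names, trigger, and changelog schema are all hypothetical, not Marmot's actual internals.

```python
import sqlite3

# Sketch of trigger-based change capture: an AFTER INSERT trigger copies
# each change into a changelog table that a sidecar process would poll
# and publish (e.g. over NATS). All names here are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE __changelog (
    seq INTEGER PRIMARY KEY AUTOINCREMENT,
    tbl TEXT, op TEXT, row_id INTEGER
);
CREATE TRIGGER users_ins AFTER INSERT ON users BEGIN
    INSERT INTO __changelog (tbl, op, row_id)
    VALUES ('users', 'INSERT', NEW.id);
END;
""")

conn.execute("INSERT INTO users (name) VALUES ('alice')")
# The sidecar would poll rows past its last-seen seq and ship them out.
pending = conn.execute("SELECT tbl, op, row_id FROM __changelog").fetchall()
print(pending)  # [('users', 'INSERT', 1)]
```

The wall mentioned above follows directly from this design: triggers see DML, but not DDL, and polling gives you no transaction boundaries to replicate atomically.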

While projects like rqlite use REST and others go the page-capture route, I decided to implement the MySQL protocol instead. It just makes the most sense for compatibility.

I’ve reached a point where it works with WordPress, which theoretically covers a huge chunk of the web. There are scripts in the repo to deploy a WP cluster running on top of Marmot. Any DB change replicates across the whole cluster, so you can finally scale WordPress out properly.

On the performance side, I'm seeing about 6K-7K inserts per second on my local machine with a 3-node quorum. It supports Unix sockets, and you can even have your processes read the SQLite DB file directly while routing writes through the MySQL interface. This gives you a lot of flexibility for read-heavy apps.

I know the "AI slop" label gets thrown around a lot lately, but I’ve been running this in production consistently. It’s taken a massive amount of manual hours to get the behavior exactly where it needs to be.

  • hardwaresofton 14 hours ago

    Just want to note that every time I see it I’m impressed with the project, great job so far.

    The fact that you’ve been running this with WP is also a really huge use case/demonstration of trust in your software — IMO this should be prominently on the README.

    These days I personally just ignore projects that insist on MySQL — Postgres has won in my mind and is the better choice. The only way I’d run something like a WP hosting service is with a tool like Marmot.

    One thing you might find interesting is trying Marmot with something like Litestream v2 — Marmot of course has its own replication system, but I like the idea of having a backup system writing to S3. It seems trivial (as you’ve noted, you can still work directly on the SQLite file) but would be a nice blog post/experiment to see “worked out”, so to speak. (And it probably wouldn't sink to the bottom of HN!)
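For reference, the Litestream pairing suggested here is roughly a one-file config; a sketch, assuming hypothetical paths and bucket names:

```yaml
# litestream.yml — continuously replicate the SQLite file a Marmot node
# maintains to an S3 bucket (path and bucket are placeholders).
dbs:
  - path: /var/lib/marmot/app.db
    replicas:
      - url: s3://my-backup-bucket/app
```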

    • maxpert 7 hours ago

      Marmot already supports Debezium, so you can do way more than just basic S3 backups. I've noted your suggestions; they're definitely helpful.

      • hardwaresofton 6 hours ago

        Thanks for the consideration! The reason something like Litestream is interesting to me is that it’s (now[0]) an off-the-shelf way to do PITR backups for SQLite.

        Sure, I could piece together or write something myself to catch the CDC stream or run another replica, but simply running one more process on one of the boxes and having peace of mind that there’s an S3 backup continuously written is quite nice.

        I thought Debezium was mostly for moving around CDC records, not a backup tool per se. I.e., if I were to write Debezium records to object storage with their connectors, it’s my job to get a recent dump and replay?

        [0]: https://fly.io/blog/litestream-v050-is-here/

        • maxpert 2 hours ago

          I see your point. Yes, the Debezium path requires more configuration and orchestration; Litestream makes it very simple. I'd be more than happy to provide this out of the box in Marmot if enough users request it (feel free to open a ticket).

    • raphinou 9 hours ago

      Also a Postgres user. Wondering why the MySQL wire protocol and not pgsql's: did the MySQL choice have advantages compared to pgsql in this case?

      • maxpert 7 hours ago

        You point out a question that I spent months thinking about. I personally love Postgres; heck, I initially even had a version that would talk the Postgres wire protocol but with SQLite-only syntax. But then somebody pointed me to my WordPress demo, and it was obvious that I had to support the MySQL protocol. It's just a protocol; the underlying technology stays independent of what I choose.

      • wgjordan 3 hours ago

        Related, Corrosion has experimental support for the pgsql wire protocol (limited to sqlite-flavored SQL queries): https://superfly.github.io/corrosion/api/pg.html

  • spiffytech 10 hours ago

    Since Marmot pivoted to the MySQL wire protocol, I haven't had a clear picture of its advantages over using normal MySQL with active-active replication. Can you speak to that?

    • maxpert 7 hours ago

      Here are some I can think of off the top of my head:

      - Marmot lets you choose the consistency level (ONE/QUORUM/FULL) vs MySQL's serializable.

      - MySQL requires careful setup of replication, conflict avoidance and monitoring; fencing off split brain and failover is manual in many cases. Marmot even right now is easier to spin up, plus it's leaderless. So you can actually just have your client talk to different nodes (maybe in round-robin fashion) to do load distribution.

      - Marmot's eventual consistency + anti-entropy will recover from split-brain without requiring you to do anything. MySQL active-active requires manual ops.

      - Marmot is designed for read-heavy, on-the-edge scenarios. Once I've completed the read-only replica system you can literally bring lambda nodes up or down with Marmot running as a sidecar. With replicas being able to select the DBs they want (WIP) you should be able to bring up region/org/scenario-specific servers with their lightweight copies, and writes will be proxied to the main server. Applications are virtually unlimited. Since you can directly read the SQLite database, think many small vector databases distributed to the edge, or regional configurations, or catalogs.
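The leaderless round-robin pattern described above can be sketched as follows. Node addresses are hypothetical, and the actual MySQL-protocol connection step is only a comment:

```python
import itertools

# With no leader, any node accepts writes, so a client can simply
# rotate through the cluster for load distribution.
NODES = ["marmot-1:3306", "marmot-2:3306", "marmot-3:3306"]
_ring = itertools.cycle(NODES)

def next_node():
    """Pick the next node in round-robin order for the next query."""
    return next(_ring)

# Each request goes to the next node in the ring, e.g.:
#   conn = pymysql.connect(host=..., port=...)  # hypothetical client call
picked = [next_node() for _ in range(4)]
print(picked)  # wraps back to the first node on the fourth request
```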

  • dangoodmanUT 2 hours ago

    This seems CDC based, does that mean it handles `now()` and other non-deterministic functions properly?

geenat 17 hours ago

Oh man, tons of updates including DDL replication! V2 looks very impressive.

Now I'm curious how sharding/routing is handled- which seems like the final piece of the puzzle for scaling writes.

  • maxpert 16 hours ago

    Right now I've started off with full replication of every database in the cluster. On my roadmap I have:

    - Ability to launch a replica on selected databases from main cluster.

    - Ability for replica to only download and replicate changes of select databases (right now all or nothing).

    - Ability for replica to proxy DML & DDL into write cluster transparently.

    - Custom set of commands for replicas to download and attach/detach databases on the fly.

    This will instantly solve 80% of the problems for most consumption today. I will probably go after on-demand page streaming and other stuff once the core features are done.

    Not to mention this solves the majority of use cases for lambdas. One can have a persistent main cluster, with lambdas coming up or going down on demand transparently.

revengeduck 5 hours ago

This looks cool! Curious, how does this compare to https://github.com/superfly/corrosion?

  • maxpert 4 hours ago

    Yes, I explored that path too with CRDTs.

    - DDL gets really tricky in these cases; that's why you see Corrosion has this weird file-based system.

    - cr-sqlite isn't maintained anymore, but I did some benchmarks, and if I remember correctly it was 4x-8x slower depending on the type of your data & load. Storage bloats by 2x-3x, and tombstones accumulate pretty fast as well.

    I mean each mutation on every column looks something like:

    `table, pk, cid, val, col_version, db_version, site_id, cl, seq`

    Overall, I dropped the idea after spending a month or two on it.
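That per-column bookkeeping can be illustrated with a small sketch. The field names follow the schema quoted above; everything else (function, table, values) is hypothetical, not cr-sqlite's actual implementation:

```python
# In a cr-sqlite-style CRDT, each changed column of a row becomes its
# own clock-stamped entry, which is where the write amplification and
# storage bloat come from.
CRDT_FIELDS = ("table", "pk", "cid", "val",
               "col_version", "db_version", "site_id", "cl", "seq")

def crdt_entries_for_insert(table, pk, row, db_version, site_id):
    """One CRDT entry per column for a single inserted row (illustrative)."""
    entries = []
    for seq, (cid, val) in enumerate(row.items()):
        entries.append(dict(zip(CRDT_FIELDS,
            (table, pk, cid, val, 1, db_version, site_id, 1, seq))))
    return entries

# Inserting one 4-column row yields 4 bookkeeping entries of 9 fields
# each, on top of the actual row itself.
entries = crdt_entries_for_insert(
    "users", 1, {"name": "alice", "email": "a@x", "age": 30, "bio": ""},
    db_version=7, site_id="node-a")
print(len(entries))  # 4
```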

    • wgjordan 3 hours ago

      Very helpful hearing about your own similar experiments with CRDTs. As a followup I'd be interested in more direct comparison between Marmot and Corrosion in terms of features/performance, since they both serve a similar use case and Corrosion seems to have worked through some of the CRDT issues you mentioned.

      • maxpert 2 hours ago

        Ok, it's a very long discussion, but I will try to keep it brief here (more than happy to chat on the Marmot Discord if you wanna go deeper). Honestly, I've not done a head-to-head comparison, but if you are asking for a guesstimated comparison:

        - Marmot can give you easier DDL and better replication guarantees.

        - You can control the guarantees around transactions. So if you're doing a quorum-based transaction, you are guaranteed that a quorum has written that set of rows before returning success. This takes care of those conflicting ID-based rows getting overwritten that people would usually ignore. And you should be able to do transactions with proper BEGIN and COMMIT statements.

        - Disk write amplification is way lower than what you would see with a CRDT. This should usually mean that on commodity hardware you should see better write throughput. As I mentioned, on my local benchmarks I'm getting close to 6K insert ops. This was with a cluster of three nodes, so you can effectively multiply it by three, which is like 18K operations per second. I have not set up a full cluster to actually benchmark these; that requires investing more money and time, and I would honestly be frugal here since I am spending all my $$$ on my AI bill.

        - Reads, as noted, can go directly against the SQLite database, so you are only bottlenecked by your disk speed. There are no fancy merges happening at the CRDT level in the middle; it's written once and you're ready to read.

        - The hardest part, in my opinion, was auto-increment IDs. It is a sad reality, but it turns out 99% of small to mid-size companies are using auto-increment IDs. In all CRDTs, in case of conflict, LWW (based on one ID or another) happens, and I can guarantee you that at some point, without coordination, if nodes are just emitting those regularly incrementing IDs, THEY WILL OVERWRITE each other. That was the exact problem in the first version of Marmot.
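A toy simulation of that failure mode: two uncoordinated nodes both assign local auto-increment IDs, and a last-writer-wins merge silently drops one of the rows. All names here are illustrative, not Marmot or cr-sqlite internals.

```python
def lww_merge(a, b):
    """Merge two {id: (timestamp, value)} maps; the higher timestamp wins."""
    merged = dict(a)
    for rid, (ts, val) in b.items():
        if rid not in merged or ts > merged[rid][0]:
            merged[rid] = (ts, val)
    return merged

# Both nodes start at id=1 and insert different rows concurrently.
node_a = {1: (100, "alice's order")}
node_b = {1: (105, "bob's order")}   # same auto-increment id, later clock

merged = lww_merge(node_a, node_b)
print(merged)  # alice's row is silently lost to bob's later write
```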

        - SQLite is a single-writer database. cr-sqlite writes these delta CRDT rows into a table as well, so under high write load you are putting too much pressure on the WAL. How do I know? I did this in Marmot v0.x, and even v2 started that way: I initially wrote the logs into a SQLite database too. Turns out, at high throughput, even dumping change logs that I'm going to discard anyway is a bad idea. I eventually moved to PebbleDB, with a mimalloc-based unmanaged memory allocator for serialization/deserialization (yes, even that caused slowdowns due to GC). It doesn't stop there: each CRDT entry is one row per changed column of the table, plus an index for faster lookups, so that bogs it down further across many, many rows. For context, I have tested Marmot on gigs of data, not megs.

        I do have a couple of ideas on how I can really exploit the CRDT stuff, but I don't think I need it right now. I think most of it can be taken care of if I can build an MVCC layer on top.

joelthelion 11 hours ago

Naïve question: why would you want to use this over, say, Postgres?

  • jockm 3 hours ago

    You might not; it depends on your use case. However, SQLite is very small and lightweight, and amazing for read-heavy workloads. Using SQLite lets you bypass a lot of setup and configuration; then adding something like Marmot lets you make it distributed after the fact.
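To illustrate the zero-setup point, a minimal sketch using Python's built-in sqlite3 module; no server, no users, no config file:

```python
import sqlite3

# A full SQL database in one call; use a file path instead of
# ":memory:" for persistence.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO posts (title) VALUES ('hello world')")
count = conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
print(count)  # 1
```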

savolai 12 hours ago

So would it make sense to use this on a NAS to keep WordPress sites’ content backed up?

  • maxpert 7 hours ago

    Yes, you can. I know WordPress has been trying to make SQLite a first-class citizen too, but at this point I am tired of waiting for that pipe dream; this will let you do it today. Plus, Marmot supports Debezium, so you can have a continuous stream to your NAS.

ChocolateGod 11 hours ago

I wonder if you could tie this with Litestream to get streamed backups.

wg0 13 hours ago

Is there something similar that exists for Postgres?

schulm 10 hours ago

Is this suitable for LocalFirst apps?

iliesaya 15 hours ago

Is this an alternative to SymmetricDS for replicating a database across multiple nodes without master/slave?

eduction 6 hours ago

Let’s not forget that keeping wildlife, uh, an amphibious rodent, for uh, domestic, within the city-- that ain't legal either.

PunchyHamster 11 hours ago

weird choice considering SQLite is more similar to PostgreSQL

  • itslennysfault 2 hours ago

    OP mentioned using this to scale WordPress instances in one of the comments. So I assume that had something to do with the choice. It probably wouldn't be TOO hard to support multiple dialects of SQL in the future though.

  • phplovesong 11 hours ago

    AI probably generated the mysql thing, and OP just went with it.

oblio 16 hours ago

Funny, I was just reading about this:

https://github.com/synopse/mORMot2

FreePascal ORM, so in an adjacent space (ORM, can work with both SQLite and MySQL).

I guess DB devs really love marmots? :-))

bawolff 12 hours ago

> Marmot uses gossip protocol (no leader)

So umm, does that mean it sacrifices consistency?

naveed125 16 hours ago

That looks pretty cool

phplovesong 11 hours ago

Looks like yet another AI generated project.
