Pg_shard – PostgreSQL extension to scale out real-time reads and writes (github.com)

216 points by besquared 11 years ago · 27 comments

bjt 11 years ago

At work we're in the midst of rolling out a sharded Postgres platform based on http://www.craigkerstiens.com/2012/11/30/sharding-your-datab..., with the sharding implemented at the application level. The biggest piece of complexity in that post is around designing the sharding in such a way that you can gracefully add more shards later.

Having read the pg_shard readme, it's not clear to me how it addresses that issue. I'd need to have a really clear idea how to handle scaling my cluster before committing to a sharding solution.

  • ozgune 11 years ago

    We'll update our FAQ with a detailed answer to this question.

    As a summary, the user specifies the shard and replication count as they are sharding their table. For example, if you have 4 nodes, you may pick 256 as the initial shard count. That way, you'll have ample room to grow as you add new nodes to your cluster.

    When you pick 256 shards over 4 worker nodes, pg_shard will create them in a round-robin fashion. Shard #1 will go to nodes A and B, shard #2 to nodes B and C, and so forth. This has the advantage that when one of the worker nodes fails, the remaining 3 nodes take on the additional work evenly. Also, when you add a new node to the cluster, you can gradually rebalance by moving some of the shards to the new node.
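
    For reference, here is roughly what that looks like with pg_shard's shard-creation functions (a minimal sketch; the table and column names are just illustrative):

      CREATE TABLE customer_reviews (
          customer_id bigint NOT NULL,
          review_text text
      );

      -- hash-distribute the table on customer_id
      SELECT master_create_distributed_table('customer_reviews', 'customer_id');

      -- 256 shards, each replicated to 2 of the 4 worker nodes
      SELECT master_create_worker_shards('customer_reviews', 256, 2);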

    • scanr 11 years ago

      We have used sharding for managing time series data, e.g. one shard per day. Is there a way this could work, i.e. where the number of shards continually grows?

      • jasonmp85 11 years ago

        That use case isn't yet handled by pg_shard: the plugin currently supports only hash partitioning and what you've described is range partitioning. This is certainly on our immediate feature list, as range and hash partitioning cover a variety of use cases.

        However, CitusDB does support range partitioning and has a \stage command that will create new shards from incoming data. If you periodically load data that corresponds to a particular time range (hour, day, week), CitusDB can easily handle creation of additional shards during each load.
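
        For a rough idea, loading a day of time series data into a range-partitioned CitusDB table looks something like the following (a sketch only, not exact syntax, and the table/file names are illustrative):

          CREATE TABLE events (
              event_time timestamptz NOT NULL,
              payload    text
          );

          -- from psql: \stage creates new shards covering the incoming range
          \stage events FROM '2014-12-01.csv' WITH CSV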

  • Spoutingshite 11 years ago

    In our experience, sharding at the application level is to be avoided.

    We implemented application-level sharding a few years ago when we had 1 TB of data. We now have over 20 TB of data, and the application-level business logic is killing us.

    Our pain is not DB speed, but application complexity. All of our applications (backup, web, maintenance, archiving, etc.) need the business logic to traverse the shard set.

    We are planning to move to DB level sharding in January (as part of a larger application and infrastructure refresh).

rdl 11 years ago

It's great to see this released as open source (LGPL) -- looks like a really useful extension.

I've always been a fan of using PostgreSQL wherever possible, and this extends "where possible". As a YC S11 batchmate especially, I'm really proud of all the great contributions Citus Data has made, and what a useful tool CitusDB is (a bunch of other YC companies use it).

Cieplak 11 years ago

Huge fan of Citus Data. Their column store for postgres is really useful for building data warehouses:

https://github.com/citusdata/cstore_fdw
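
For anyone who hasn't tried it, basic usage looks roughly like this (a minimal sketch based on my reading of the README; the table definition is just an example, and cstore_fdw also needs to be added to shared_preload_libraries):

  CREATE EXTENSION cstore_fdw;
  CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

  -- columnar, compressed storage suited to append-mostly warehouse data
  CREATE FOREIGN TABLE warehouse_events (
      event_time  timestamptz,
      customer_id bigint,
      payload     text
  )
  SERVER cstore_server
  OPTIONS (compression 'pglz');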

georgyo 11 years ago

I wonder how it compares to the slightly more established Postgres-XL.

There is definitely a different approach being taken: Postgres-XL has a supervisor/load balancer, while with pg_shard it seems every node is capable of performing all actions.

Excited to see it evolve.

deedubaya 11 years ago

Having a master node which delegates all the queries sounds like a single point of failure. Could this be avoided by having a failover master?

  • jasonmp85 11 years ago

    Hey, Jason from Citus Data here (one of pg_shard's authors).

    Yes: if you have high availability needs, PostgreSQL's streaming replication feature can mitigate this failure risk by providing a hot standby for your master; otherwise, regular backups of the master can suffice (pg_shard's metadata is stored in regular PostgreSQL tables).

    See our documentation page for more answers to questions like this: http://citusdata.com/docs/pg-shard
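
    As a minimal sketch of that setup on PostgreSQL 9.3/9.4 (these are the stock streaming-replication parameters, nothing pg_shard-specific; hostnames are placeholders):

      # master: postgresql.conf
      wal_level = hot_standby
      max_wal_senders = 3

      # standby: postgresql.conf
      hot_standby = on

      # standby: recovery.conf
      standby_mode = 'on'
      primary_conninfo = 'host=master.example.com user=replicator'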

    • gunnark01 11 years ago

      We have something similar in house. It is basically a proxy that runs on our app server and that our app connects to. It parses incoming queries and routes them to the correct schema/server based on the WHERE clause (it also supports INSERT). We would like to open source it, but right now it is specific to our needs (e.g. a hard-coded hash function).
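
      The routing itself is just hashing the key pulled out of the WHERE clause and mapping it to a shard, roughly like this (the hash function and shard count here are illustrative, not our exact ones):

        -- map customer_id 12345 onto one of 256 shards
        SELECT abs(hashtext('12345')) % 256 AS shard_id;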

napperjabber 11 years ago

I don't see any documentation around schema migrations. How is that normally solved at the petabyte scale?

rachbelaid 11 years ago

If you want more details about pg_shard, have a look at this blog post:

http://www.databasesoup.com/2014/12/whats-this-pgshard-thing...

It explains in a bit more detail what it does and doesn't do.

sgrove 11 years ago

Comparing this a bit with Postgres-XL, how mindful of data locality do I have to be when querying? Looks like this drops right in for existing apps, but I'd be concerned about long-term performance if I didn't tailor my app code.

  • ozgune 11 years ago

    You don't need to worry much about data locality with pg_shard. It's designed to push the computation to the data, and as such, abstracts away most data locality concerns. The only exceptions are complex analytic workloads such as distributed joins, and CitusDB offers an easy upgrade path to remove those concerns.

    Another difference with Postgres-XL is that pg_shard is an extension to PostgreSQL, rather than a separate product with different components. As such, you can just compile and load it into standard Postgres, and you're ready to go.
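
    A minimal sketch of that setup (hostnames and ports are placeholders):

      # postgresql.conf on the master node
      shared_preload_libraries = 'pg_shard'

      # pg_worker_list.conf in the data directory: one "hostname port" line per worker
      worker-101 5432
      worker-102 5432

      -- then, from psql on the master:
      CREATE EXTENSION pg_shard;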

devmach 11 years ago

Nice work. But I wonder how they handle table alterations; I couldn't see them mentioned in the docs. Is it possible at all? If it is, since pg_shard doesn't support transactions, what happens if an alteration fails?

  • jasonmp85 11 years ago

    Table alterations are not supported: if we're missing this in our docs, I'll be sure to make that more explicit. They're certainly on the list of things we'd like to support.

    At the moment, our customers who do need table alterations accomplish them by using a script that propagates such changes to all worker nodes.

Xorlev 11 years ago

Wondering if any CitusData folks can speak to whether this is being used for production workloads anywhere yet. If so, could you speak to the size/throughput of those deployments?

EGreg 11 years ago

We have a sharding mechanism in our PHP framework, and in Node as well, which can actually split shards that become too "hot" as determined by you. The whole system stays online during splitting, and only a small part of the system goes offline for 1 second before the final switchover. There's no need to pre-shard in the beginning; it splits according to usage later, into an unlimited number of shards.

cetra3 11 years ago

Awesome work, I should give this a spin.

One of the issues I can already see is supporting existing applications, especially ones with transaction-heavy workflows. I have the same issue with Postgres-XC: it supports transactions but not savepoints.

But this looks like a completely different use case for Postgres, as a sort of pseudo-NoSQL type db.

karavelov 11 years ago

This is more evidence of how extensible and flexible PostgreSQL is. Great work!

bvanvugt 11 years ago

Very cool to see this open sourced.

Any advice on migration process? Transferring a high write-throughput postgres instance to a multi-pg deployment with pg_shard feels pretty daunting.

arthursilva 11 years ago

Gotta love how they open source bits of their product.

  • mutagen 11 years ago

    And at a glance there's a clear upgrade path to their commercial product, hopefully adding to a sustainable business model.
