Ask HN: Has anyone scaled Postgres to over a billion rows?

2 points by anindha a year ago · 5 comments


What server did you use? How did you do it? I am trying to avoid using RDS or something similar since the costs will be very high.

phamilton a year ago

Yes. Running multiple multi-billion row Postgres DBs on AWS Aurora.

The number of rows isn't that consequential, honestly. B-tree indexes stay shallow even over billions of rows, so lookups remain fast. It's the volume and complexity of traffic that matters.
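As a rough back-of-the-envelope (a sketch, assuming ~300 index entries per 8 kB page, which varies with key size):

    import math

    # How deep is a B-tree index over N rows?
    # fanout is a guess: ~300 entries per 8 kB Postgres page for a small key.
    fanout = 300
    for rows in (1e6, 1e9, 1e11):
        depth = math.ceil(math.log(rows, fanout))
        print(f"{rows:>15,.0f} rows -> ~{depth} levels")
    # Even 100 billion rows is only ~5 levels deep, so a point lookup
    # touches just a handful of pages.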

On AWS Aurora we can run 10 readers to handle our peak midday traffic (10M+ daily active users).
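(Not our actual code, but to illustrate the pattern: Aurora exposes a single reader endpoint that load-balances across the replicas, so reads can be split from writes at the connection level. The hostnames and table name below are made up.)

    import psycopg2

    # Hypothetical DSNs: Aurora gives one writer endpoint and one
    # reader endpoint (the 'cluster-ro' one) that spreads reads
    # across all replicas.
    WRITER_DSN = "host=mycluster.cluster-abc123.us-east-1.rds.amazonaws.com dbname=app"
    READER_DSN = "host=mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com dbname=app"

    def get_conn(readonly=False):
        # Route reads to the replicas, writes to the primary.
        return psycopg2.connect(READER_DSN if readonly else WRITER_DSN)

    with get_conn(readonly=True) as conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM events")  # 'events' is a placeholder
        print(cur.fetchone()[0])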

I wouldn't shy away from doing billion-row DBs on-prem if that's what the finances dictate. Postgres can handle it. But the new wave of scalable Postgres (AlloyDB, Aurora, Neon) will make it easy.

faebi a year ago

Yes, openstreetmap.org has over 9 billion rows in a table and documents how they do it. See https://wiki.openstreetmap.org/wiki/Stats

t90fan a year ago

We had a ~20 TB Postgres database (so I would guess many tens or even hundreds of billions of rows) at a place I worked around 2015, hosted on-prem. I don't recall it causing them too much hassle. The main thing I remember was that the server had a very large amount of RAM (512 GB, which was loads and loads back then) and lots of cores for the time, something like 16, but it was otherwise a fairly standard ~£50k ballpark piece of HP kit.
