Optimizing Top K in Postgres

paradedb.com

150 points by philippemnoel 2 months ago · 25 comments

bob1029 2 months ago

Lucene really does feel like magic sometimes. It was designed expressly to solve the top K problem at hyper scale. It's incredibly mature technology. You can go from zero to a billion documents without thinking too much about anything other than the amount of mass storage you have available.

Every time I've used Lucene I have combined it with a SQL provider. It's not necessarily about one or the other. The FTS facilities within the various SQL providers are convenient, but not as capable by comparison. I don't think mixing these into the same thing makes sense. They are two very different animals that are better joined by way of the document ids.

  • philippemnoelOP 2 months ago

    Under the hood, ParadeDB is built by integrating Tantivy (a Lucene-inspired Rust search library) inside Postgres. Which is to say, I agree with you -- we're not trying to claim that Postgres itself is an alternative to Lucene, but rather that something like Lucene can be integrated inside Postgres, so you get the power of both in a single system (or in a cluster of Postgres instances).

    • monadicmonad 2 months ago

      why do I want them in a single system?

      • inbx0 2 months ago

          - saves infra costs
          - saves infra headaches
          - devs only need to be experts in one system (or, well, one and a half -- there's probably something to learn about ParadeDB too, but less than learning Lucene)
          - no need to worry about keeping data up to date in the separate search system
          - all data is available when you want to run new queries that you hadn't thought of when implementing the data transfer/indexing
      • ddorian43 2 months ago

        It's easier to have one system, until a certain scale.

jmgimeno 2 months ago

Maybe I'm wrong, but for this query:

SELECT * FROM benchmark_logs WHERE severity < 3 ORDER BY timestamp DESC LIMIT 10;

this index

CREATE INDEX ON benchmark_logs (severity, timestamp);

cannot be used as proposed: "Postgres can jump directly to the portion of the tree matching severity < 3 and then walk the timestamps in descending order to get the top K rows."

With this index, Postgres can walk to the part of the tree where severity < 3, but the timestamps are sorted only within each severity value, not globally across the matching range.
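A sketch of two indexes that do serve this exact query (column names taken from the article's example; whether the planner actually prefers either depends on data distribution and write volume):

```sql
-- With (severity, timestamp), timestamps are sorted only within each
-- severity value, so a global ORDER BY timestamp still needs a sort
-- or a merge across severities.

-- Alternative 1: index the sort column alone. Postgres can walk it
-- newest-first and discard rows failing severity < 3 as it goes.
CREATE INDEX ON benchmark_logs (timestamp DESC);

-- Alternative 2: a partial index that bakes the filter in, storing
-- only qualifying rows, already in timestamp order.
CREATE INDEX ON benchmark_logs (timestamp DESC)
    WHERE severity < 3;
```

The partial index is only usable when the query's predicate matches (or is implied by) the index predicate, and alternative 1 degrades when the filter is very selective, since many index entries get read and thrown away.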

davidelettieri 2 months ago

The "But Wait, We Need Filters Too" paragraph mentions a "US" filter that is only introduced later on.

h1fra 2 months ago

Postgres is really good at a lot of things, but it's very unfortunate that it's really bad at simple analytics. I wish there were a plugin instead of having to run N databases.

ajstars 2 months ago

Curious whether you benchmarked against a partial index on the sort column. For fixed-category top-K queries the planner sometimes picks it over a full index scan, though I've seen it regress on high-write tables due to index bloat. Did write volume factor into your test setup?

Vadim_samokhin 2 months ago

Just in case: there is a btree_gin extension which can be used in queries combining a gin-indexable column and a btree-indexable column. It doesn't solve the top-K ordering problem, though.
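For reference, a minimal sketch of what btree_gin enables -- a scalar column riding along in a GIN index (the message column here is hypothetical, not from the article):

```sql
CREATE EXTENSION IF NOT EXISTS btree_gin;

-- One multicolumn GIN index covering both a full-text expression and a
-- plain integer; btree_gin supplies GIN operator classes for scalar
-- types like int so they can appear in a GIN index at all.
CREATE INDEX ON benchmark_logs
    USING gin (to_tsvector('english', message), severity);
```

As the parent notes, GIN returns matching rows in no particular order, so an ORDER BY timestamp LIMIT K on top of it still requires fetching all matches and sorting.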

tacone 2 months ago

The issue here is the row-based format. You simply can't filter efficiently on arbitrary columns with that. Either use an external warehouse or a columnar plug-in like Timescale.
