Show HN: ColBERT Build from Sentence Transformers

github.com

66 points by raphaelty 2 years ago · 18 comments

ramoz 2 years ago

Anecdote: neural-cherche seems useful, as I have analysts creating positive & negative feedback data (basically thumbs-up/down signals) that we will use to fine-tune retrieval models.

Assuming not much effort is required to make this work for similar models (e.g., BGE)?

  • raphaeltyOP 2 years ago

    Nice, it might already be compatible with BGE. I'll try it and add it to the documentation soon.

tinyhouse 2 years ago

Looks cool. A couple of questions:

1. Does it support fine-tuning with different losses? For example, one where you don't need to provide negatives and the other examples in the batch are used as negatives.

2. Can you share inference speed info? I understand ColBERT should be slow, since it creates many embeddings per passage.

  • raphaeltyOP 2 years ago

    Hi, there is a single loss right now, but I plan to add some Sentence Transformers losses. ColBERT is slow as a retriever but quite efficient as a ranker on GPU (way faster than a cross-encoder). I plan to release pre-trained checkpoints on HuggingFace with benchmarks on BEIR and inference speed info.
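
The in-batch negatives loss asked about above (the idea behind Sentence Transformers' MultipleNegativesRankingLoss) can be sketched in plain NumPy. This is an illustrative reimplementation of the technique, not neural-cherche's actual loss:

```python
import numpy as np

def in_batch_negatives_loss(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """Softmax cross-entropy over the in-batch similarity matrix.

    Row i's positive is document i; every other document in the
    batch implicitly serves as a negative, so no explicit negatives
    need to be provided.
    """
    # L2-normalize so the dot product is cosine similarity.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    scores = q @ d.T  # (batch, batch) similarity matrix
    # Log-softmax per row; the target class is the diagonal entry.
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

The loss is lowest when each query is most similar to its own paired document and dissimilar to the rest of the batch.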

    • aashu_dwivedi 2 years ago

      Do you mean it's faster when the embeddings are pre-computed, or also when they are computed on the fly? Also, what's the recommended way to store the ColBERT embeddings? Because of their 2-D nature, it's not practical to store them in a vector database.

      • raphaeltyOP 2 years ago

        Yes, ColBERT is fast because you can pre-compute most embeddings; it's important to compute document embeddings only once. neural-cherche does not compute embeddings on the fly, and the retrieve method asks for query and document embeddings rather than raw texts.

        Document and query embeddings can be obtained using the .encode_documents and .encode_queries methods.
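
Once embeddings are pre-computed, ColBERT's late-interaction score is a MaxSim sum. A minimal NumPy sketch of that scoring step, assuming the token embeddings are already L2-normalized 2-D arrays:

```python
import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """ColBERT late interaction: for each query token vector, take its
    maximum similarity over all document token vectors, then sum
    those maxima over the query tokens."""
    sims = query_emb @ doc_emb.T  # (query_tokens, doc_tokens)
    return float(sims.max(axis=1).sum())
```

Because only this cheap matrix product runs per query-document pair, ranking against pre-computed document embeddings is fast even though each passage has many vectors.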

        I save most of my embeddings (a Python dictionary with document ids as keys and embeddings as values) using joblib in a bucket in the cloud. I don't really know if it's good practice, but it scales fine to a few million documents for offline (non-real-time) applications.
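
That pattern, a dictionary of document ids mapping to pre-computed embedding arrays, serialized to a file that can live in a bucket, can be sketched with the standard library. The comment uses joblib; pickle behaves the same way for this structure, and the document ids and shapes below are hypothetical:

```python
import os
import pickle
import tempfile

import numpy as np

# Hypothetical pre-computed ColBERT embeddings: one 2-D array
# (tokens x dim) per document, keyed by document id.
embeddings = {
    "doc-1": np.ones((32, 128), dtype=np.float32),
    "doc-2": np.zeros((48, 128), dtype=np.float32),
}

path = os.path.join(tempfile.mkdtemp(), "embeddings.pkl")
with open(path, "wb") as f:
    pickle.dump(embeddings, f)

# Later (e.g. after downloading from the bucket): restore and reuse,
# so document embeddings are computed only once.
with open(path, "rb") as f:
    restored = pickle.load(f)
```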

kamranjon 2 years ago

What sort of high level user facing feature could you build with this?

  • raphaeltyOP 2 years ago

    You could recommend content based on a user query, tag content produced by the user, or use ColBERT as part of a chatbot to show evidence for the user's questions.

espadrine 2 years ago

I like the inclusion of both positive and negative examples!

Do you have advice for how to measure the quality of the finetuning beyond seeing the loss drop?

  • raphaeltyOP 2 years ago

    In the documentation there is an evaluation module with detailed information. The idea is to gather relevant query-document pairs that are not part of the training set, then measure, with various metrics, how accurately the model retrieves the right documents.
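
A held-out evaluation of that kind reduces to standard ranking metrics. A minimal sketch of recall@k over hypothetical query and document ids (not neural-cherche's actual evaluation module):

```python
def recall_at_k(rankings, relevant, k=10):
    """Mean fraction of each query's relevant documents found in its
    top-k retrieved list.

    rankings: dict mapping query id -> ordered list of retrieved doc ids.
    relevant: dict mapping query id -> set of gold doc ids, held out
              from the training set.
    """
    scores = [
        len(set(ranked[:k]) & relevant[query]) / len(relevant[query])
        for query, ranked in rankings.items()
    ]
    return sum(scores) / len(scores)
```

Comparing this metric before and after fine-tuning gives a more meaningful signal than watching the training loss drop.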

barefeg 2 years ago

Do you need to have the same number of positives and negatives? Is there any meaning to pairing a positive and a negative in the triplet?

  • raphaeltyOP 2 years ago

    It's because of the model's loss: I ask the model to produce a higher similarity between the query and the positive document than between the query and the negative document. I'll add more losses soon so there are more choices.
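
The constraint described, that the positive's similarity should exceed the negative's, is the shape of a standard triplet ranking loss. A minimal sketch of that idea, not necessarily neural-cherche's exact formulation (the margin value is an assumption):

```python
def triplet_ranking_loss(sim_positive: float, sim_negative: float,
                         margin: float = 0.5) -> float:
    """Hinge loss: zero when the positive document beats the negative
    by at least `margin` in similarity, otherwise penalize the
    shortfall linearly."""
    return max(0.0, margin - (sim_positive - sim_negative))
```

This is why each triplet pairs one positive with one negative: the loss is defined on the gap between the two similarities for the same query.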

vorticalbox 2 years ago

Is a negative document one that doesn't match the query?
