Show HN: ColBERT Build from Sentence Transformers
Anecdote: neural-cherche seems useful, as I have analysts creating positive & negative feedback data (basically thumbs-up/down signals) that we will use to fine-tune retrieval models.
Assuming not much effort is required to make this work for similar models? (e.g. BGE)
Nice, it might already be compatible with BGE. I'll try it and add it to the documentation soon.
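For reference, the triplet fine-tuning loop looks roughly like this. The module and function names (models.ColBERT, utils.iter, train.train_colbert) and parameter names are written from memory of the README, and the base checkpoint is only a placeholder, so check the documentation for the exact API:

    import torch
    from neural_cherche import models, train, utils

    model = models.ColBERT(
        model_name_or_path="sentence-transformers/all-mpnet-base-v2",  # placeholder base model
        device="cuda" if torch.cuda.is_available() else "cpu",
    )
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-6)

    # Each training sample is an (anchor, positive, negative) triplet,
    # e.g. built from thumbs-up / thumbs-down feedback on retrieved documents.
    X = [
        ("query", "thumbs-up document", "thumbs-down document"),
        # ...
    ]

    for anchors, positives, negatives in utils.iter(X, epochs=1, batch_size=32, shuffle=True):
        loss = train.train_colbert(
            model=model,
            optimizer=optimizer,
            anchors=anchors,
            positives=positives,
            negatives=negatives,
        )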
Looks cool. A couple of questions:

1. Does it support fine-tuning with different losses? For example, where you don't need to provide negatives and it uses the other examples in the batch as negatives.

2. Can you share inference speed info? I know that ColBERT should be slow since it creates many embeddings per passage.
Hi, there is a single loss right now, but I plan to add some Sentence Transformers losses. ColBERT is slow as a retriever, but is quite efficient as a ranker on GPU (way faster than a cross-encoder). I plan to release pre-trained checkpoints on HuggingFace with benchmarks using BEIR and inference speed info.
Do you mean it's faster when the embeddings are pre-computed, or is it faster when the embeddings are computed on the fly as well? Also, what's the recommended way to store the ColBERT embeddings? Because of the 2D nature of the embeddings, it's not practical to store them in a vector database.
Yes, ColBERT is fast because you can pre-compute most embeddings. It's important to compute document embeddings only once. neural-cherche does not compute embeddings on the fly, and the retrieve method asks for query and document embeddings rather than query and document texts.
Document and query embeddings can be obtained using the .encode_documents and .encode_queries methods.
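Roughly, using ColBERT as a ranker with pre-computed document embeddings looks like this. It is a sketch built around the method names above; the rank module path, call signature, and the placeholder checkpoint are assumptions, so the real API may differ slightly:

    from neural_cherche import models, rank

    model = models.ColBERT(
        model_name_or_path="sentence-transformers/all-mpnet-base-v2",  # placeholder base checkpoint
        device="cuda",
    )
    ranker = rank.ColBERT(key="id", on=["title", "text"], model=model)

    documents = [
        {"id": "doc-1", "title": "ColBERT", "text": "Late interaction retrieval model."},
        {"id": "doc-2", "title": "BM25", "text": "Lexical ranking function."},
    ]
    queries = ["what is late interaction?"]

    # Encode documents once (offline) and reuse the embeddings for every query.
    documents_embeddings = ranker.encode_documents(documents=documents)
    queries_embeddings = ranker.encode_queries(queries=queries)

    # Scoring takes the pre-computed embeddings, not raw text, so nothing is re-encoded here.
    scores = ranker(
        documents=[documents],  # candidate documents to re-rank for each query
        queries_embeddings=queries_embeddings,
        documents_embeddings=documents_embeddings,
        k=10,
    )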
I save most of my embeddings (a Python dictionary with document ids as keys and embeddings as values) using joblib in a bucket in the cloud. I don't really know if it's a good practice, but it scales fine to a few million documents for offline (non real-time) applications.
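Concretely, that storage pattern is just something like the snippet below (illustrative only; the upload/download to the bucket is whatever your cloud SDK provides):

    import joblib
    import numpy as np

    # {document_id: 2D array of token embeddings}, e.g. as returned by encode_documents
    documents_embeddings = {
        "doc-1": np.random.rand(128, 128).astype("float32"),
        "doc-2": np.random.rand(96, 128).astype("float32"),
    }

    joblib.dump(documents_embeddings, "documents_embeddings.joblib", compress=3)
    # upload documents_embeddings.joblib to the bucket, then later:
    documents_embeddings = joblib.load("documents_embeddings.joblib")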
What sort of high-level, user-facing feature could you build with this?
You could recommend content based on a user query, tag content produced by the user, or use ColBERT as part of a chatbot to show supporting evidence for the user's questions.
I like the inclusion of both positive and negative examples!
Do you have advice for how to measure the quality of the fine-tuning beyond seeing the loss drop?
In the documentation there is an evaluation module with detailed information. The idea is to gather relevant pairs of queries and documents that are not part of the training set, and then to measure, using various metrics, how well your model retrieves the right documents.
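As a plain-Python illustration of the idea (this is not the library's evaluation module, just the metric computation): keep some held-out (query, relevant document id) pairs, retrieve with the fine-tuned model, and score the rankings with something like recall@k or MRR.

    def recall_at_k(ranked_ids, relevant_ids, k=10):
        # Fraction of the relevant documents found in the top-k results.
        return len(set(ranked_ids[:k]) & set(relevant_ids)) / len(relevant_ids)

    def mrr(ranked_ids, relevant_ids):
        # Reciprocal rank of the first relevant document, 0 if none was retrieved.
        for position, doc_id in enumerate(ranked_ids, start=1):
            if doc_id in relevant_ids:
                return 1.0 / position
        return 0.0

    # Held-out pairs, not seen during training.
    qrels = {"q1": {"doc-3"}, "q2": {"doc-7", "doc-9"}}
    # Ranked ids per query come from your retriever / ranker.
    ranked_ids_per_query = {"q1": ["doc-3", "doc-1"], "q2": ["doc-2", "doc-7"]}

    scores = [recall_at_k(ranked_ids_per_query[q], qrels[q], k=10) for q in qrels]
    print(sum(scores) / len(scores))  # mean recall@10 over the held-out queries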
Do you need to have the same number of positives and negatives? Is there any meaning to pairing a positive and a negative in the triplet?
It's because of the model's loss: the model is asked to produce a higher similarity between the query and the positive document than between the query and the negative document. I'll add more losses soon so there are more choices.
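In pseudocode terms, that objective is essentially a margin-based triplet loss over similarity scores, something like the following (illustrative only, not necessarily the exact loss used here):

    import torch

    def triplet_loss(score_positive, score_negative, margin=1.0):
        # Push the (query, positive) score above the (query, negative) score by at least `margin`.
        return torch.relu(margin - (score_positive - score_negative)).mean()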
Is the loss the usual LambdaRank?
Is a negative document one that doesn't match the query?
Yes, exactly.
Does that help much in terms of training?
It's a well-established technique for learning a similarity function: https://en.m.wikipedia.org/wiki/Triplet_loss
Yes, this is called triplet loss and has made embeddings much better.