Show HN: We cut RAG latency ~2× by switching embedding model

myclone.is

27 points by vira28 25 days ago · 4 comments

novoreorx 25 days ago

Great article! I always feel that the choice of embedding model is quite important, but it's seldom discussed. Most RAG tutorials just tell you to use a common model like OpenAI's text embeddings, making it seem as though it doesn't really matter which one you pick. Even though I'm somewhat aware of this, I lack the knowledge and methods to determine which model is best suited for my scenario. Can you give some suggestions on how to evaluate that? Also, I'm wondering what you think about open-source embedding models like embeddinggemma-300m or e5-large.
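
A minimal sketch of the kind of evaluation being asked about: score each candidate model on a small hand-labeled set of query/relevant-document pairs drawn from your own data and compare recall@k. The corpus, queries, and Hugging Face model IDs below are illustrative assumptions, not anything from the article.

```python
# Sketch: compare candidate embedding models on your own retrieval data by
# measuring recall@k over a small hand-labeled set of query -> relevant-doc pairs.
# Note: e5-style models expect "query: " / "passage: " prefixes; omitted for brevity.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = {
    "doc1": "How to reset your account password",
    "doc2": "Pricing plans and billing details",
    "doc3": "Exporting your data as CSV",
}
labeled_queries = [
    ("forgot my login credentials", "doc1"),
    ("how much does the pro tier cost", "doc2"),
]

def recall_at_k(model_name: str, k: int = 1) -> float:
    model = SentenceTransformer(model_name)
    doc_ids = list(corpus)
    doc_emb = model.encode([corpus[d] for d in doc_ids], normalize_embeddings=True)
    hits = 0
    for query, relevant_id in labeled_queries:
        q_emb = model.encode([query], normalize_embeddings=True)[0]
        scores = doc_emb @ q_emb  # cosine similarity (vectors are normalized)
        top_k = [doc_ids[i] for i in np.argsort(-scores)[:k]]
        hits += relevant_id in top_k
    return hits / len(labeled_queries)

for name in ["intfloat/e5-large-v2", "google/embeddinggemma-300m"]:
    print(name, recall_at_k(name, k=1))
```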

sippeangelo 25 days ago

The biggest latency improvement I saw came from switching off OpenAI's API, which would take anywhere between 0.3 and 6 seconds(!) to embed the same two-word search query...
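
One way to see that spread for yourself is to time repeated embedding calls for a single short query. This sketch assumes the official openai Python client and the text-embedding-3-small model; swap in whichever endpoint or local model you actually use.

```python
# Sketch: measure wall-clock latency of repeated embedding calls for one short
# query. Assumes the official `openai` client and an API key in the environment.
import time
from openai import OpenAI

client = OpenAI()

def embedding_latencies(text: str, n: int = 20) -> list[float]:
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        client.embeddings.create(model="text-embedding-3-small", input=text)
        samples.append(time.perf_counter() - start)
    return sorted(samples)

lat = embedding_latencies("reset password")
print(f"p50={lat[len(lat) // 2]:.2f}s  max={lat[-1]:.2f}s")
```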

jawnwrap 25 days ago

Cool article, but nothing groundbreaking? Obviously if you reduce your dimensionality, storage and latency decrease... it's less data
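
Back-of-envelope version of that point, with illustrative numbers: a float32 index scales linearly with dimensionality, so halving the dimension roughly halves the bytes stored and moved per query. The vector count and dimensions below are assumptions, not figures from the article.

```python
# Back-of-envelope: float32 index size scales linearly with dimensionality.
def index_size_gb(num_vectors: int, dim: int, bytes_per_value: int = 4) -> float:
    return num_vectors * dim * bytes_per_value / 1e9

for dim in (1536, 768):
    print(f"{dim} dims -> {index_size_gb(10_000_000, dim):.1f} GB for 10M vectors")
```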
