LLMs as Retrieval and Recommendation Engines

medium.com

3 points by pongogogo 6 months ago · 2 comments

taintech 6 months ago

This is a cool idea, but the cost is a killer. Normally, you would run recommendations and pre-cache them for the users or items they are associated with. Running a giant LLM for every user's recommendation is thousands of times more expensive and slower than current methods. It just doesn't seem practical for a large number of users.

  • pongogogo (OP) 6 months ago

    The post mentions one mitigation: use a large model to generate labels, then distill those labels into a smaller model to lower cost (though it doesn't provide an example).
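A minimal sketch of that distill-into-a-smaller-model idea. Everything here is illustrative, not from the post: the "teacher" stands in for an expensive LLM labeling (user, item) pairs offline, and the student is a tiny perceptron that serves requests cheaply.

```python
import random

def teacher_score(features):
    # Stand-in for an expensive LLM call judging relevance of a
    # (user, item) feature vector; here a fixed hidden rule.
    return 1 if features[0] + features[1] > 1.0 else 0

def train_student(data, epochs=50, lr=0.1):
    # Tiny perceptron distilled from the teacher's labels.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def student_score(model, x):
    # Cheap online inference: a couple of multiplies per request.
    w, b = model
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

random.seed(0)
# Offline: query the teacher once per sampled pair to build a label set.
samples = [(random.random(), random.random()) for _ in range(200)]
data = [(x, teacher_score(x)) for x in samples]
model = train_student(data)

# Online: the student serves every request; the teacher is never called.
agree = sum(student_score(model, x) == y for x, y in data) / len(data)
print(f"student agrees with teacher on {agree:.0%} of samples")
```

The offline/online split is the point: the large model's cost is paid once per training example rather than once per user request.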
