Show HN: LLMem, a read-through cache for OpenAI chat completions

llmem.com

1 point by c0g 2 years ago · 0 comments

When building a system around OAI, I found myself sending the same request multiple times while developing and testing other parts of the system. Besides wasting money this way, I was also throwing away responses that could later serve as training data for specializing a smaller LLM to my use case.
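
The core of it is just read-through caching keyed on the full request payload: look the request up first, and only call the API on a miss. Roughly this pattern, sketched with the modern openai client (an in-memory dict stands in here for the persistent store a real cache needs):

    import hashlib
    import json

    from openai import OpenAI

    client = OpenAI()            # reads OPENAI_API_KEY from the environment
    cache: dict[str, dict] = {}  # illustration only; persist this in practice

    def cached_chat_completion(**params) -> dict:
        # Key on the canonicalized request so any change to model,
        # messages, temperature, etc. is a cache miss.
        key = hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest()
        if key not in cache:
            resp = client.chat.completions.create(**params)
            # Store plain JSON so cache entries double as
            # (prompt, response) pairs for later fine-tuning.
            cache[key] = resp.model_dump()
        return cache[key]

    reply = cached_chat_completion(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say hello."}],
    )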

I’m hosting an open server at the moment, since I hit it from various networks across my projects, but you can easily run it as a local service instead.
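
Assuming you run it as a drop-in, OpenAI-compatible proxy (the usual shape for this kind of cache), pointing a client at it is just a base-URL swap. The port and /v1 path below are placeholders, not the documented endpoint:

    from openai import OpenAI

    # OPENAI_API_KEY is still read from the environment; only the
    # destination changes. Placeholder URL for a local instance:
    client = OpenAI(base_url="http://localhost:8000/v1")

    # Requests now go through the cache, so repeats during
    # development and testing are served locally instead of
    # hitting OpenAI again.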
