Ask HN: Personalize LLM by fine-tuning it regularly with conversation history?

1 point by veganmosfet a year ago · 0 comments · 1 min read


Assuming it runs locally, would it be possible to fine-tune an open-source chain-of-thought (CoT) LLM (e.g. DeepSeek R1) regularly and incrementally on its conversation history? Would this still work with the distilled models?

Background (please correct me if I am wrong): inference cost grows with context length (roughly linearly per generated token with a KV cache, and quadratically when attention is computed over the full prompt), so it is costly to keep accumulating information in the context. By fine-tuning the model regularly, it would "learn" and adapt to the user or to a specific task. One could, for example, fine-tune overnight using otherwise idle compute. Thanks.
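For concreteness, here is a minimal sketch of what such a nightly job might look like, assuming a local DeepSeek R1 distilled checkpoint and the Hugging Face transformers, peft, and datasets libraries. The model name, the conversations_today.jsonl file, and all hyperparameters are illustrative assumptions, not a tested recipe; the idea is simply to train a small LoRA adapter on the day's transcripts rather than touching the base weights.

```python
# Sketch: incremental LoRA fine-tuning on the day's conversation history.
# All names, paths, and hyperparameters below are hypothetical placeholders.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # example distilled checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

# Attach a small LoRA adapter so only a few million parameters are trained;
# the nightly job updates the adapter, not the full model.
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Load today's transcripts (hypothetical JSONL with a "text" field per record)
# and tokenize them for causal language modeling.
conversations = Dataset.from_json("conversations_today.jsonl")
tokenized = conversations.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=conversations.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="nightly_adapter",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           learning_rate=1e-4,
                           logging_steps=10),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Save only the adapter; load it on top of the base model for tomorrow's chats.
model.save_pretrained("nightly_adapter")
```

Whether this actually personalizes the model well (rather than just overfitting to recent phrasing, or causing forgetting over many nights) is exactly the open question of the post.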

No comments yet.
