Ask HN: How are LLMs served at scale?

1 point by KuriousCat a year ago · 2 comments · 1 min read

How do LLM servers handle contextual data? Is the context passed as a prefix to a stateless machine? (That would mean a lot of tokens have to be reprocessed during a session.) Or is a separate LLM instance created and maintained for each active session? (Expensive and inefficient.)
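For context, the stateless approach described above can be made concrete: each request resends the full conversation as a prefix, so the server reprocesses all earlier tokens unless it keeps a prefix/KV cache. A minimal sketch (all names and the token counter are illustrative stand-ins, not a real API):

```python
def count_tokens(text):
    # Crude stand-in for a real tokenizer.
    return len(text.split())

def stateless_turn(history, user_msg):
    """One request/response cycle against a stateless server:
    the whole conversation so far is resent as the prompt."""
    history = history + [("user", user_msg)]
    prompt = "\n".join(f"{role}: {msg}" for role, msg in history)
    processed = count_tokens(prompt)   # server recomputes everything
    reply = "ok"                       # placeholder model output
    history = history + [("assistant", reply)]
    return history, processed

history, total = [], 0
for turn in ["hello there", "tell me more", "thanks a lot"]:
    history, processed = stateless_turn(history, turn)
    total += processed
    print(f"turn processed {processed} tokens (cumulative {total})")
```

Note how the per-turn token count grows with session length; this is exactly the reprocessing cost the question points at, and why production servers cache the KV state of shared prefixes instead of recomputing them.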

neximo64 a year ago

The session is tied to a GPU cluster. It would actually be quite inefficient to switch from one GPU cluster to another mid-session, but it's needed in a failure scenario.

ansonhw a year ago

Good batching and tensor parallelization, probably.
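To make the batching idea above concrete, here is a toy sketch: pending prompts are pulled off a queue and run through the model in groups, amortizing the cost of each forward pass. The "model" is a placeholder; real serving stacks use continuous batching at the token level, which is far more sophisticated than this request-level version.

```python
from collections import deque

def run_batch(prompts):
    # Stand-in for one batched forward pass of the model.
    return [p.upper() for p in prompts]

def serve(queue, max_batch=4):
    """Drain the request queue in batches of up to max_batch."""
    results = []
    while queue:
        n = min(max_batch, len(queue))
        batch = [queue.popleft() for _ in range(n)]
        results.extend(run_batch(batch))
    return results

q = deque(["hi", "how are llms served", "what is batching", "bye", "extra"])
print(serve(q))  # five prompts handled in two batched passes
```

Tensor parallelism is orthogonal to this: it splits each forward pass across GPUs, while batching packs multiple requests into each forward pass.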
