How do you run fine-tuned models in a multi-tenant/shared GPU setup?

1 point by iamzycon a year ago · 0 comments · 1 min read


I'm considering setting up a fine-tuning and inference platform for Llama that would let customers host their fine-tuned models. Would it be necessary to allocate dedicated infrastructure for each fine-tuned model, or could shared infrastructure work? Are there any existing solutions for this?
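One common pattern for the shared-infrastructure case is to keep a single base model resident on the GPU and attach lightweight per-tenant adapters (e.g. LoRA) at request time, evicting cold adapters from a small cache. The sketch below is a toy, GPU-free illustration of the routing and eviction idea only; all class and variable names are hypothetical, and production systems (vLLM's multi-LoRA support, LoRAX, etc.) implement this far more efficiently.

```python
# Toy sketch: one shared base model, per-tenant adapters swapped in on
# demand with LRU eviction. No real model weights are involved; the
# "adapter" is a placeholder string standing in for LoRA deltas.

class SharedModelServer:
    def __init__(self, base_model: str, max_loaded: int = 4):
        self.base_model = base_model
        self.max_loaded = max_loaded      # adapter cache capacity
        self.loaded = {}                  # tenant_id -> adapter (stub)
        self.lru = []                     # least-recently-used order

    def _load_adapter(self, tenant_id: str) -> None:
        # Evict the least-recently-used adapter when the cache is full.
        if len(self.loaded) >= self.max_loaded:
            evicted = self.lru.pop(0)
            del self.loaded[evicted]
        self.loaded[tenant_id] = f"adapter-weights-for-{tenant_id}"

    def generate(self, tenant_id: str, prompt: str) -> str:
        if tenant_id not in self.loaded:
            self._load_adapter(tenant_id)
        else:
            self.lru.remove(tenant_id)
        self.lru.append(tenant_id)
        # A real server would apply the tenant's LoRA deltas during the
        # forward pass; here we just tag the output for illustration.
        return f"[{self.base_model}+{tenant_id}] {prompt}"

server = SharedModelServer("llama-3-8b", max_loaded=2)
print(server.generate("tenant_a", "hello"))
```

The appeal of this design is that adapter weights are tiny relative to the base model, so dozens or hundreds of tenants can share one GPU; the trade-off is a cold-start cost when an evicted adapter must be reloaded.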

No comments yet.
