Ask HN: How to benchmark different LLM models in parallel?

3 points by dhruvagga a year ago · 2 comments · 1 min read


I was trying Langflow recently for some experiments in our open source project - https://github.com/middlewarehq/middleware to build a RAG over DORA metrics.

On my machine, Langflow makes everything super slow, so testing each model's output is painful. Is there a way I can get parallel output from different models to compare?

verdverm a year ago

Running models locally, on your development machine, will be slow. You need beefy GPUs to get good token/sec speeds.

Run the models in the cloud, each one on a separate machine, and then invoke them remotely. Alternatively, you can skip that time and cost entirely and use the various 3rd-party APIs directly.
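Once each model sits behind a remote endpoint, comparing them is just fanning one prompt out concurrently. Here's a minimal Python sketch; `query_model` is a hypothetical stand-in for whatever client call you actually use (an HTTP request to your hosted model, a 3rd-party API SDK, etc.):

```python
# Fan one prompt out to several models at once and collect the outputs.
from concurrent.futures import ThreadPoolExecutor

def query_model(model: str, prompt: str) -> str:
    # Hypothetical placeholder: swap in a real API/HTTP call per provider.
    return f"[{model}] response to: {prompt}"

def compare_models(models: list[str], prompt: str) -> dict[str, str]:
    # One thread per model: network-bound calls overlap instead of
    # running one after another, so total latency ~= the slowest model.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(query_model, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

if __name__ == "__main__":
    results = compare_models(["model-a", "model-b"], "Summarize DORA metrics.")
    for model, output in results.items():
        print(model, "->", output)
```

Threads are fine here since the work is I/O-bound; for many models or prompts you'd likely reach for an async client instead.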
