Ask HN: How to easily benchmark LLMs without an elaborate setup?
Is there a service that lets you call various LLMs (I'm mainly interested in completion APIs), open source or paid, on per-token pricing (i.e. no infra setup)? Is anyone working on essentially a wrapper for all LLMs? Knowing tokens/sec for different input and output token lengths would also get me most of the way there.

In case you're interested in speed results (tokens/second): I ran tests comparing Llama 2 7B, Gemma 7B and Mistral 7B across 6 different libraries, with 5 different input token lengths (20 to 5,000) and three different output token counts (100, 200 and 500), on an A100. Here are the results: https://inferless.com/learn/exploring-llms-speed-benchmarks-...
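If you do end up measuring throughput yourself against one of the hosted per-token providers, a rough sketch like the one below is usually enough. It assumes an OpenAI-compatible completions endpoint; the base_url, api_key and model name are placeholders for whichever provider you're testing, not any specific service.

    # Rough tokens/second measurement against an OpenAI-compatible
    # completions endpoint. base_url, api_key and model are placeholders.
    import time
    from openai import OpenAI

    client = OpenAI(base_url="https://example-provider/v1", api_key="YOUR_KEY")

    prompt = "Summarize the history of the internet in a few paragraphs."

    start = time.perf_counter()
    resp = client.completions.create(
        model="some-7b-model",   # placeholder model name
        prompt=prompt,
        max_tokens=200,
    )
    elapsed = time.perf_counter() - start

    completion_tokens = resp.usage.completion_tokens
    print(f"{completion_tokens} tokens in {elapsed:.2f}s "
          f"-> {completion_tokens / elapsed:.1f} tokens/s")

Run it a few times per input/output length and average, since single calls are noisy. For time-to-first-token you'd need streaming, which this simple version doesn't cover.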