InferenceMAX: Open-Source Inference Benchmarking

newsletter.semianalysis.com

4 points by pella 2 months ago · 2 comments

pellaOP 2 months ago

- https://github.com/InferenceMAX/InferenceMAX

- "NVIDIA GB200 NVL72, AMD MI355X, Throughput Token per GPU, Latency Tok/s/user, Perf per Dollar, Cost per Million Tokes, Tokens per Provisioned Megawatt, DeepSeek R1 670B, GPTOSS 120B, Llama3 70B"
