Don't Buy These GPU's for Local AI Inference

aiflux.substack.com

4 points by ericdotlee 3 months ago · 1 comment

ericdotlee [OP] 3 months ago

With the recent release of Qwen3-Omni I've decided to put together my first local machine. As much as I'd like to just pick up a Beelink and flash it with Omarchy, I think I want a bit more horsepower.

However, the internet seems littered with "clever" local AI monstrosities that gang together four to six ancient Nvidia GPUs (priced today like the overpriced e-waste they are) and still get lackluster performance out of piles of M60s and P100s. In 2025, using hardware this old seems like a waste, or at least bad advice.
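For context on why those piles of old cards underwhelm: single-stream LLM decoding is largely memory-bandwidth bound, so a quick upper bound on tokens/sec is memory bandwidth divided by the bytes the weights occupy. Here's a minimal back-of-envelope sketch; the bandwidth figures are approximate spec-sheet numbers, and the 32B/4-bit model is just an illustrative assumption:

```python
# Rough upper bound: single-batch decoding must stream all weights
# per token, so tokens/sec <= memory bandwidth / model size in bytes.
def est_tokens_per_sec(params_b: float, bytes_per_param: float,
                       bandwidth_gbs: float) -> float:
    model_bytes = params_b * 1e9 * bytes_per_param
    return (bandwidth_gbs * 1e9) / model_bytes

# Illustrative: a 32B dense model at 4-bit quantization (~0.5 bytes/param).
cards = [
    ("Tesla M60 (GDDR5, ~160 GB/s per GPU)", 160),
    ("Tesla P100 16GB (HBM2, ~732 GB/s)", 732),
    ("RTX 3090 (GDDR6X, ~936 GB/s)", 936),
]
for name, bw in cards:
    print(f"{name}: ~{est_tokens_per_sec(32, 0.5, bw):.0f} tok/s ceiling")
```

Real numbers on Maxwell- and Pascal-era cards land well below that ceiling, since they predate fast fp16 and the kernels that modern quantized runtimes are tuned for.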

Curious whether this article seems like a good source of info on staying away from Intel and AMD GPUs for local inference. I might do some training eventually, but right now I'm more interested in light RAG and maybe some local coding.

Hoping to build something before the holiday season to keep my office warm with GPUs :).

Thanks!
