Ask HN: LLM Use/Research Hardware Tiers
It seems there are tiers of hardware required for LLM use, both for interacting/asking questions and for training, but I don't understand them. There seem to be two ends: a) it runs on my Mac, or b) it needs 8x Nvidia H100 cards at USD 250k+.
What are some other tiers? What could be done with $10k, $50k, or $100k investments in compute? At least for use, you can get pretty far with ~$2k of consumer hardware; check out r/localllama if you want to learn more in general. If you do research, you may have access to decent HPCs. But what counts as research? Would loading some new documents into a model for training be research? If you do it in a new, special way, yes, for sure. If you do it to run a business, no.
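A rough way to map models onto these hardware tiers is a back-of-the-envelope memory estimate: weights take roughly (parameter count × bytes per weight), plus some overhead for activations and the KV cache. The sketch below is a simplification with an assumed 1.2x overhead factor; real usage depends on context length, runtime, and quantization format.

```python
def approx_model_memory_gb(params_billion: float,
                           bits_per_weight: float,
                           overhead: float = 1.2) -> float:
    """Rough (V)RAM estimate in GB: weight bytes times a fudge factor
    for activations/KV cache. The 1.2 overhead is an assumption."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight * overhead

# A 7B model at 4-bit quantization lands around 4 GB: consumer-GPU territory.
print(round(approx_model_memory_gb(7, 4), 1))
# A 70B model at 16-bit is ~168 GB: multi-GPU / datacenter territory.
print(round(approx_model_memory_gb(70, 16), 1))
```

By this estimate, the ~$2k consumer tier covers quantized models up to roughly the 7B-30B range, while unquantized 70B+ models push you toward the multi-GPU tiers.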