Ask HN: How to build GPU compute marketplace?
Is it possible? Say Alice has 2 GPUs idle at the moment, Ben has 1 GPU, and Chris needs 3 GPUs for the next 12 hours. How would you build such a system, and what problems might occur? How do you handle someone turning off their machine? Does it even make sense to run training or inference on such a setup, given the latency?

There are already services like this, including vast.ai. Yeah, but how do they work exactly? Dropbox is rsync with a UI, Instagram is a blog with pictures, but what is this? How do you access someone else's compute? By overriding GPU drivers?

It's just a remote Docker context on a machine that already has the NVIDIA driver and container runtime installed, which you rent by the hour. More task-specific compute pools (like Stable Horde) make a lot of sense where applicable.
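The "remote docker context" answer can be made concrete. A rough sketch of what the renter's side might look like, assuming the provider's machine already has the NVIDIA driver and the NVIDIA Container Toolkit installed and the renter gets SSH access (the user and hostname here are made up for illustration, and the CUDA image tag is just an example):

```shell
# Point a Docker context at the rented machine over SSH
# ("renter@alice-gpu-box" is a hypothetical credential/host pair).
docker context create rented-box --docker "host=ssh://renter@alice-gpu-box"
docker context use rented-box

# Containers now run on the remote machine; --gpus exposes its GPUs
# to the container via the NVIDIA container runtime.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

A real marketplace would wrap this in isolation, metering, and billing, but the core access path is no more exotic than this.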