Show HN: 1.32 Petaflops hardware for local prototyping
autonomous.ai

I’ve been working on Vanta, a scalable AI hardware solution powered by 2–8 NVIDIA RTX 4090s, delivering up to 1.32 petaflops of FP32 compute in a compact form factor.
It’s built for startups, developers, and researchers to prototype, fine-tune, and run models of up to 70B parameters locally, so you own your hardware instead of renting compute.
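For rough context on why 70B fits on a box like this, here is a weights-only back-of-the-envelope (my numbers, not a spec sheet; it ignores KV cache and activation overhead, which add to the totals):

    # Weights-only VRAM sizing for a 70B-parameter model.
    # Ignores KV cache and activations, which need extra headroom.
    PARAMS = 70e9
    BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}
    GPU_VRAM_GB = 24  # RTX 4090

    for fmt, bytes_per in BYTES_PER_PARAM.items():
        weights_gb = PARAMS * bytes_per / 1e9
        gpus_needed = -(-weights_gb // GPU_VRAM_GB)  # ceiling division
        print(f"{fmt}: ~{weights_gb:.0f} GB of weights -> at least {gpus_needed:.0f}x 4090")

So an int4 quant fits comfortably on the 2-GPU config, while fp16 wants most of the 8-GPU box.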
- A 2-GPU setup costs $9k and breaks even in about 9 months vs. cloud rental at $0.69/GPU-hr (e.g. RunPod); rough math below.
- The 8-GPU setup at $40k saves $12k in year one compared to $48k in cloud costs.
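The break-even math for the 2-GPU config, assuming 24/7 utilization at the quoted rate (and ignoring electricity):

    # Break-even for the 2-GPU box vs. renting two 4090s in the cloud,
    # assuming round-the-clock utilization at the quoted rate.
    HARDWARE_COST = 9_000      # USD, 2-GPU Vanta
    CLOUD_RATE = 0.69          # USD per GPU-hour (e.g. RunPod 4090 pricing)
    GPUS = 2
    HOURS_PER_MONTH = 730      # ~24 * 365 / 12

    monthly_cloud_cost = CLOUD_RATE * GPUS * HOURS_PER_MONTH
    months_to_break_even = HARDWARE_COST / monthly_cloud_cost
    print(f"~${monthly_cloud_cost:,.0f}/month in cloud -> "
          f"break-even in {months_to_break_even:.1f} months")  # ~9 months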
It handles the usual AI frameworks: TensorFlow, PyTorch, ONNX, CUDA-optimized libraries, vLLM, SGLang, llama.cpp, and more.
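As one example, a minimal sketch of serving a large model across all GPUs in the box with vLLM's tensor parallelism; the model name is just a placeholder, pick one that fits your VRAM:

    # Minimal sketch: shard a model across the box's GPUs with vLLM.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="meta-llama/Llama-3.1-70B-Instruct",  # placeholder model
        tensor_parallel_size=8,                     # shard across 8x 4090
    )
    params = SamplingParams(max_tokens=128, temperature=0.7)
    outputs = llm.generate(["Explain tensor parallelism in one paragraph."], params)
    print(outputs[0].outputs[0].text)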
I can get one built in a day and shipped out quickly. Let me know what you think!

I'm sure it looks better in person, but the images kind of make it look like a wicker basket. Totally superficial take, I know.

I'm not really sure about this, but I'd love to understand how it can run multiple different workloads at once, and how fast.