Info
Nothing is more valuable than intelligence. Luckily, inferencing, tuning and even training gigantic cutting-edge LLMs have become a commodity. Thanks to state-of-the-art, open-weight LLMs you can download for free, the only thing you need is suitable hardware. We are proud to announce the world's first and only supercomputers powered by the Nvidia GH200 Grace-Hopper Superchip, B200 Blackwell, B300 Blackwell Ultra, AMD MI355X, GB200 Grace-Blackwell Superchip and GB300 Grace-Blackwell Ultra Superchip in quiet, handy and beautiful desktop form factors. They are currently by far the fastest AI desktop PCs in the world, and we are proud to be the only ones offering such extremely powerful systems. Our components are sourced only from well-known, first-rate manufacturers like Pegatron, ASUS, MSI, ASRock Rack, Gigabyte and Supermicro. If you are looking for a workstation for inferencing, fine-tuning and training of insanely huge LLMs, image and video generation and editing, as well as high-performance computing, we have you covered. We ship worldwide!
Example use case 1: Inferencing MiniMax M2.1 229B, Xiaomi MiMo V2 Flash 310B, ZAI GLM-4.7 358B, DeepSeek V3.2 Speciale 685B or Moonshot AI Kimi K2 1T Thinking

MiniMax M2.1 229B: https://huggingface.co/MiniMaxAI/MiniMax-M2.1
Xiaomi MiMo V2 Flash 310B: https://huggingface.co/XiaomiMiMo/MiMo-V2-Flash
ZAI GLM-4.7 358B: https://huggingface.co/zai-org/GLM-4.7
DeepSeek V3.2 Speciale 685B: https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Speciale
Moonshot AI Kimi K2 1T Thinking: https://huggingface.co/moonshotai/Kimi-K2-Thinking

These are currently by far the most powerful open-source models and even match GPT 5.2, Claude 4.5 Opus and Gemini 3 Pro. MiniMax M2.1 229B with 4-bit quantization needs at least 130GB of memory to run inference swiftly: at roughly half a byte per parameter, the 229B weights alone occupy about 115GB, and the KV cache and runtime buffers account for the rest. Luckily, GH200 has a minimum of 624GB and GB300 a minimum of 775GB. With GH200, MiniMax M2.1 229B in 4-bit can be run in VRAM only for ultra-high inference speed (significantly more than 100 tokens/s); a minimal launch sketch follows after use case 2. With (G)B200 Blackwell and (G)B300 Blackwell Ultra, this is also possible for Moonshot AI Kimi K2 1T Thinking, and you can likewise expect significantly more than 100 tokens/s. If the model is bigger than VRAM, you can only expect approx. 20-50 tokens/s. 4-bit quantization is the best trade-off between speed and accuracy, but is natively supported only by Blackwell. We recommend using vLLM and Nvidia Dynamo for inferencing.

Example use case 2: Fine-tuning Moonshot AI Kimi K2 1T Thinking with PyTorch FSDP and Q-LoRA

Tutorial: https://www.philschmid.de/fsdp-qlora-llama3
The ultimate guide to fine-tuning: https://arxiv.org/abs/2408.13296

Models need to be fine-tuned on your data to unlock their full potential. But efficiently fine-tuning bigger models like DeepSeek R1 0528 685B remained a challenge until now. The tutorial linked above walks you through fine-tuning with PyTorch FSDP and Q-LoRA with the help of Hugging Face TRL, Transformers, peft and datasets; the same recipe scales to much bigger models. Fine-tuning big models within a reasonable time requires special and beefy hardware! Luckily, GH200, B200 and GB300 are ideal for this task.
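For use case 1, here is a minimal sketch of running one of these models with vLLM's offline Python API. The model ID is taken from the link list above; the quantization choice and memory settings are illustrative assumptions, so check the vLLM documentation for the options your vLLM version and checkpoint actually support.

```python
# Minimal vLLM offline-inference sketch (illustrative; flags vary by vLLM version).
from vllm import LLM, SamplingParams

llm = LLM(
    model="MiniMaxAI/MiniMax-M2.1",  # model ID from the link list above
    tensor_parallel_size=1,          # a single GH200/GB300 superchip
    quantization="awq",              # assumed 4-bit scheme; depends on the checkpoint
    max_model_len=8192,              # cap the context length to bound KV-cache memory
    trust_remote_code=True,          # many new architectures ship custom model code
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain NVLink-C2C in two sentences."], params)
print(outputs[0].outputs[0].text)
```

For production serving, the same model can be exposed as an OpenAI-compatible endpoint with vLLM's built-in server instead of the offline API.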
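And for use case 2, a minimal sketch of Q-LoRA fine-tuning in the spirit of the linked tutorial, using Hugging Face TRL, Transformers, peft and datasets. The dataset is a placeholder, API details vary between TRL versions, and on multi-GPU systems you would launch this via accelerate/torchrun with an FSDP config, as the tutorial describes.

```python
# Illustrative Q-LoRA fine-tuning sketch (TRL/peft APIs vary slightly between versions).
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

model_id = "moonshotai/Kimi-K2-Thinking"  # from use case 2; any causal LM repo ID works

# Load the frozen base weights in 4-bit NF4 to shrink the memory footprint (the "Q" in Q-LoRA).
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, trust_remote_code=True
)

# Only the low-rank adapter matrices are trained (the "LoRA" part).
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules="all-linear", task_type="CAUSAL_LM",
)

dataset = load_dataset("HuggingFaceH4/no_robots", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="kimi-k2-qlora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        bf16=True,
    ),
)
trainer.train()
```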
Example use case 3: Generating videos with LTX-2, Mochi 1, HunyuanVideo or Wan 2.2

Lightricks LTX-2: https://huggingface.co/Lightricks/LTX-2
Mochi 1: https://github.com/genmoai/models
Tencent HunyuanVideo: https://aivideo.hunyuan.tencent.com/
Wan 2.2: https://wan.video/

LTX-2, Mochi 1, HunyuanVideo and Wan 2.2 are democratizing efficient video production for all. Generating videos requires special and beefy hardware! Mochi 1 and HunyuanVideo need 80GB of VRAM. Luckily, GH200, B200 and GB300 are ideal for this task: GH200 has a minimum of 144GB of GPU memory, GB300 a minimum of 288GB and B200 a minimum of 1.5TB. Minimal sketches for use cases 3 to 5 follow after use case 5.

Example use case 4: Image generation with Qwen-Image-2512, Flux 2 dev, Hunyuan Image 3.0, Z Image turbo, HiDream-I1, SANA-Sprint or SRPO

Qwen-Image-2512: https://huggingface.co/Qwen/Qwen-Image-2512
Flux 2 dev: https://huggingface.co/black-forest-labs/FLUX.2-dev
Hunyuan Image 3.0: https://github.com/Tencent-Hunyuan/HunyuanImage-3.0
Z Image turbo: https://modelscope.cn/models/Tongyi-MAI/Z-Image-Turbo
HiDream-I1: https://github.com/HiDream-ai/HiDream-I1
SANA-Sprint: https://nvlabs.github.io/Sana/Sprint/
Tencent SRPO: https://github.com/Tencent-Hunyuan/SRPO

These are the best image generators at the moment. And they are uncensored, too. SANA-Sprint is very fast and efficient. Flux 2 dev requires approximately 34GB of VRAM for maximum quality and speed; training the FLUX model needs more than 40GB of VRAM. SANA-Sprint requires up to 67GB of VRAM. Luckily, GH200 has a minimum of 144GB of GPU memory, GB300 a minimum of 288GB and B200 a minimum of 1.5TB.

Example use case 5: Image editing with Qwen-Image-Edit-2511, FLUX.1-Kontext-dev, OmniGen 2, Nvidia Add-it, HiDream-E1 or ICEdit

Qwen-Image-Edit-2511: https://huggingface.co/Qwen/Qwen-Image-Edit-2511
FLUX.1-Kontext-dev: https://bfl.ai/announcements/flux-1-kontext-dev
OmniGen 2: https://github.com/VectorSpaceLab/OmniGen2
Nvidia Add-it: https://research.nvidia.com/labs/par/addit/
HiDream-E1: https://github.com/HiDream-ai/HiDream-E1
ICEdit: https://river-zhang.github.io/ICEdit-gh-pages/

Qwen-Image-Edit-2511, FLUX.1-Kontext-dev, OmniGen 2, Add-it, HiDream-E1 and ICEdit are the most innovative and easy-to-use image editors at the moment. For maximum speed in high-resolution image generation and editing, beefier hardware than consumer graphics cards is needed. Luckily, GH200, B200 and GB300 excel at this task.
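As announced above, here are the minimal sketches for use cases 3 to 5. First, video generation: of the models listed under use case 3, Mochi 1 has a straightforward Hugging Face diffusers integration, so this sketch uses it. The prompt and frame count are arbitrary, and pipeline arguments may differ in your diffusers version.

```python
# Illustrative text-to-video sketch with Mochi 1 via Hugging Face diffusers.
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained(
    "genmo/mochi-1-preview", variant="bf16", torch_dtype=torch.bfloat16
)
pipe.to("cuda")  # with 80GB+ of VRAM (GH200/B200/GB300), no CPU offloading is needed

frames = pipe(
    "a drone shot of waves crashing against rugged cliffs at sunset",
    num_frames=84,  # arbitrary clip length
).frames[0]
export_to_video(frames, "mochi_demo.mp4", fps=30)
```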
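Next, image generation (use case 4). diffusers support for the very latest FLUX release may differ, so this sketch uses the well-established FluxPipeline with FLUX.1-dev; newer FLUX checkpoints follow the same pattern where supported.

```python
# Illustrative text-to-image sketch with FLUX via Hugging Face diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

image = pipe(
    "a quiet desktop supercomputer on a designer desk, studio lighting",
    height=1024,
    width=1024,
    guidance_scale=3.5,      # typical value for FLUX.1-dev
    num_inference_steps=50,
).images[0]
image.save("flux_demo.png")
```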
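Finally, image editing (use case 5): a sketch of instruction-based editing with FLUX.1-Kontext-dev via diffusers. The pipeline class name and call signature are to the best of our knowledge and may vary by diffusers version; the input image is a placeholder.

```python
# Illustrative instruction-based image-editing sketch with FLUX.1-Kontext-dev.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

source = load_image("my_photo.png")  # placeholder input image
edited = pipe(
    image=source,
    prompt="replace the background with a mountain lake at dawn",
    guidance_scale=2.5,
).images[0]
edited.save("edited.png")
```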
Example use case 6: Video editing with ReCo, AutoVFX, SkyReels-A2, VACE or Lucy Edit

ReCo: https://zhw-zhang.github.io/ReCo-page/
AutoVFX: https://haoyuhsu.github.io/autovfx-website/
SkyReels-A2: https://skyworkai.github.io/skyreels-a2.github.io/
VACE: https://ali-vilab.github.io/VACE-Page/
Lucy Edit: https://github.com/DecartAI/Lucy-Edit-ComfyUI/

ReCo, AutoVFX, SkyReels-A2, VACE and Lucy Edit are the most innovative and easy-to-use video editors at the moment. For maximum speed in high-resolution video editing, beefier hardware than consumer graphics cards is needed. Luckily, GH200, B200 and GB300 excel at this task.

Example use case 7: Deep research with DR Tulu, MiroThinker, WebThinker or Tongyi DeepResearch

DR Tulu: https://github.com/rlresearch/dr-tulu
MiroThinker: https://huggingface.co/miromind-ai/MiroThinker-v1.5-235B
WebThinker: https://github.com/RUC-NLPIR/WebThinker
Tongyi DeepResearch: https://tongyi-agent.github.io/blog/introducing-tongyi-deep-research/

DR Tulu, MiroThinker, WebThinker and Tongyi DeepResearch enable large reasoning models to autonomously search, deeply explore web pages and draft research reports, all within their thinking process. The hardware requirements vary depending on the particular LLM of choice (see above).

Why should you buy your own hardware?

"You'll own nothing and you'll be happy?" No! Never bow to Satan and rent what you can own. In other areas, renting things you could own is very uncool and uncommon. Would you prefer to rent "your" car instead of owning it? Most people prefer to own their car, because it is much cheaper, it is an asset that has value, and it makes the owner proud and happy. The same is true for compute infrastructure. Even more so, because data and compute infrastructure are of great value and importance and are preferably kept on premises, not only for privacy reasons but also to keep control and mitigate risks. If somebody else has your data and your compute infrastructure, you are in big trouble. Speed, latency and ease of use are also much better when you have direct physical access to your hardware. With respect to AI, and specifically LLMs, there is another very important aspect: the first thing big tech taught their closed-source LLMs was to be "politically correct" (i.e., to lie) and to implement guardrails, "safety" and censorship to such an extent that the usefulness of these LLMs is severely limited. Luckily, the open-source tools are out there to build and tune AI that is really intelligent and really useful. But first, you need your own hardware to run it on.
What are the main benefits of GH200 Grace-Hopper and GB300 Grace-Blackwell Ultra?

They have enough memory to run and tune the biggest LLMs currently available.
Their performance in every regard is almost unreal (up to 10,000x faster than x86).
There are no alternative systems with the same amount of memory.
Optimized for memory-intensive AI and HPC performance.
Ideal for AI, especially inferencing and fine-tuning of LLMs.
Connect display and keyboard, and you are ready to go.
Ideal for HPC applications, e.g. vector databases.
You can use them as a server or a desktop/workstation.
Easily customizable, upgradable and repairable.
Privacy and independence from cloud providers.
Cheaper and much faster than cloud providers.
Reliable and energy-efficient liquid cooling.
Flexibility and the possibility of offline use.
Gigantic amounts of coherent memory.
No special infrastructure is needed.
They are very power-efficient.
The lowest possible latency.
They are beautiful.
Easy to transport.
CUDA enabled.
Very quiet.
Run Linux.
What is the difference to alternative systems?
The main difference between GH200/GB300 and alternative systems is that with GH200/GB300, the GPU is connected to the CPU via a 900 GB/s chip-to-chip NVLink instead of the 128 GB/s PCIe gen 5 used by traditional systems. Furthermore, multiple GB300 superchips and HGX B200/B300 boards are connected via 1800 GB/s NVLink, versus the orders-of-magnitude slower network or PCIe connections used by traditional systems. Since these connections are the main bottlenecks, GH200/GB300's high-speed links directly translate to much higher performance compared to traditional architectures. For example, streaming a 130GB model image over 900 GB/s NVLink-C2C takes roughly 0.15 seconds, versus about one second over 128 GB/s PCIe gen 5. Also, multiple NV-linked (G)B200 or (G)B300 act as a single giant GPU with one giant coherent memory pool. Since even PCIe gen 6 is much slower, Nvidia no longer offers B200 and B300 as PCIe cards (only as SXM modules or superchips). We highly recommend choosing NV-linked systems over systems connected via PCIe and/or network. Another key distinction from alternative systems is high-bandwidth memory (HBM). Since memory bandwidth strongly affects inference performance, HBM is essential.

What is the difference to 19-inch server models?

Form factor: 19-inch servers have a very distinct form factor. They are of low height and very long, e.g. 438 x 87.5 x 900 mm (17.24 x 3.44 x 35.43"). This makes them rather unsuitable to place anywhere other than in a 19-inch rack. Our GH200 and GB300 tower models have desktop form factors: 244 x 567 x 523 mm (9.6 x 22.3 x 20.6"), 255 x 565 x 530 mm (10 x 22.2 x 20.9"), 531 x 261 x 544 mm (20.9 x 10.3 x 21.4") or 252 x 692 x 650 mm (9.9 x 27.2 x 25.6"). This makes it possible to place them almost anywhere.

Noise: 19-inch servers are extremely loud. The average noise level is typically around 90 decibels, which is as loud as a subway train and exceeds the level considered safe for workers under long-term exposure. In contrast, our GH200 and GB300 tower models are very quiet (the factory setting is 25 decibels), and they can easily be adjusted to even lower or higher noise levels because each fan can be tuned individually and manually from 0 to 100% PWM duty cycle. Efficient cooling is ensured because our GH200 and GB300 tower models have more fans, and the low-revving Noctua fans have a much bigger diameter than their 19-inch counterparts, moving the same or even a much higher amount of air depending on the specific configuration and PWM tuning.

Transportability: 19-inch servers are not meant to be transported and consequently lack every feature in this regard. In addition, their form factor makes them rather unsuitable for transport. Our GH200 and GB300 tower models, in contrast, can be transported very easily. Our metal and mini cases even feature two handles, which makes moving them around very easy.

Infrastructure: 19-inch servers typically need quite some infrastructure to be deployed; at the very least, a 19-inch mounting rack is required. Our GH200 and GB300 models do not need any special infrastructure at all. They can be deployed quickly and easily almost anywhere.

Latency: 19-inch servers are typically accessed via the network, so there is always at least some latency. Our GH200 and GB300 tower models can be used as desktops/workstations; in this use case, latency is virtually non-existent.

Looks: 19-inch server models are not particularly aesthetically pleasing.
In contrast, our available case options are, in our humble opinion, by far the most beautiful there are.

Technical details of our GH200 workstations (base configuration)

Metal tower with two color choices: titan grey and champagne gold
Metal tower with two color choices: silver and black
Glass tower with four color choices: white, black, green or turquoise
Available air- or liquid-cooled
1x Nvidia GH200 Grace Hopper Superchip
1x 72-core Nvidia Grace CPU
1x Nvidia Hopper H100 Tensor Core GPU (on request)
1x Nvidia Hopper H200 Tensor Core GPU
480GB of LPDDR5X memory with error-correction code (ECC)
96GB of HBM3 (available on request) or 144GB of HBM3e memory per superchip
576GB (available on request) or 624GB of total fast-access memory
NVLink-C2C: 900 GB/s of bandwidth
Programmable from 450W to 1000W TDP (CPU + GPU + memory)
2x high-efficiency 2000W PSU
2x PCIe gen4/5 M.2 slots on board
2x/4x PCIe gen4/5 drives (NVMe)
2x/3x FHFL PCIe gen5 x16
1x/2x USB 3.0/3.2 ports
2x RJ45 10GbE ports
1x RJ45 IPMI port
1x Mini DisplayPort
Halogen-free LSZH power cables
Stainless steel bolts
Very quiet: the factory setting is 25 decibels (fan speed, and thus noise level, can be configured individually and manually from 0 to 100% PWM duty cycle)
3 years manufacturer's warranty
244 x 567 x 523 mm (9.6 x 22.3 x 20.6"), 255 x 565 x 530 mm (10 x 22.2 x 20.9"), 531 x 261 x 544 mm (20.9 x 10.3 x 21.4") or 252 x 692 x 650 mm (9.9 x 27.2 x 25.6")
30 kg (66 lbs) or 33 kg (73 lbs)

Optional components

Liquid cooling (highly recommended)
Bigger custom air-cooled heatsink
NIC Nvidia BlueField-3
NIC Nvidia ConnectX-7/8
NIC Intel 100G
WLAN + Bluetooth card
Up to 2x 8TB M.2 SSD
Up to 2x 256TB 2.5" SSD
Storage controller
RAID controller
Additional USB ports
Multi-display graphics card
Sound card
Mouse
Keyboard
Consumer or industrial fans
Intrusion detection
OS preinstalled
Anything possible on request
Need something different? We are happy to build custom systems to your liking.
Compute performance of one GH200

67 teraFLOPS FP64
1 petaFLOPS TF32
2 petaFLOPS FP16
4 petaFLOPS FP8

Benchmarks

llama.cpp on GH200 624GB: https://github.com/ggml-org/llama.cpp/discussions/18005
https://github.com/mag-/gpu_benchmark

GB300 Blackwell Ultra DGX Station

GB300 Blackwell Ultra 775GB is available now. Be one of the first in the world to get a GB300 desktop/workstation. Order now!
Compute performance of one GB300

3 teraFLOPS FP64
5 petaFLOPS TF32
10 petaFLOPS FP16
20 petaFLOPS FP8
40 petaFLOPS FP4

The Grace CPU

The Nvidia Grace CPU delivers twice the performance per watt of conventional x86-64 platforms and is currently the world's fastest ARM CPU. Grace-Grace superchip workstations are available on request.
Benchmarks
https://www.phoronix.com/review/nvidia-gh200-gptshop-benchmark
https://www.phoronix.com/review/nvidia-gh200-amd-threadripper
https://www.phoronix.com/review/aarch64-64k-kernel-perf
https://www.phoronix.com/review/nvidia-gh200-compilers
https://www.phoronix.com/review/nvidia-grace-epyc-turin

Try
Try before you buy. You can apply for remote testing. You will then be given login credentials for remote access. If you want to come by and see for yourself and run some tests, that is also possible any time.
Currently available for testing:
GH200 624GB

Apply via email: x@GPTshop.ai