The ultimate deskside AI supercomputer, powered by NVIDIA Grace Blackwell.
The Ultimate Deskside AI Supercomputer
NVIDIA DGX Station™ is the ultimate deskside supercomputer for building and running AI. Powered by the NVIDIA GB300 Grace™ Blackwell Ultra Desktop Superchip, it features a massive 784 GB of coherent memory and up to 20 petaFLOPS of AI compute performance, enabling seamless development and execution of massive AI workloads and supporting models of up to 1 trillion parameters. Preconfigured with the NVIDIA AI software stack, it lets developers, researchers, and data scientists rapidly develop, fine-tune, and run inference on large AI models and long-running AI agents locally, then seamlessly deploy to the data center or cloud.
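A rough back-of-envelope check of the 1-trillion-parameter claim (an assumption for illustration: 4-bit FP4 quantized weights, i.e. 0.5 bytes per parameter, which is how models of this scale are typically run locally):

```python
# Sizing sketch: how much memory do the weights of a 1T-parameter model need?
params = 1_000_000_000_000          # 1-trillion-parameter model
bytes_per_param = 0.5               # assumed FP4 (4-bit) quantization
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.0f} GB of weights")  # 500 GB of weights
```

At that precision the weights alone occupy roughly 500 GB, fitting in the system's coherent memory pool with headroom left for KV cache and activations.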
The Ultimate AI and Data Science Desktop
AI Development
Combining an extensive suite of NVIDIA CUDA-optimized libraries that accelerate deep learning and machine learning training with NVIDIA DGX Station's massive memory and superchip throughput, NVIDIA's accelerated computing platform provides the ultimate AI development environment across industries, from predictive maintenance to medical imaging analysis to natural language processing.
Data Science
With NVIDIA AI software, including RAPIDS™ open-source software libraries, GPUs substantially reduce infrastructure costs and provide superior performance for end-to-end data science workflows. The large coherent memory pool provided by the NVIDIA DGX Station superchip allows for massive data lakes to be ingested directly into memory, removing data science bottlenecks and accelerating throughput.
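A minimal sketch of the RAPIDS workflow described above, using cuDF's pandas-like API (assumes RAPIDS is installed, as it is in the DGX Station software stack; on a machine without it, the same aggregation runs in pure Python so the logic is still visible):

```python
# Sketch: GPU-accelerated groupby aggregation with RAPIDS cuDF.
try:
    import cudf  # available on DGX Station; data is ingested into GPU-coherent memory
    df = cudf.DataFrame({"sensor": ["a", "a", "b"], "reading": [1.0, 2.0, 3.0]})
    means = df.groupby("sensor")["reading"].mean().to_pandas().to_dict()
except ImportError:
    # No GPU stack here; perform the same aggregation in pure Python.
    rows = [("a", 1.0), ("a", 2.0), ("b", 3.0)]
    totals = {}
    for sensor, reading in rows:
        totals.setdefault(sensor, []).append(reading)
    means = {s: sum(v) / len(v) for s, v in totals.items()}
print(means)  # {'a': 1.5, 'b': 3.0}
```

Because cuDF mirrors the pandas API, existing data science code typically needs little or no change to run on the GPU.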
AI Inference
AI models are rapidly expanding in size, complexity, and diversity—pushing the boundaries of what’s possible. NVIDIA DGX Station accelerates inference for large AI models running locally on the system, delivering incredible speed for large language model (LLM) token generation, data analysis, content creation, AI chatbots, and more.
Run Autonomous Agents More Safely
NVIDIA NemoClaw is an open-source stack that adds privacy and security controls to OpenClaw. With one command, anyone can run always-on, self-evolving agents anywhere.
Physical AI
DGX Station features a powerful GB300 superchip and can be combined with an additional RTX PRO 6000 to deliver best-in-class compute, simulation, and visualization on the desktop. This combination is ideal for building and deploying Physical AI—which requires powerful local training, inference, and simulation to train, test, and continually optimize autonomous systems and visual AI agents.
Personal Cloud
The NVIDIA DGX Station can serve as an individual desktop for one user running advanced AI models on local data, or as a centralized compute node for multiple team members to fine-tune and run IP-specific models on demand. DGX Station supports NVIDIA Multi-Instance GPU (MIG) technology, which partitions the GPU into as many as seven instances for local development by multiple users, each fully isolated with its own high-bandwidth memory, cache, and compute cores, and easily scaled out to larger MIG deployments in the cloud and data center. This gives administrators the ability to support every workload, from the smallest to the largest, with guaranteed quality of service (QoS), and extends the reach of accelerated computing resources to every user.
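As a sketch of how an administrator might inspect MIG partitioning options, the standard `nvidia-smi mig` subcommands can be driven from Python (this assumes an NVIDIA driver with MIG support; on a machine without `nvidia-smi` the script simply reports that the tool is missing):

```python
# Sketch: list the MIG instance profiles GPU 0 supports via nvidia-smi.
import shutil
import subprocess

if shutil.which("nvidia-smi"):
    proc = subprocess.run(
        ["nvidia-smi", "mig", "-i", "0", "-lgip"],  # list supported GPU instance profiles
        capture_output=True, text=True,
    )
    out = proc.stdout or proc.stderr
    # Instances are then created from a profile name in that list, e.g.:
    #   sudo nvidia-smi mig -i 0 -cgi <profile>,<profile>,... -C
else:
    out = "nvidia-smi not found; run on a MIG-capable system"
print(out)
```

Profile names and counts vary by GPU, which is why the sketch lists them first rather than hard-coding one.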
NVIDIA GB200 Developer Kit
For high-performance computing (HPC) workloads, such as computer-aided engineering (CAE), the NVIDIA GB200 Developer Kit gives developers powerful performance, FP64 compute, and large coherent memory in a deskside system to develop, run, and test applications.
Specifications
DGX Station Specifications*
GPU: 1x NVIDIA Blackwell Ultra
CPU: 1x Grace 72-Core Neoverse V2
CPU Memory | Bandwidth: 496 GB LPDDR5X | 396 GB/s
Networking | Peak Bandwidth: NVIDIA ConnectX-8 SuperNIC | Up to 800 Gb/s | Ethernet
Operating System: Ubuntu with NVIDIA AI Developer Tools
Optional GPUs:
- NVIDIA RTX PRO 6000 Workstation Edition
- RTX PRO 6000 Blackwell Max-Q Workstation Edition
- RTX PRO 4000 Blackwell SFF Edition
- RTX PRO 2000 Blackwell
Network Ports:
- 2x QSFP112 (400 Gb/s per port)
- 1x RJ45 10 GbE
- 1x RJ45 1 GbE (BMC for system management)
PCIe Slots:
- 1x PCIe Gen 5 x16
- 2x PCIe Gen 5 x16 (x8 electrical)
Additional Ports and Connectors [4]:
- Front I/O: 2x USB Type-C (USB 3.1), 2x USB Type-A (USB 3.1), Audio
- Rear I/O: 4x USB Type-A, 1x USB Micro-B (BMC), 1x mDP [5] (BMC), Audio
1. Peak rates are based on GPU boost clock.
2. All Tensor Core specifications are with sparsity unless otherwise noted.
3. Without sparsity.
4. Support may vary by system. Check with your OEM to confirm specific support and configurations.
5. Mini DisplayPort is intended only for system management.
Playbooks for Everyone
Leverage our curated playbooks to get started on various AI projects on DGX Station. Playbooks contain step-by-step recipes to demonstrate the art of the possible.
Partners