NVIDIA DGX Station

The ultimate deskside AI supercomputer, powered by NVIDIA Grace Blackwell.

Order Now

Contact a partner to order your system.

The Ultimate Deskside AI Supercomputer

NVIDIA DGX Station™ is the ultimate deskside supercomputer for building and running AI. Powered by the NVIDIA GB300 Grace™ Blackwell Ultra Desktop Superchip, it features a massive 784 GB of coherent memory and up to 20 petaFLOPS of AI compute performance, enabling seamless development and execution of massive AI workloads and supporting models of up to 1 trillion parameters. Preconfigured with the NVIDIA AI software stack, developers, researchers, and data scientists can rapidly develop, fine-tune, and run inference on large AI models and long-running AI agents locally, then seamlessly deploy to the data center or cloud.

NVIDIA Announces NemoClaw for OpenClaw Community

NemoClaw adds the security and privacy controls needed to run always-on AI assistants securely on NVIDIA RTX PCs, DGX Station, and DGX Spark.

DGX Spark and DGX Station with NVIDIA NemoClaw for Autonomous Agents

The ultimate platform for locally developing and deploying secure, supercomputing-intelligent, always-on AI agents.

Features

Technological Breakthroughs

NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip

The NVIDIA DGX Station features the NVIDIA Blackwell Ultra GPU connected to a high-performance NVIDIA Grace CPU using the NVIDIA NVLink®-C2C interconnect to deliver best-in-class system communication and performance.

NVIDIA Blackwell Generation Tensor Cores

NVIDIA DGX Station features NVIDIA Blackwell Generation Tensor Cores with support for 4-bit floating-point (NVFP4) AI. NVFP4 boosts performance and increases the size of next-generation models that the system's memory can support, while maintaining high accuracy.
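As a rough illustration of what 4-bit floating point means in practice, here is a toy round-to-nearest quantizer onto the E2M1 value grid (the numeric format underlying NVFP4) with one shared scale per block. This is a simplified sketch, not NVIDIA's implementation; real NVFP4 uses finer-grained micro-block scaling in hardware.

```python
# The 8 non-negative values representable in E2M1 (1 sign, 2 exponent, 1 mantissa bits).
E2M1_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(values):
    """Quantize a block of floats to FP4 with one shared scale for the block."""
    # Scale the block so its largest magnitude maps to the largest FP4 value (6.0).
    amax = max(abs(v) for v in values) or 1.0
    scale = amax / 6.0
    out = []
    for v in values:
        mag = abs(v) / scale
        # Round-to-nearest onto the small E2M1 grid.
        q = min(E2M1_GRID, key=lambda g: abs(g - mag))
        out.append(scale * q * (1.0 if v >= 0 else -1.0))
    return out

print(quantize_fp4([0.12, -0.8, 2.4, -6.0, 0.02]))
# → [0.0, -1.0, 2.0, -6.0, 0.0]
```

The payoff is that each weight now occupies 4 bits instead of 16 or 32, which is what lets much larger models fit in a fixed memory budget.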

NVIDIA ConnectX-8 SuperNIC

The NVIDIA ConnectX®-8 SuperNIC™ supercharges hyperscale AI computing workloads. With up to 800 gigabits per second (Gb/s), ConnectX-8 SuperNIC delivers extremely fast, efficient network connectivity, and supports linking up to two DGX Stations to scale model capacity and performance even further.
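For a sense of what 800 Gb/s means when linking two stations, here is a back-of-the-envelope calculation of the time to move a model's weights across the link. The model size is an illustrative assumption, and real throughput lands below line rate due to protocol overhead.

```python
# How long does it take to move model weights between two linked
# DGX Stations over an 800 Gb/s ConnectX-8 link? (Illustrative only.)

LINK_GBPS = 800                   # ConnectX-8 peak rate, gigabits per second
weights_gb = 70e9 * 2 / 1e9       # assumed 70B-parameter model at 2 bytes/param (FP16)

seconds = weights_gb * 8 / LINK_GBPS   # gigabytes -> gigabits, divided by line rate
print(f"{weights_gb:.0f} GB of weights move in ~{seconds:.2f} s at line rate")
# → 140 GB of weights move in ~1.40 s at line rate
```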

Preconfigured AI Software Stack

DGX Station features an optimized, preconfigured operating system, Ubuntu with NVIDIA AI Developer Tools. This environment comes pre-installed with system-specific configurations and essential CUDA-X™ libraries, providing a seamless, full-stack foundation for accelerating AI development, data science, and local inference workloads.

NVIDIA NVLink-C2C Interconnect

NVIDIA NVLink-C2C extends the industry-leading NVLink technology to a chip-to-chip interconnect between the GPU and CPU, enabling high-bandwidth coherent data transfers between processors and accelerators.

Large Coherent Memory for AI

AI models continue to grow in scale and complexity. NVIDIA Grace Blackwell Ultra's large coherent memory lets massive-scale models be trained and run efficiently within a single memory pool, because the NVLink-C2C superchip interconnect bypasses the bottlenecks of traditional CPU-plus-GPU systems.
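To see why the coherent pool matters for the trillion-parameter models mentioned above, here is a rough weights-only footprint at a few precisions. This sketch ignores KV cache, activations, and runtime overhead, so real requirements are higher.

```python
# Approximate memory footprint of a 1-trillion-parameter model's weights
# at different precisions (weights only; illustrative).

PARAMS = 1e12

for name, bytes_per_param in [("FP16", 2), ("FP8", 1), ("NVFP4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{name:>5}: {gb:,.0f} GB of weights")
# → FP16: 2,000 GB / FP8: 1,000 GB / NVFP4: 500 GB
```

At NVFP4, a 1-trillion-parameter model's weights come to roughly 500 GB, which is why low-precision formats plus a large coherent memory pool make such models practical on a deskside system.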

Workload Optimized Power-Shifting

NVIDIA DGX Stations take advantage of AI-based system optimizations that intelligently shift power based on the currently active workload, continually maximizing performance and efficiency.

NVIDIA RTX PRO GPU

NVIDIA DGX Station can be configured with up to one additional NVIDIA RTX PRO™ Blackwell Generation GPU in addition to the GB300 Superchip. This combination gives developers the ability to pair data center-grade AI compute capability with powerful, ray-traced visualization and simulation.

Workloads

The Ultimate AI and Data Science Desktop

AI Development

With an extensive library of NVIDIA CUDA-optimized libraries to accelerate deep learning and machine learning training, combined with NVIDIA DGX Station’s massive memory and superchip throughput, NVIDIA’s accelerated computing platform provides the ultimate AI development platform across industries, from developing predictive maintenance, to medical imaging analysis, to natural language processing applications.


Data Science

With NVIDIA AI software, including RAPIDS™ open-source software libraries, GPUs substantially reduce infrastructure costs and provide superior performance for end-to-end data science workflows. The large coherent memory pool provided by the NVIDIA DGX Station superchip allows for massive data lakes to be ingested directly into memory, removing data science bottlenecks and accelerating throughput.


AI Inference

AI models are rapidly expanding in size, complexity, and diversity—pushing the boundaries of what’s possible. NVIDIA DGX Stations accelerate inferencing for running large AI models locally on the system, delivering incredible speed for large language model (LLM) token generation, data analysis, content creation, AI chatbots, and more. 
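A common rule of thumb for single-stream LLM token generation is that decode is memory-bandwidth bound: every generated token requires reading roughly the whole model from memory, so tokens/s is bounded by bandwidth divided by model size. The bandwidth figure below is a placeholder assumption, not an official DGX Station specification.

```python
# Roofline-style upper bound on LLM decode throughput:
# tokens/s ≈ memory bandwidth / bytes read per token ≈ bandwidth / model size.

BANDWIDTH_GBPS = 8000           # hypothetical GPU memory bandwidth in GB/s (assumption)
model_gb = 70e9 * 0.5 / 1e9     # assumed 70B-parameter model at NVFP4 (0.5 bytes/param)

tokens_per_sec = BANDWIDTH_GBPS / model_gb
print(f"~{tokens_per_sec:.0f} tokens/s upper bound for a {model_gb:.0f} GB model")
```

This also shows why lower-precision formats speed up inference: halving bytes per parameter halves the data read per token and roughly doubles the bound.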


Run Autonomous Agents More Safely

NVIDIA NemoClaw is an open-source stack that adds privacy and security controls to OpenClaw. With one command, anyone can run always-on, self-evolving agents anywhere.


Physical AI

DGX Station features a powerful GB300 superchip and can be combined with an additional RTX PRO 6000 to deliver best-in-class compute, simulation, and visualization on the desktop. This combination is ideal for building and deploying Physical AI—which requires powerful local training, inference, and simulation to train, test, and continually optimize autonomous systems and visual AI agents.


Personal Cloud

The NVIDIA DGX Station can serve as an individual desktop for one user to run advanced AI models using local data, or as a centralized compute node for multiple team members to fine-tune and run IP-specific models on-demand. DGX Station supports NVIDIA Multi-Instance GPU (MIG) technology to partition into as many as seven instances for local development with multiple users, each fully isolated with its own high-bandwidth memory, cache, and compute cores – and easily scaled out to larger MIG deployments in the cloud and data center. This gives administrators the ability to support every workload, from the smallest to the largest, with guaranteed quality of service (QoS) and extends the reach of accelerated computing resources to every user.
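A rough sketch of what a full seven-way MIG split looks like in terms of per-user memory. The 288 GB GPU memory figure is an assumption for illustration, and real MIG profiles come in fixed sizes rather than an exact even division.

```python
# Approximate dedicated memory per MIG instance in a full seven-way split.
# GPU memory figure is an assumption, not an official specification.

GPU_MEMORY_GB = 288   # assumed Blackwell Ultra GPU memory capacity
MAX_INSTANCES = 7     # MIG supports partitioning into up to seven instances

per_instance = GPU_MEMORY_GB / MAX_INSTANCES
print(f"~{per_instance:.0f} GB of dedicated memory per MIG instance")
```

Because each instance has its own memory, cache, and compute cores, one user's workload cannot starve another's, which is what makes the quality-of-service guarantee possible.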


NVIDIA GB200 Developer Kit

For HPC (high-performance computing) workloads, such as CAE (computer-aided engineering), the NVIDIA GB200 Developer Kit gives developers powerful performance, FP64 compute, and large coherent memory in a deskside system to develop, run, and test applications.

Specifications

DGX Station Specifications*

GPU: 1x NVIDIA Blackwell Ultra

CPU: 1x Grace 72-Core Neoverse V2

CPU Memory | Bandwidth: 496 GB LPDDR5X | 396 GB/s

Networking | Peak Bandwidth: NVIDIA ConnectX-8 SuperNIC | Up to 800 Gb/s | Ethernet

Operating System: Ubuntu with NVIDIA AI Developer Tools

Optional GPU:
NVIDIA RTX PRO 6000 Workstation Edition
RTX PRO 6000 Blackwell Max-Q Workstation Edition
RTX PRO 4000 Blackwell SFF Edition
RTX PRO 2000 Blackwell

Network Ports:
2x QSFP112 (400 Gb/s per port)
1x RJ45 10 GbE
1x RJ45 1 GbE (BMC for system management)

Expansion Slots:
1x PCIe Gen 5 x16
2x PCIe Gen 5 x16 (x8 electrical)

Additional Ports and Connectors⁴:
Front IO: 2x USB Type C (USB 3.1), 2x USB Type A (USB 3.1), Audio
Rear IO: 4x USB Type A, USB Micro-B (BMC), mDP⁵ (BMC), Audio

  1. Peak rates are based on GPU boost clock.
  2. All Tensor Core specifications are with sparsity unless otherwise noted.
  3. Without sparsity.
  4. Support may vary by system. Check with your OEM to confirm specific support and configurations.
  5. Mini DisplayPort is intended only for system management.

Playbooks for Everyone

Leverage our curated playbooks to get started on various AI projects on DGX Station. Playbooks contain step-by-step recipes to demonstrate the art of the possible.

Partners

Built by Global OEM Partners. Coming in 2026.

Sign up to be notified when DGX Station becomes available
