AIME G500 - Multi GPU Workstation | AIME


Definable GPU Configuration

Choose the desired configuration among the most powerful NVIDIA GPUs for Deep Learning and Rendering:

Up to 2x NVIDIA RTX Pro 6000 Blackwell Workstation 96GB

The first GPU of NVIDIA's next-generation Blackwell architecture is the RTX Pro 6000 Workstation Edition, with unmatched 24,064 CUDA cores and 752 Tensor cores of the 5th generation.
With its 96 GB of GDDR7 GPU memory and an impressive 1.6 TB/s memory bandwidth, it sets a new standard in GPU computing. The NVIDIA RTX Pro 6000 Blackwell Workstation Edition is currently the most powerful workstation GPU available.

Up to 4x NVIDIA RTX Pro 6000 Blackwell Max-Q 96GB

The NVIDIA RTX Pro 6000 Blackwell Max-Q has the same technical specifications as the Workstation Edition with two important differences: it is limited to an efficient 300 W power intake and comes in an active-blower GPU format. The dense 2-slot format of the RTX Pro 6000 Blackwell Max-Q allows up to four GPUs to be packed into an AIME G500 workstation.
For GPU-memory-demanding tasks like large language models, a 4x RTX Pro 6000 Blackwell Max-Q setup offers a total of 384 GB of fast GPU memory, which allows running LLMs with up to 320B parameters in fp8 or even 640B parameters in fp4 precision.
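As a rough sanity check of these figures, the following sketch (plain Python; the 15% memory headroom reserved for KV cache and runtime overhead is an illustrative assumption, not a measured value) estimates how many parameters fit into the 384 GB aggregate memory at different weight precisions:

```python
# Back-of-the-envelope estimate: how many parameters fit into the
# aggregate GPU memory of a 4x RTX Pro 6000 Blackwell Max-Q setup.
GPU_MEMORY_GB = 96            # per RTX Pro 6000 Blackwell Max-Q
NUM_GPUS = 4
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

total_gb = GPU_MEMORY_GB * NUM_GPUS   # 384 GB aggregate GPU memory
headroom = 0.85                       # assumed reserve for KV cache, activations, runtime

for dtype, nbytes in BYTES_PER_PARAM.items():
    # treating 1 GB as ~1e9 bytes gives the parameter count in billions
    max_params_b = total_gb * headroom / nbytes
    print(f"{dtype}: roughly {max_params_b:.0f}B parameters fit in {total_gb} GB")
```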

Up to 4x NVIDIA RTX Pro 5000 Blackwell 48GB

The RTX PRO 5000 is the workhorse of NVIDIA's latest Blackwell GPU generation, with 14,080 CUDA cores, 440 Tensor cores of the 5th generation, 48 GB of GDDR7 memory and an energy profile of 300 watts. Thanks to its superior performance and even better price/performance ratio, the RTX PRO 5000 Blackwell is well ahead of all previous NVIDIA 48 GB GPU generations such as the RTX A6000 and RTX 6000 Ada.

Up to 4x NVIDIA RTX Pro 4500 Blackwell 32GB

The RTX PRO 4500 of NVIDIA's latest Blackwell GPU generation is a strong and efficient entry card. With 10,496 CUDA cores, 328 Tensor cores of the 5th generation, 32 GB of GDDR7 memory and a low energy profile of 200 watts, it is well ahead of all previous NVIDIA RTX A4500/4500 Ada and even RTX A5000/5000 Ada series cards in performance, price/performance and memory capacity.

Up to 2x NVIDIA RTX 5090 32GB

The GeForce™ RTX 5090 is the flagship of NVIDIA's GeForce Blackwell GPU generation and the direct successor of the RTX 4090. The RTX 5090 packs 680 fifth-generation Tensor Cores and 21,760 next-gen CUDA® cores with 32 GB of GDDR7 graphics memory for unprecedented rendering, AI, graphics and compute performance. Thanks to its triple-fan architecture, the noise level of the RTX 5090 makes it a suitable choice for running such a powerful GPU in an office environment.

All NVIDIA GPUs are supported by NVIDIA's CUDA-X AI SDK, including cuDNN and TensorRT, which power nearly all popular deep learning frameworks.
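A quick way to confirm that the CUDA stack is picked up by a framework on such a system is a check like the following PyTorch sketch (TensorRT would typically be verified separately through its own Python package):

```python
# Sanity check that the CUDA stack (driver, CUDA runtime, cuDNN)
# is visible to a deep learning framework, shown here with PyTorch.
import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA version:  ", torch.version.cuda)
print("cuDNN version: ", torch.backends.cudnn.version())

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
```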

Threadripping Pro CPU Performance

The high-end AMD Threadripper Pro 7000 and Pro 9000 CPU series, designed for workstations, delivers up to 96 cores and 192 threads per CPU at an unbeaten price/performance ratio, supporting the latest DDR5 memory and PCIe 5.0 technology.

The 128 available PCIe 5.0 lanes of the AMD Threadripper Pro CPU allow the highest interconnect and data transfer rates between all GPUs and the CPU.
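One way to inspect the resulting GPU-to-GPU connectivity from within a framework is a peer-access check, sketched here with PyTorch on a multi-GPU configuration:

```python
# Report whether direct GPU-to-GPU (peer-to-peer) access over PCIe is
# available between each pair of installed GPUs.
import torch

n = torch.cuda.device_count()
for src in range(n):
    for dst in range(n):
        if src != dst:
            ok = torch.cuda.can_device_access_peer(src, dst)
            print(f"GPU {src} -> GPU {dst}: peer access {'yes' if ok else 'no'}")
```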

A large number of available CPU cores can improve performance immensely when the CPU is used for tasks like preprocessing and delivering data to feed the GPUs optimally with workloads.
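In practice this usually means running the input pipeline across many worker processes. The sketch below illustrates the idea with a PyTorch DataLoader; the dataset is a random placeholder and the worker count is an assumption to be tuned to the actual core count:

```python
# Use many CPU cores for input preprocessing so the GPUs stay busy:
# a PyTorch DataLoader with multiple worker processes and pinned memory.
import torch
from torch.utils.data import DataLoader, TensorDataset

# placeholder dataset: 2,048 random "images" with random labels
dataset = TensorDataset(torch.randn(2_048, 3, 64, 64),
                        torch.randint(0, 1000, (2_048,)))

loader = DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    num_workers=32,      # assumption: scale with the available CPU cores
    pin_memory=True,     # enables faster host-to-GPU transfers
    prefetch_factor=4,   # keep batches queued ahead of the GPU
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for images, labels in loader:
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... forward/backward pass would go here ...
    break
```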

More than 40 TB High-Speed SSD Storage

Deep learning usually involves large amounts of data to be processed and stored. High throughput and fast access times to that data are essential for short turnaround times.

The AIME G500 can be configured with up to two 8 TB onboard M.2 PCIe 5.0 NVMe SSDs, plus two additional U.2 NVMe SSDs with up to 15.36 TB each in the front bays. In total, more than 40 TB of high-speed SSD storage is possible.

All SSDs are connected via PCIe lanes directly to the CPU and main memory, with read rates of up to 6000 MB/s and write rates of up to 5000 MB/s.
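To verify sequential read throughput on a configured system, a minimal check along these lines can be used (the file path is a placeholder, and the OS page cache can inflate results for files smaller than RAM):

```python
# Minimal sequential-read throughput check against a local NVMe SSD.
import os
import time

PATH = "/data/testfile.bin"     # assumption: a large file on the NVMe volume
CHUNK = 16 * 1024 * 1024        # read in 16 MiB chunks

size = os.path.getsize(PATH)
start = time.perf_counter()
with open(PATH, "rb", buffering=0) as f:
    while f.read(CHUNK):
        pass
elapsed = time.perf_counter() - start
print(f"read {size / 1e6:.0f} MB in {elapsed:.2f} s "
      f"({size / 1e6 / elapsed:.0f} MB/s)")
```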

A Workstation suitable for Office and Server Room

The AIME G500 was designed as an office-compatible PC workstation with server-grade hardware. For use in an office environment, we recommend limiting the configuration to a maximum of two GPUs.

When set up in a ventilated, dedicated server room, up to four GPUs can be used without restrictions. The G500 supports IPMI LAN and a BMC (Baseboard Management Controller) for remote control and monitoring of the hardware, essential for serious server setups.

Well Balanced Components

All of our components have been selected for their energy efficiency, durability, compatibility and high performance. They are perfectly balanced, so there are no performance bottlenecks. We optimize our hardware in terms of cost per performance, without compromising endurance and reliability.

Tested with Real Life Deep Learning Applications

The AIME G500 was originally designed for our own deep learning application needs and evolved through years of experience with deep learning frameworks and custom PC hardware building.

Our machines come with a preinstalled Linux OS configured with the latest drivers and frameworks such as PyTorch and TensorFlow. Just log in and start right away with your favorite deep learning framework.