Standard Instances
Our standard instances provide a balanced mix of CPU cores, RAM, and local SSD storage, covering a wide variety of use cases and giving you a solid foundation for your architecture.
| Size | RAM | CPU Cores | Min Local Storage | Max Local Storage | Price / Hour ({{ currency | uppercase }}) |
|---|---|---|---|---|---|
| Micro | 512 MB | 1 Core | 10 GB | 200 GB | {{ prices.opencompute.standard.micro[currency] | number:8 }} |
| Tiny | 1 GB | 1 Core | 10 GB | 400 GB | {{ prices.opencompute.standard.tiny[currency] | number:8 }} |
| Small | 2 GB | 2 Cores | 10 GB | 400 GB | {{ prices.opencompute.standard.small[currency] | number:8 }} |
| Medium | 4 GB | 2 Cores | 10 GB | 400 GB | {{ prices.opencompute.standard.medium[currency] | number:8 }} |
| Large | 8 GB | 4 Cores | 10 GB | 400 GB | {{ prices.opencompute.standard.large[currency] | number:8 }} |
| Extra-Large | 16 GB | 4 Cores | 10 GB | 800 GB | {{ prices.opencompute.standard.extra_large[currency] | number:8 }} |
| Huge | 32 GB | 8 Cores | 10 GB | 800 GB | {{ prices.opencompute.standard.huge[currency] | number:8 }} |
| Mega | 64 GB | 12 Cores | 10 GB | 800 GB | {{ prices.opencompute.standard.mega[currency] | number:8 }} |
| Titan | 128 GB | 16 Cores | 10 GB | 1.6 TB | {{ prices.opencompute.standard.titan[currency] | number:8 }} |
| Jumbo | 225 GB | 24 Cores | 10 GB | 1.6 TB | {{ prices.opencompute.standard.jumbo[currency] | number:8 }} |
CPU Optimized Instances
CPU Optimized Instances offer a higher CPU-to-memory ratio, giving a computational advantage for CPU-intensive workloads such as batch processing, media encoding and decoding, network appliances, or high-performance web servers.
| Size | RAM | CPU Cores | Min Local Storage | Max Local Storage | Price / Hour ({{ currency | uppercase }}) |
|---|---|---|---|---|---|
| Extra-Large | 16 GB | 8 Cores | 10 GB | 800 GB | {{ prices.opencompute.cpu.extra_large[currency] | number:8 }} |
| Huge | 32 GB | 16 Cores | 10 GB | 800 GB | {{ prices.opencompute.cpu.huge[currency] | number:8 }} |
| Mega | 64 GB | 32 Cores | 10 GB | 800 GB | {{ prices.opencompute.cpu.mega[currency] | number:8 }} |
| Titan | 128 GB | 40 Cores | 10 GB | 1.6 TB | {{ prices.opencompute.cpu.titan[currency] | number:8 }} |
Memory Optimized Instances
Memory Optimized Instances deliver the best performance-to-cost ratio for memory-intensive workloads and are ideal for RAM-hungry applications. They double the memory per core at a price up to almost 25 % lower than comparable Standard Instances.
| Size | RAM | CPU Cores | Min Local Storage | Max Local Storage | Price / Hour ({{ currency | uppercase }}) |
|---|---|---|---|---|---|
| Extra-Large | 16 GB | 2 Cores | 10 GB | 800 GB | {{ prices.opencompute.memory.extra_large[currency] | number:8 }} |
| Huge | 32 GB | 4 Cores | 10 GB | 800 GB | {{ prices.opencompute.memory.huge[currency] | number:8 }} |
| Mega | 64 GB | 8 Cores | 10 GB | 800 GB | {{ prices.opencompute.memory.mega[currency] | number:8 }} |
| Titan | 128 GB | 12 Cores | 10 GB | 1.6 TB | {{ prices.opencompute.memory.titan[currency] | number:8 }} |
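The saving versus Standard Instances can be sanity-checked per GB of RAM. A minimal sketch, using placeholder hourly prices (the real values are rendered into the table by the pricing template, so these figures are illustrative only):

```python
# Placeholder hourly prices -- actual values come from the pricing template above.
standard_huge = {"ram_gb": 32, "cores": 8, "price_per_hour": 0.20}
memory_huge   = {"ram_gb": 32, "cores": 4, "price_per_hour": 0.152}

# Same RAM, half the cores, lower price: compute the saving per GB of RAM.
saving = 1 - memory_huge["price_per_hour"] / standard_huge["price_per_hour"]
print(f"{saving:.0%} cheaper for the same amount of RAM")
```

With these placeholder prices the saving works out to 24 %, in line with the "up to almost 25 %" figure above.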
Storage Optimized Instances
Storage Optimized Instances have the same mix of CPU and RAM as our Standard Instances, but make use of larger drives, greatly expanding the overall data capacity. As a consequence, they lower the cost per GB by more than 60 %.
| Size | RAM | CPU Cores | Min Local Storage | Max Local Storage | Price / Hour ({{ currency | uppercase }}) |
|---|---|---|---|---|---|
| Extra-Large | 16 GB | 4 Cores | 1 TB | 2 TB | {{ prices.opencompute.storage.extra_large[currency] | number:8 }} |
| Huge | 32 GB | 8 Cores | 2 TB | 3 TB | {{ prices.opencompute.storage.huge[currency] | number:8 }} |
| Mega | 64 GB | 12 Cores | 3 TB | 5 TB | {{ prices.opencompute.storage.mega[currency] | number:8 }} |
| Titan | 128 GB | 16 Cores | 5 TB | 10 TB | {{ prices.opencompute.storage.titan[currency] | number:8 }} |
| Jumbo | 225 GB | 24 Cores | 10 TB | 15 TB | {{ prices.opencompute.storage.jumbo[currency] | number:8 }} |
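The cost-per-GB reduction quoted above can be checked with a quick calculation. A sketch with placeholder hourly prices (illustrative only; the real values are filled in by the pricing template):

```python
# Placeholder hourly prices -- actual values come from the pricing template above.
standard_titan = {"max_storage_gb": 1600,  "price_per_hour": 0.50}
storage_titan  = {"max_storage_gb": 10000, "price_per_hour": 0.70}

def cost_per_gb(instance: dict) -> float:
    """Hourly price divided by maximum local storage capacity."""
    return instance["price_per_hour"] / instance["max_storage_gb"]

reduction = 1 - cost_per_gb(storage_titan) / cost_per_gb(standard_titan)
print(f"cost per GB reduced by {reduction:.0%}")
```

Even a moderately higher hourly price is offset by the much larger drives, which is how the cost per GB drops by more than 60 %.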
GPU A30 Instances
GPU A30 instances are a versatile choice for AI inference, data analytics, and HPC workloads. Combining the Ampere architecture with powerful Tensor Cores and CUDA Cores, and equipped with 24 GB of high-bandwidth memory, the A30 delivers exceptional performance for enterprise AI and compute-intensive tasks.
| Size | RAM | CPU Cores | GPU Cards | Min Local Storage | Max Local Storage | Price / Hour ({{ currency | uppercase }}) |
|---|---|---|---|---|---|---|
| Small | 56 GB | 12 Cores | 1 GPU | 100 GB | 800 GB | {{ prices.opencompute.gpua30.small[currency] | number:8 }} |
| Medium | 90 GB | 16 Cores | 2 GPU | 100 GB | 1.2 TB | {{ prices.opencompute.gpua30.medium[currency] | number:8 }} |
| Large | 120 GB | 24 Cores | 3 GPU | 100 GB | 1.6 TB | {{ prices.opencompute.gpua30.large[currency] | number:8 }} |
| Huge | 225 GB | 48 Cores | 4 GPU | 100 GB | 1.6 TB | {{ prices.opencompute.gpua30.huge[currency] | number:8 }} |
V100 (GPU2) Instances
GPU2 instances, based on the Tesla V100, are versatile performers for AI, HPC, data science, and engineering simulations. Built on the Volta architecture, the V100 combines 640 Tensor Cores with high-performance CUDA Cores and 16 GB of HBM2 memory, delivering exceptional throughput for compute-intensive and deep learning workloads.
| Size | RAM | CPU Cores | GPU Cards | Min Local Storage | Max Local Storage | Price / Hour ({{ currency | uppercase }}) |
|---|---|---|---|---|---|---|
| Small | 56 GB | 12 Cores | 1 GPU | 100 GB | 800 GB | {{ prices.opencompute.gpu2.small[currency] | number:8 }} |
| Medium | 90 GB | 16 Cores | 2 GPU | 100 GB | 1.2 TB | {{ prices.opencompute.gpu2.medium[currency] | number:8 }} |
| Large | 120 GB | 24 Cores | 3 GPU | 100 GB | 1.6 TB | {{ prices.opencompute.gpu2.large[currency] | number:8 }} |
| Huge | 225 GB | 48 Cores | 4 GPU | 100 GB | 1.6 TB | {{ prices.opencompute.gpu2.huge[currency] | number:8 }} |
A40 (GPU3) Instances
A40 (GPU3) is the all-rounder for AR, VR, simulations, rendering, AI, and more. A combination of Ampere RT Cores, Tensor Cores, and CUDA Cores with 48 GB of graphics memory allows the A40 to deliver a unique feature set for visual computing workloads.
| Size | RAM | CPU Cores | GPU Cards | Min Local Storage | Max Local Storage | Price / Hour ({{ currency | uppercase }}) |
|---|---|---|---|---|---|---|
| Small | 56 GB | 12 Cores | 1 GPU | 100 GB | 800 GB | {{ prices.opencompute.gpu3.small[currency] | number:8 }} |
| Medium | 120 GB | 24 Cores | 2 GPU | 100 GB | 1.2 TB | {{ prices.opencompute.gpu3.medium[currency] | number:8 }} |
| Large | 224 GB | 48 Cores | 4 GPU | 100 GB | 1.6 TB | {{ prices.opencompute.gpu3.large[currency] | number:8 }} |
| Huge | 448 GB | 96 Cores | 8 GPU | 100 GB | 1.6 TB | {{ prices.opencompute.gpu3.huge[currency] | number:8 }} |
GPU A5000 Instances
GPU A5000 is the all-rounder for AR, VR, simulations, rendering, AI, and more. A combination of Ampere RT Cores, Tensor Cores, and CUDA Cores with 24 GB of graphics memory allows the A5000 to deliver a unique feature set for visual computing workloads.
| Size | RAM | CPU Cores | GPU Cards | Min Local Storage | Max Local Storage | Price / Hour ({{ currency | uppercase }}) |
|---|---|---|---|---|---|---|
| Small | 56 GB | 12 Cores | 1 GPU | 100 GB | 800 GB | {{ prices.opencompute.gpua5000.small[currency] | number:8 }} |
| Medium | 112 GB | 24 Cores | 2 GPU | 100 GB | 1.2 TB | {{ prices.opencompute.gpua5000.medium[currency] | number:8 }} |
| Large | 224 GB | 48 Cores | 4 GPU | 100 GB | 1.6 TB | {{ prices.opencompute.gpua5000.large[currency] | number:8 }} |
GPU 3080ti Instances
GPU 3080ti instances feature a powerful GPU with 10,240 CUDA cores, 320 Tensor cores, and 12 GB of GDDR6X memory, designed to accelerate AI tasks in creative applications.
| Size | RAM | CPU Cores | GPU Cards | Min Local Storage | Max Local Storage | Price / Hour ({{ currency | uppercase }}) |
|---|---|---|---|---|---|---|
| Small | 56 GB | 12 Cores | 1 GPU | 100 GB | 800 GB | {{ prices.opencompute.gpu3080ti.small[currency] | number:8 }} |
| Medium | 112 GB | 24 Cores | 2 GPU | 100 GB | 1.2 TB | {{ prices.opencompute.gpu3080ti.medium[currency] | number:8 }} |
| Large | 224 GB | 48 Cores | 4 GPU | 100 GB | 1.6 TB | {{ prices.opencompute.gpu3080ti.large[currency] | number:8 }} |
GPU RTX Pro 6000 Instances
GPU RTX Pro 6000 instances combine NVIDIA Blackwell architecture, 96 GB GDDR7 memory per GPU, and NVMe storage throughput to handle the heaviest AI and visualization tasks. Scale up to 8 GPUs per instance for uncompromised performance in training, simulation, and rendering.
| Size | RAM | CPU Cores | GPU Cards | Min Local Storage | Max Local Storage | Price / Hour ({{ currency | uppercase }}) |
|---|---|---|---|---|---|---|
| Small | 120 GB | 36 Cores | 1 GPU | 100 GB | 2 TiB | {{ prices.opencompute.gpurtx6000pro.small[currency] | number:8 }} |
| Medium | 240 GB | 72 Cores | 2 GPU | 100 GB | 3 TiB | {{ prices.opencompute.gpurtx6000pro.medium[currency] | number:8 }} |
| Large | 480 GB | 144 Cores | 4 GPU | 100 GB | 5 TiB | {{ prices.opencompute.gpurtx6000pro.large[currency] | number:8 }} |
| Huge | 960 GB | 288 Cores | 8 GPU | 100 GB | 10 TiB | {{ prices.opencompute.gpurtx6000pro.huge[currency] | number:8 }} |
Additional Services
Combine your instances with additional services to set up a secure, scalable infrastructure.
| Product | Details | Information | Price ({{ currency | uppercase }}) |
|---|---|---|---|
| Recovery Service | - | Learn more | 0.12000000 |
| Custom Templates | - | Learn more | {{ prices.opencompute.template[currency] | number:8 }} |
Network
Combine various networking services with your instances. All products are simple, easy to use and fully integrated in our platform.
| Product | Details | Information | Price ({{ currency | uppercase }}) |
|---|---|---|---|
| Elastic IP | IPv4 | Learn more | {{ prices.opencompute.eip_address[currency] | number:8 }} |
| Network Load Balancer | up to 10 services | Learn more | {{ prices.opencompute.network_load_balancer[currency] | number:8 }} |
| Traffic OUTBOUND | free tier of 1.42 GB / hr | Learn more | {{ prices.opencompute.traffic[currency] | number:8 }} |
| Traffic INBOUND | - | Learn more | FREE |
| Traffic INTRA | between all instances | Learn more | FREE |
| Traffic INTERNAL | between all zones | Learn more | FREE |
| Private Network | - | Learn more | FREE |
| Private Connect | Direct Port per month | Learn more | 500.00 |
| Private Connect | One-Off Setup Cost | Learn more | 2,500.00 |
| Private Connect Equinix | Direct Port per month | Learn more | 250.00 |
| Private Connect Equinix | One-Off Setup Cost | Learn more | 1,250.00 |
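Outbound traffic is billed only above the free tier of 1.42 GB per hour. A minimal sketch of how such a charge might be computed; the per-GB rate is a placeholder (the real value comes from the pricing template), and pooling the allowance over the whole billing period is an assumption that may differ from the actual metering:

```python
FREE_TIER_GB_PER_HOUR = 1.42  # free outbound allowance from the table above
PRICE_PER_GB = 0.02           # placeholder rate; real value comes from the template

def outbound_charge(gb_sent: float, hours: float) -> float:
    """Charge only the traffic exceeding the free allowance for the period.
    Assumes the allowance is pooled across the whole period (an assumption)."""
    billable = max(0.0, gb_sent - FREE_TIER_GB_PER_HOUR * hours)
    return billable * PRICE_PER_GB

# e.g. 2 TB sent over a 730-hour month
print(round(outbound_charge(2000, 730), 2))
```

Note that with the free tier alone, roughly 1 TB per month of outbound traffic (1.42 GB x 730 hours) incurs no charge at all.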
Licenses
Choose your Windows License depending on your needs.
| Product | Details | Information | Price ({{ currency | uppercase }}) |
|---|---|---|---|
| Windows License | up to 7 vCPUs | Learn more | {{ prices.licenses.license_win_tier1[currency] | number:8 }} |
| Windows License | 8 or more vCPUs | Learn more | {{ prices.licenses.license_win_tier2[currency] | number:8 }} |
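The tier split above (up to 7 vCPUs versus 8 or more) can be expressed as a simple lookup. A sketch using the template key names from the table; the helper function itself is hypothetical:

```python
def windows_license_tier(vcpus: int) -> str:
    """Pick the Windows license tier from the vCPU count, per the table above."""
    if vcpus < 1:
        raise ValueError("instance must have at least one vCPU")
    # Tier 1 covers up to 7 vCPUs; tier 2 covers 8 or more.
    return "license_win_tier1" if vcpus <= 7 else "license_win_tier2"

print(windows_license_tier(4))   # a 4-vCPU instance falls in tier 1
print(windows_license_tier(16))  # a 16-vCPU instance falls in tier 2
```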