AMD MI300X

19,456

Stream Processors

192GB

VRAM

5300

GB/s

Data Center
Updated April 21, 2026 • 2026 Edition
AMD MI300X GPU Specifications

Technical Specifications

19,456

Stream Processors

2100

Base MHz

2100

Boost MHz

192GB HBM3

8192-bit bus

Performance

163.4

FP32 TFLOPS

1307.4

FP16 TFLOPS

750W

TDP

Cloud Availability

1

Available Instances

$14.80/hr

Starting Price

Detailed Specifications

Architecture CDNA 3
Release Date 2023-12-06
Launch Price $2,000.00
Process 5nm (compute) + 6nm (I/O) FinFET
Transistors 153B

AI Features

CDNA 3

Matrix Cores

Not Applicable (NVIDIA-specific)

Transformer Engine

Supported (via ROCm)

Flash Attention

Physical Specifications

Passive

Cooling

OAM Module

Form Factor

About AMD MI300X GPU

The AMD MI300X is a powerful GPU designed for AI/ML workloads, offering exceptional performance for both training and inference tasks. With 192GB of VRAM and 19,456 stream processors, it provides the memory capacity and computational power needed for modern deep learning models.
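
As a rough illustration of what 192GB of VRAM allows, the sketch below estimates how large a model's weights can fit for inference at different precisions. This is a back-of-the-envelope calculation only; the 20% overhead reserve is an assumed figure, and real deployments also need memory for KV cache and activations.

```python
# Back-of-the-envelope sizing: how many model parameters fit in 192 GB
# of VRAM for inference, as a function of weight precision.
# The 20% overhead reserve is an illustrative assumption, not a spec.
VRAM_GB = 192

def max_params_billions(vram_gb: float, bytes_per_param: float,
                        overhead_fraction: float = 0.2) -> float:
    """Parameters (in billions) whose weights fit after reserving overhead."""
    usable_bytes = vram_gb * 1e9 * (1 - overhead_fraction)
    return usable_bytes / bytes_per_param / 1e9

for name, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name}: ~{max_params_billions(VRAM_GB, nbytes):.0f}B parameters")
```

By this estimate, a single MI300X can hold roughly a 70B-parameter model in fp16 with room to spare, which is why it is often used for single-GPU serving of large language models.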

Released in 2023, the AMD MI300X features the CDNA 3 architecture with dedicated Matrix Cores for accelerated mixed-precision math. This makes it ideal for large language models, computer vision tasks, and generative AI applications.

When considering cloud rental options for the AMD MI300X, pricing starts at $14.80/hour from various providers. This GPU offers excellent price-to-performance for AI training workloads, with its high memory bandwidth of 5300 GB/s enabling fast data transfer for large datasets.
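
The quoted starting rate makes budgeting straightforward. Below is a minimal cost estimator at the $14.80/hr figure from this page; actual provider rates vary, and per-second billing is approximated here as fractional hours.

```python
# Simple rental-cost estimate at the page's quoted starting rate.
# Rates differ by provider; per-second billing is modeled as fractional hours.
RATE_PER_HOUR = 14.80

def job_cost(hours: float, rate: float = RATE_PER_HOUR) -> float:
    """Cost in USD of renting one MI300X for the given number of hours."""
    return round(hours * rate, 2)

print(job_cost(8))       # an 8-hour fine-tuning run
print(job_cost(24 * 7))  # a full week of continuous use
```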

The AMD MI300X runs on AMD's ROCm software stack and is compatible with all major deep learning frameworks including PyTorch, TensorFlow, and JAX. Its 5nm/6nm FinFET manufacturing process ensures efficient power consumption relative to performance output.
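
One practical consequence of ROCm support is that PyTorch's ROCm build exposes AMD GPUs through the familiar `torch.cuda` API, so most existing CUDA-targeted code runs unchanged. A minimal availability check, written to degrade gracefully when PyTorch is not installed:

```python
# Check whether a GPU-capable PyTorch build is present. On ROCm builds,
# torch.cuda.is_available() returns True for AMD GPUs such as the MI300X,
# since ROCm reuses the torch.cuda device namespace.
import importlib.util

def gpu_backend_available() -> bool:
    """Return True if PyTorch is installed and reports a usable GPU."""
    if importlib.util.find_spec("torch") is None:
        return False  # PyTorch not installed at all
    import torch
    return torch.cuda.is_available()

print(gpu_backend_available())
```

On an MI300X instance with the ROCm build of PyTorch, this prints `True`; tensors are then moved to the accelerator with the usual `.to("cuda")` calls.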

Rent AMD MI300X from Our Partners

Get started quickly with these trusted GPU cloud providers. We may earn a commission when you sign up.

Thunder Compute

Starting from $14.80/hr

Per-second billing, great for testing

Sign Up & Get $10 →

RunPod

Starting from $14.80/hr

Serverless with fast cold starts

Start on RunPod →

Vast.ai

Starting from $14.80/hr

Lowest prices on the market

Browse Vast.ai →

External Resources

Learn more about GPUs from these authoritative sources:

AMD ROCm Documentation →

Official ROCm programming and deployment guides

AMD Instinct MI300X Product Page →

Official AMD Instinct accelerator specifications

TechPowerUp GPU Database →

Comprehensive GPU specifications

Top GPUs for Training and Inference

Category Rank 1 Rank 2 Rank 3
Best for Training NVIDIA H200 NVIDIA H100 NVIDIA B200
Best for Inference NVIDIA A40 NVIDIA A100 NVIDIA A10

Compare GPU specifications and cloud instances to find the best GPU for your workload.