NVIDIA H100

16,896

CUDA Cores

80GB

VRAM

3,350

GB/s

Data Center
Updated April 21, 2026 • 2026 Edition
H100 GPU Specifications

Technical Specifications

16,896

CUDA Cores

1,590

Base MHz

1,980

Boost MHz

80GB HBM3

5120-bit bus

Performance

67

FP32 TFLOPS

1,979

FP16 Tensor TFLOPS (with sparsity)

700W

TDP

Cloud Availability

8

Available Instances

$1.47/hr

Starting Price

Detailed Specifications

Architecture Hopper
Release Date 2022-09-20
Launch Price $30,000
Process TSMC 4N (4nm)
Transistors 80B

AI Features

Gen 4

Tensor Cores

Enabled

Transformer Engine

Supported

Flash Attention

Physical Specifications

Dimensions (PCIe card)

10.5in

Length

4.4in

Width

2-slot

Height

About H100 GPU

The NVIDIA H100 is a powerful GPU designed for AI/ML workloads, offering exceptional performance for both training and inference. With 80GB of HBM3 VRAM and 16,896 CUDA cores (SXM5 variant), it provides the memory capacity and computational power needed for modern deep learning models.
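A back-of-the-envelope way to relate VRAM capacity to model size is to divide the usable memory by the bytes each parameter occupies. The sketch below is illustrative only; the 20% overhead reserved for activations and workspace is an assumption, not a measured figure.

```python
def max_params_billions(vram_gb: float, bytes_per_param: float, overhead: float = 0.2) -> float:
    """Rough estimate of the largest model (in billions of parameters)
    whose weights alone fit in VRAM, reserving `overhead` fraction of
    memory for activations and workspace. Illustrative assumption only."""
    usable_bytes = vram_gb * (1 - overhead) * 1e9  # bytes left for weights
    return usable_bytes / bytes_per_param / 1e9

# 80 GB card, FP16 weights (2 bytes per parameter):
print(round(max_params_billions(80, 2), 1))  # ~32 B parameters
# FP32 weights (4 bytes per parameter):
print(round(max_params_billions(80, 4), 1))  # ~16 B parameters
```

In practice, training needs several times more memory per parameter (optimizer states, gradients), which is why multi-GPU setups are common even for models that nominally "fit" in 80GB.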

Released in 2022, the H100 is built on the Hopper architecture with advanced AI accelerators, including fourth-generation Tensor Cores and the Transformer Engine. This makes it well suited to large language models, computer vision tasks, and generative AI applications.

When considering cloud rental options for the H100, pricing starts at $1.47/hour from various providers. This GPU offers strong price-to-performance for AI training workloads, with its high memory bandwidth of 3,350 GB/s enabling fast data movement for large datasets.
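The quoted hourly rate makes run costs easy to estimate: multiply hours by rate by GPU count. A minimal sketch, using the page's $1.47/hr starting price as a default (actual provider pricing varies):

```python
def rental_cost(hours: float, rate_per_hour: float = 1.47, gpus: int = 1) -> float:
    """Estimated on-demand cost for a run. The $1.47/hr default is the
    starting rate quoted on this page; real pricing varies by provider,
    region, and billing granularity."""
    return round(hours * rate_per_hour * gpus, 2)

# A 24-hour single-GPU fine-tuning run at the quoted starting rate:
print(rental_cost(24))           # 35.28
# The same 24 hours on an 8-GPU node:
print(rental_cost(24, gpus=8))   # 282.24
```

Per-second billing (offered by some providers listed below) mainly matters for short, bursty jobs; for multi-day training runs the hourly rate dominates.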

The H100 has CUDA compute capability 9.0 and is compatible with all major deep learning frameworks, including PyTorch, TensorFlow, and JAX. Its TSMC 4N (4nm) manufacturing process keeps power consumption efficient relative to performance output.
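Compute capability is what frameworks use to decide which kernels a GPU supports; Hopper reports major version 9. A small sketch mapping capability to architecture family, with an optional PyTorch query that only runs when PyTorch and a CUDA device are actually present (the coarse grouping in the dictionary is my own simplification):

```python
# Simplified mapping from compute-capability major version to
# architecture family (assumption: coarse grouping for illustration).
ARCH_BY_MAJOR = {7: "Volta/Turing", 8: "Ampere/Ada", 9: "Hopper"}

def arch_name(major: int, minor: int) -> str:
    """Return the architecture family for a compute capability tuple."""
    return ARCH_BY_MAJOR.get(major, "unknown")

print(arch_name(9, 0))  # Hopper -- what an H100 reports

try:
    import torch  # optional: query the real device when available
    if torch.cuda.is_available():
        print(torch.cuda.get_device_capability(0))  # e.g. (9, 0) on H100
except ImportError:
    pass  # PyTorch not installed; the pure-Python part above still works
```

This kind of check is useful before enabling Hopper-specific features (e.g. FP8 via the Transformer Engine) in framework code.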

Rent H100 from Our Partners

Get started quickly with these trusted GPU cloud providers. We may earn a commission when you sign up.

Thunder Compute

Starting from $1.47/hr

Per-second billing, great for testing

Sign Up & Get $10 →

RunPod

Starting from $1.47/hr

Serverless with fast cold starts

Start on RunPod →

Vast.ai

Starting from $1.47/hr

Lowest prices on the market

Browse Vast.ai →

External Resources

Learn more about GPUs from these authoritative sources:

NVIDIA CUDA Documentation →

Official CUDA programming guide

NVIDIA GPU Specifications →

Official NVIDIA GPU specs

TechPowerUp GPU Database →

Comprehensive GPU specifications

CUDA Compute Capability Guide →

GPU compute capability reference

Top GPUs for Training and Inference

Category Rank 1 Rank 2 Rank 3
Best for Training NVIDIA H200 NVIDIA H100 NVIDIA B200
Best for Inference NVIDIA A40 NVIDIA A100 NVIDIA A10

Compare GPU specifications and cloud instances to find the best GPU for your workload.