NVIDIA H100

The NVIDIA H100 is a data center GPU based on the Hopper architecture. Released on 2023-03-21 at a launch price of $30,000.00, it features 14,592 CUDA cores and 80 GB of HBM3 memory.


General info about H100

Architecture Hopper (GH100)
Market Segment Data Center
Release Date 2023-03-21
Launch Price $30,000.00
Manufacturing Process 4nm

Technical specs of H100

CUDA Cores 14,592
Base Clock Speed 1410 MHz
Boost Clock Speed 1830 MHz
Transistor Count 80B
VRAM Capacity 80 GB HBM3
Memory Bus Width 5120 bits
Memory Bandwidth 3000.0 GB/s
TDP 700 W
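The peak-compute and memory-bandwidth figures above imply a roofline "ridge point": the minimum arithmetic intensity a kernel needs to be compute-bound rather than bandwidth-bound. A minimal sketch, assuming the document's listed 67 TFLOPS FP32 and 3000 GB/s figures (real achievable numbers will be lower):

```python
# Roofline ridge-point estimate from the spec-sheet numbers above.
# Assumes peak FP32 = 67 TFLOPS and memory bandwidth = 3000 GB/s.

peak_fp32_flops = 67e12   # FLOP/s
mem_bandwidth = 3000e9    # bytes/s

# A kernel needs at least this many FLOPs per byte moved from HBM
# to saturate compute instead of memory bandwidth.
ridge_point = peak_fp32_flops / mem_bandwidth
print(f"ridge point ≈ {ridge_point:.1f} FLOP/byte")
```

Kernels well below roughly 22 FLOP/byte of HBM traffic are bandwidth-limited on this card.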

Key Technical Parameters

Key technical parameters of the NVIDIA H100 include its 14,592 CUDA cores, 1410 MHz base clock, and 1830 MHz boost clock, delivering high performance for data center applications.
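These parameters combine into a theoretical peak-throughput estimate: each CUDA core can retire one fused multiply-add (2 FLOPs) per cycle. A back-of-envelope sketch using the core count and boost clock listed above; note that vendor-quoted TFLOPS figures can differ, since they may assume a different SKU or clock:

```python
# Theoretical peak FP32 throughput: cores x 2 FLOPs (FMA) x boost clock.
# Upper bound only; quoted benchmark figures may assume other clocks/SKUs.

cuda_cores = 14_592
boost_clock_hz = 1_830e6  # 1830 MHz

peak_fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12
print(f"theoretical peak FP32 ≈ {peak_fp32_tflops:.1f} TFLOPS")
```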

Benchmark performance of H100

FP32 Performance 67 TFLOPS
FP16 Performance 2000 TFLOPS

The NVIDIA H100 delivers 67 TFLOPS of FP32 compute, making it well suited to AI training, inference, and high-performance computing workloads.
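Combining this throughput figure with the 700 W TDP listed earlier gives a rough power-efficiency number. A minimal sketch using only the values from this document:

```python
# FP32 performance per watt from the figures above (67 TFLOPS, 700 W TDP).
# Board-level estimate only; real efficiency depends on workload and clocks.

tflops_fp32 = 67.0
tdp_watts = 700.0

gflops_per_watt = tflops_fp32 * 1000 / tdp_watts
print(f"{gflops_per_watt:.1f} GFLOPS/W at FP32")
```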

Gaming performance of H100


The NVIDIA H100 is not designed for gaming. It is optimized for AI training, machine learning, and high-performance computing workloads.

Features and connectivity of H100

Tensor Cores Gen 4
Transformer Engine Enabled
Flash Attention Supported
Connectivity NVLink 4, PCIe Gen5

Physical dimensions of H100

Length 10.5 in
Width 4.4 in
Height 2-slot

Rankings and efficiency of H100

Ranking Position #1
Popularity Ranking #5
Cost Effectiveness 0.8/5
Power Efficiency 1.2/5

Cloud Availability

Available Instances 1
Starting Price $2.80/hr
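The starting price above translates directly into job-cost estimates. A minimal sketch, where the GPU count and duration are illustrative assumptions, not figures from this document:

```python
# Rough cloud cost estimate at the starting price above ($2.80/hr per GPU).
# GPU count and duration below are hypothetical, for illustration only.

price_per_gpu_hour = 2.80
gpus = 8          # hypothetical 8-GPU training job
hours = 24 * 7    # one week of wall-clock time

total_cost = price_per_gpu_hour * gpus * hours
print(f"estimated cost: ${total_cost:,.2f}")
```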

Top GPUs for Training and Inference

Category Rank 1 Rank 2 Rank 3
Best for Training NVIDIA H200 NVIDIA H100 NVIDIA B200
Best for Inference NVIDIA A40 NVIDIA A100 NVIDIA A10
