NVIDIA H100

The NVIDIA H100 is a data center GPU based on the Hopper architecture. Released on 2023-03-21 at a launch price of $30,000.00, it features 14,592 CUDA cores and 80 GB of HBM3 memory.


General info about H100

Architecture Hopper (GH100)
Market Segment Data Center
Release Date 2023-03-21
Launch Price $30,000.00
Manufacturing Process 4 nm (TSMC 4N)

Technical specs of H100

CUDA Cores 14,592
Base Clock Speed 1410 MHz
Boost Clock Speed 1830 MHz
Transistor Count 80 billion
VRAM Capacity 80 GB HBM3
Memory Bus Width 5120-bit
Memory Bandwidth 3,000 GB/s
TDP 700 W

Key technical parameters of the NVIDIA H100 include its 14,592 CUDA cores, 1410 MHz base clock, and 1830 MHz boost clock, giving it the raw throughput needed for demanding data center workloads.
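
A hedged back-of-the-envelope sketch of how these figures relate is shown below; it applies the common rule of thumb of two FLOPs (one fused multiply-add) per CUDA core per clock. It yields a lower number than the 67 TFLOPS benchmark entry in the next section, which appears to reflect a higher-clocked configuration of the GH100 die, so treat this as an illustration of the formula rather than an official figure.

```python
# Rule-of-thumb peak-throughput estimate from the table above.
# Assumes 2 FLOPs per CUDA core per cycle (one FMA); datasheet figures
# can differ because clocks and enabled SM counts vary by SKU.

CUDA_CORES = 14_592
BOOST_CLOCK_GHZ = 1.83          # 1830 MHz
FLOPS_PER_CORE_PER_CYCLE = 2    # a fused multiply-add counts as 2 FLOPs

peak_fp32_tflops = CUDA_CORES * FLOPS_PER_CORE_PER_CYCLE * BOOST_CLOCK_GHZ / 1_000
print(f"Estimated peak FP32: {peak_fp32_tflops:.1f} TFLOPS")

# Memory-side sanity check: the per-pin data rate implied by the quoted
# 3,000 GB/s bandwidth over a 5120-bit bus.
BUS_WIDTH_BITS = 5_120
data_rate_gtps = 3_000 / (BUS_WIDTH_BITS / 8)
print(f"Implied HBM3 data rate: {data_rate_gtps:.2f} GT/s per pin")
```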

Benchmark performance of H100

FP32 (float) 67 TFLOPS
FP16 (half) 2000 TFLOPS

The NVIDIA H100 achieves 67 TFLOPS of FP32 throughput and roughly 2,000 TFLOPS of FP16 tensor-core throughput, making it a top contender for data center training and inference workloads.
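
To show where that FP16 throughput comes into play, here is a minimal mixed-precision sketch in PyTorch (the framework is an assumption of this example, not something the spec page prescribes). It assumes a CUDA-enabled PyTorch build and an NVIDIA GPU; matrix multiplies inside the autocast region run in FP16 on the tensor cores, the unit that the headline FP16 figure (typically quoted with structured sparsity) rates.

```python
import torch

# Minimal mixed-precision sketch (assumes a CUDA-enabled PyTorch build and an
# NVIDIA GPU): matmuls inside the autocast region run in FP16 on the tensor
# cores rather than the FP32 CUDA cores.

assert torch.cuda.is_available(), "this sketch expects a CUDA GPU"

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    c = a @ b  # dispatched to FP16 tensor-core kernels

print(c.dtype)  # torch.float16
```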

Gaming performance of H100

The NVIDIA H100 is not designed for gaming: it has no display outputs, and its silicon is dedicated to compute rather than graphics workloads.

Features and connectivity of H100

Tensor Cores 4th Gen
Transformer Engine Yes
FlashAttention Support Yes
Interconnect NVLink 4, PCIe Gen 5
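
The FlashAttention entry above refers to support for fused attention kernels. As a hedged illustration (PyTorch is an assumption here, and the Transformer Engine FP8 path lives in NVIDIA's separate transformer_engine library, not shown), the sketch below calls PyTorch's scaled_dot_product_attention, which can dispatch to a FlashAttention-style kernel on Hopper-class GPUs when shapes and dtypes allow it.

```python
import torch
import torch.nn.functional as F

# Hedged sketch: scaled_dot_product_attention can dispatch to a fused
# FlashAttention-style kernel on Hopper-class GPUs.
# Assumes a CUDA-enabled PyTorch 2.x build and an NVIDIA GPU.

assert torch.cuda.is_available(), "this sketch expects a CUDA GPU"

batch, heads, seq_len, head_dim = 4, 16, 2048, 64
q = torch.randn(batch, heads, seq_len, head_dim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Fused attention: softmax(q @ k^T / sqrt(d)) @ v without materializing
# the full (seq_len x seq_len) attention matrix.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([4, 16, 2048, 64])
```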

Physical dimensions of H100

Length 10.5 in
Width 4.4 in
Height 2-slot

Rankings and efficiency of H100

Ranking Position #1
Popularity Ranking #5
Cost Effectiveness 0.8/5
Power Efficiency 1.2/5

Top GPUs for Training and Inference

Best for Training: 1. NVIDIA H200, 2. NVIDIA H100, 3. NVIDIA B200
Best for Inference: 1. NVIDIA A40, 2. NVIDIA A100, 3. NVIDIA A10