NVIDIA A100

The NVIDIA A100 is a data center GPU based on the Ampere architecture. Released on May 14, 2020 at a launch price of $10,000, it features 6,912 CUDA cores and 40 GB of HBM2e memory.

General info about A100

Architecture Ampere (GA100)
Market Segment Data Center
Release Date 2020-05-14
Launch Price $10,000.00
Manufacturing Process 7nm

Technical specs of A100

CUDA Cores 6,912
Base Clock Speed 1410 MHz
Boost Clock Speed 1410 MHz
Transistor Count 54.2B
VRAM Capacity 40 GB HBM2e
Memory Bus Width 5,120-bit
Memory Bandwidth 1,555 GB/s
TDP 250 W
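
The listed memory bandwidth follows directly from the bus width and the memory's effective per-pin data rate. A minimal sanity check of that arithmetic; the ~2.43 Gbps pin rate is an assumption typical for this class of HBM memory, not a figure from this page:

```python
# Rough sanity check of the listed memory bandwidth.
bus_width_bits = 5120        # from the spec table above
pin_rate_gbps = 2.43         # assumed effective data rate per pin (not from this page)

bandwidth_gb_s = bus_width_bits / 8 * pin_rate_gbps
print(f"{bandwidth_gb_s:.0f} GB/s")  # ~1555 GB/s, matching the table
```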

Key Technical Parameters

Key technical parameters of the NVIDIA A100 include its 6,912 CUDA cores, 1,410 MHz boost clock, and 1,555 GB/s of memory bandwidth over a 5,120-bit bus, delivering high throughput for data center applications.
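
As a quick cross-check against the benchmark section below, peak FP32 throughput can be estimated from the core count and boost clock, assuming the usual 2 FLOPs per CUDA core per cycle (one fused multiply-add):

```python
# Estimate peak FP32 throughput from the listed shader count and boost clock.
cuda_cores = 6912
boost_clock_ghz = 1.41       # 1410 MHz from the spec table

peak_fp32_tflops = 2 * cuda_cores * boost_clock_ghz / 1e3
print(f"{peak_fp32_tflops:.1f} TFLOPS")  # ~19.5 TFLOPS, matching the FP32 benchmark
```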

Benchmark performance of A100

FP32 Performance 19.5 TFLOPS
FP16 Performance 312 TFLOPS

The NVIDIA A100 delivers 19.5 TFLOPS of FP32 throughput and 312 TFLOPS of FP16 Tensor Core throughput, making it well suited to data center workloads.
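
For a hands-on illustration of the FP16 figure, the sketch below times a large half-precision matrix multiply with PyTorch, which cuBLAS dispatches to the Tensor Cores on Ampere GPUs; the matrix size and iteration count are arbitrary choices, and achieved throughput will land somewhat below the 312 TFLOPS peak:

```python
import time
import torch

# Illustrative FP16 GEMM timing; requires a CUDA-capable GPU.
n, iters = 8192, 50
a = torch.randn(n, n, device="cuda", dtype=torch.float16)
b = torch.randn(n, n, device="cuda", dtype=torch.float16)

torch.cuda.synchronize()
start = time.time()
for _ in range(iters):
    a @ b                                  # runs on Tensor Cores via cuBLAS
torch.cuda.synchronize()
elapsed = time.time() - start

flops = 2 * n ** 3 * iters                 # 2*N^3 FLOPs per N x N matmul
print(f"~{flops / elapsed / 1e12:.0f} TFLOPS achieved (FP16 Tensor Core peak: 312)")
```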

Gaming performance of A100

The NVIDIA A100 is not designed for gaming. It is optimized for AI training, machine learning, and high-performance computing workloads.
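
As a sketch of what that optimization looks like in practice, the snippet below enables TF32 matmuls and mixed-precision training in PyTorch, both of which map onto the A100's Tensor Cores; the model, shapes, and loss are placeholders, not anything specific to this page:

```python
import torch

# On Ampere GPUs such as the A100, TF32 lets FP32 matmuls use the Tensor Cores.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

model = torch.nn.Linear(1024, 1024).cuda()     # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()           # loss scaling for FP16 autocast

x = torch.randn(64, 1024, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = model(x).square().mean()            # placeholder loss

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```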

Features and connectivity of A100

Tensor Cores Gen 3
Transformer Engine Not supported
Flash Attention Supported
Connectivity NVLink 3, PCIe Gen4
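
The interconnects above can be confirmed at runtime; here is a minimal sketch using the nvidia-ml-py (pynvml) bindings, assuming an NVIDIA driver is installed and probing up to the A100's 12 NVLink links:

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

print(pynvml.nvmlDeviceGetName(handle))        # e.g. an A100 variant
# Currently negotiated PCIe generation (Gen4 on A100 platforms).
print("PCIe gen:", pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle))

# The A100 exposes up to 12 third-generation NVLink links.
for link in range(12):
    try:
        state = pynvml.nvmlDeviceGetNvLinkState(handle, link)
        print(f"NVLink {link}: {'active' if state else 'inactive'}")
    except pynvml.NVMLError:
        break

pynvml.nvmlShutdown()
```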

Physical dimensions of A100

Length 10.5 in
Width 4.4 in
Height 2-slot

Rankings and efficiency of A100

Ranking Position #3
Popularity Ranking #3
Cost Effectiveness 1.0/5
Power Efficiency 1.3/5

Cloud Availability

Available Instances 1
Starting Price $1.29/hr
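
To put the hourly rate in context, here is a quick comparison against the launch price, assuming continuous, undiscounted on-demand usage:

```python
launch_price_usd = 10_000.00
hourly_rate_usd = 1.29

hours = launch_price_usd / hourly_rate_usd
print(f"{hours:,.0f} hours (~{hours / 24:,.0f} days of 24/7 rental) to match the launch price")
```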

Top GPUs for Training and Inference

Category             Rank 1         Rank 2         Rank 3
Best for Training    NVIDIA H200    NVIDIA H100    NVIDIA B200
Best for Inference   NVIDIA A40     NVIDIA A100    NVIDIA A10

Compare GPU specifications and cloud instances to find the best GPU for your workload.