The NVIDIA A100 is a data center GPU based on NVIDIA's Ampere architecture. Released on May 14, 2020 at a launch price of $10,000, it features 6,912 CUDA cores and 40 GB of HBM2e memory.
Specification | Value |
---|---|
Architecture | Ampere (GA100) |
Market Segment | Data Center |
Release Date | 2020-05-14 |
Launch Price | $10,000.00 |
Manufacturing Process | 7 nm |
CUDA Cores | 6,912 |
Base Clock Speed | 1410 MHz |
Boost Clock Speed | 1410 MHz |
Transistor Count | 54.2 billion |
VRAM Capacity | 40 GB HBM2e |
Memory Bus Width | 5120-bit |
Memory Bandwidth | 1555 GB/s |
TDP | 250 W |
Key technical parameters of the NVIDIA A100 include its 6,912 CUDA cores, 1410 MHz base clock, and 1410 MHz boost clock, delivering high performance for data center applications.
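As a quick sanity check, the memory bandwidth in the table follows from the bus width and the effective per-pin data rate of the HBM stacks. The ~2.43 Gbps per-pin rate used below is an assumption (it is not listed in the table), but it reproduces the quoted 1555 GB/s:

```python
# Rough bandwidth check: bus width (bits) x per-pin data rate (Gbps) / 8 bits-per-byte.
bus_width_bits = 5120
per_pin_rate_gbps = 2.43  # assumed effective HBM data rate, not taken from the spec table

bandwidth_gb_s = bus_width_bits * per_pin_rate_gbps / 8
print(f"~{bandwidth_gb_s:.0f} GB/s")  # ~1555 GB/s, matching the table
```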
The NVIDIA A100 delivers 19.5 TFLOPS of FP32 (single-precision) performance, making it well suited to data center workloads.
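That figure lines up with the usual peak-throughput estimate of CUDA cores × 2 FLOPs per clock (one fused multiply-add) × boost clock; a minimal check using the numbers from the spec table above:

```python
# Peak FP32 estimate: cores x 2 FLOPs/cycle (FMA) x boost clock.
cuda_cores = 6912
boost_clock_hz = 1410e6  # 1410 MHz

peak_fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12
print(f"~{peak_fp32_tflops:.1f} TFLOPS")  # ~19.5 TFLOPS
```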
The NVIDIA A100 is not designed for gaming. It is optimized for AI training, machine learning, and high-performance computing workloads.
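As an illustrative (not prescriptive) sketch of putting the card to work for ML, the following PyTorch snippet checks which CUDA device is visible and runs a small matrix multiply on it; on Ampere GPUs such as the A100, recent PyTorch versions may route FP32 matmuls through TF32 tensor cores by default:

```python
import torch

if torch.cuda.is_available():
    print("CUDA device 0:", torch.cuda.get_device_name(0))  # e.g. "NVIDIA A100-SXM4-40GB"

    # Small matmul on the GPU as a smoke test of the device.
    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")
    c = a @ b
    torch.cuda.synchronize()
    print("matmul OK:", tuple(c.shape))
else:
    print("No CUDA device found")
```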
For host and multi-GPU connectivity, the A100 supports NVLink 3 and PCIe Gen 4.
Dimension | Value |
---|---|
Length | 10.5 in |
Width | 4.4 in |
Form Factor | Dual-slot |
Metric | Value |
---|---|
Ranking Position | #3 |
Popularity Ranking | #3 |
Cost Effectiveness | 1.0/5 |
Power Efficiency | 1.3/5 |
Category | Rank 1 | Rank 2 | Rank 3 |
---|---|---|---|
Best for Training | NVIDIA H200 | NVIDIA H100 | NVIDIA B200 |
Best for Inference | NVIDIA A40 | NVIDIA A100 | NVIDIA A10 |
Compare GPU specifications and cloud instances to find the best GPU for your workload.