The NVIDIA DGX B200 is a data center AI system built on the Blackwell architecture. Released on March 18, 2024 at a launch price of $250,000.00, it features 108,800 CUDA cores and 1024 GB of HBM3 memory.
| Specification | Value |
| --- | --- |
| Architecture | Blackwell (B200) |
| Market Segment | Data Center |
| Release Date | 2024-03-18 |
| Launch Price | $250,000.00 |
| Manufacturing Process | 4 nm |
| CUDA Cores | 108,800 |
| Base Clock Speed | 1300 MHz |
| Boost Clock Speed | 2100 MHz |
| Transistor Count | 800 billion |
| VRAM Capacity | 1024 GB HBM3 |
| Memory Bus Width | 40,960 bits |
| Memory Bandwidth | 24,000 GB/s |
| TDP | 8000 W |
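The bandwidth figure in the table can be sanity-checked against the bus width. This is a back-of-the-envelope sketch using only the listed numbers; the per-pin data rate is derived here, not an official specification:

```python
# Derive the implied per-pin data rate from the listed bus width and bandwidth.
bus_width_bits = 40960      # Memory Bus Width, from the spec table
bandwidth_gbs = 24000.0     # Memory Bandwidth in GB/s, from the spec table

# bandwidth (GB/s) = bus_width (bits) * per_pin_rate (Gb/s) / 8 bits-per-byte
per_pin_gbps = bandwidth_gbs * 8 / bus_width_bits
print(f"Implied per-pin data rate: {per_pin_gbps:.4f} Gb/s")  # 4.6875 Gb/s
```

A per-pin rate of about 4.7 Gb/s is consistent with HBM3-class memory, so the listed bus width and bandwidth agree with each other.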
Key technical parameters of the NVIDIA DGX B200 include its 108,800 CUDA cores, 1300 MHz base clock, and 2100 MHz boost clock, delivering high performance for data center applications.
The NVIDIA DGX B200 achieves 400 TFLOPS of FP32 performance, making it well suited to data center workloads.
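As a rough cross-check, peak FP32 throughput is commonly estimated as two FLOPs (one fused multiply-add) per CUDA core per boost clock. This rule of thumb is an assumption, not NVIDIA's published methodology:

```python
# Back-of-the-envelope FP32 estimate from the spec-table figures.
cuda_cores = 108_800            # from the spec table
boost_clock_ghz = 2.1           # 2100 MHz boost clock
flops_per_core_per_clock = 2    # assumed: one FMA = 2 FLOPs per core per cycle

peak_tflops = cuda_cores * boost_clock_ghz * flops_per_core_per_clock / 1000
print(f"Estimated peak FP32: {peak_tflops:.1f} TFLOPS")  # ~457 TFLOPS
```

The estimate (~457 TFLOPS) lands in the same ballpark as the listed 400 TFLOPS; the gap is expected, since real sustained clocks sit below the boost figure.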
The NVIDIA DGX B200 is not designed for gaming. It is optimized for AI training, machine learning, and high-performance computing workloads.
| Interconnect | NVLink 4, PCIe Gen 5 |
Length | 20 in |
Width | 10 in |
Height | 8U |
Ranking Position | #1 |
Popularity Ranking | #5 |
Cost Effectiveness | 0.6/5 |
Power Efficiency | 1.5/5 |
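The low cost-effectiveness and power-efficiency ratings above can be grounded in two simple ratios computed from the listed figures. These metric definitions (performance per watt, price per TFLOPS) are my assumption, not necessarily the formula behind the site's 5-point scores:

```python
# Efficiency ratios derived from the listed specs.
fp32_tflops = 400.0       # listed FP32 performance
tdp_watts = 8000.0        # listed TDP
price_usd = 250_000.0     # listed launch price

gflops_per_watt = fp32_tflops * 1000 / tdp_watts   # 50.0 GFLOPS/W
usd_per_tflops = price_usd / fp32_tflops           # 625.0 $/TFLOPS
print(f"{gflops_per_watt:.1f} GFLOPS/W, ${usd_per_tflops:.0f}/TFLOPS")
```

At the system level these ratios are modest compared with single GPUs, which is consistent with the below-average 5-point ratings shown above.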
| Category | Rank 1 | Rank 2 | Rank 3 |
| --- | --- | --- | --- |
| Best for Training | NVIDIA H200 | NVIDIA H100 | NVIDIA B200 |
| Best for Inference | NVIDIA A40 | NVIDIA A100 | NVIDIA A10 |
Compare GPU specifications and cloud instances to find the best GPU for your workload.