| Specification | Value |
|---|---|
| CUDA Cores | 18,176 |
| Base Clock | 1,155 MHz |
| Boost Clock | 2,520 MHz |
| VRAM | 48GB GDDR6 |
| Memory Bus | 384-bit |
| Memory Bandwidth | 864 GB/s |
| FP32 Performance | 90 TFLOPS |
| FP16 Performance | 180 TFLOPS |
| TDP | 300W |
| Available Instances | 4 |
| Starting Price | $0.32/hr |
| Detail | Value |
|---|---|
| Architecture | Ada Lovelace |
| Release Date | 2023-03-21 |
| Launch Price | $2,000.00 |
| Process | 4nm |
| Transistors | 76.3B |
| AI Feature | Support |
|---|---|
| Tensor Cores | Gen 4 |
| Transformer Engine | Enabled |
| Flash Attention | Not Supported |
| Dimension | Value |
|---|---|
| Length | 10.5 in |
| Width | 4.4 in |
| Height | 2-slot |
The NVIDIA L40 is a powerful GPU designed for AI/ML workloads, offering exceptional performance for both training and inference tasks. With 48GB of VRAM and 18,176 CUDA cores, it provides the memory capacity and computational power needed for modern deep learning models.
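For a rough sense of whether a given model fits in the L40's 48GB of VRAM, you can estimate the weight footprint from parameter count and precision. The sketch below is a back-of-the-envelope approximation only: it counts weights alone (ignoring activations, KV caches, and framework overhead), and the parameter counts used are hypothetical examples.

```python
# Rough VRAM estimate for inference: parameters x bytes per parameter.
# Weights only; activations, KV cache, and framework overhead are ignored,
# so treat the result as a lower bound, not a guarantee.

def weights_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight memory in GB (FP16/BF16 = 2 bytes per parameter)."""
    return num_params * bytes_per_param / 1e9

L40_VRAM_GB = 48

for params in (7e9, 13e9, 30e9):  # hypothetical model sizes
    need = weights_gb(params)
    verdict = "fits" if need < L40_VRAM_GB else "does not fit"
    print(f"{params / 1e9:.0f}B params ~ {need:.0f} GB of weights -> {verdict} in {L40_VRAM_GB} GB")
```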
Released in 2023, the L40 is built on the Ada Lovelace architecture with AI accelerators including fourth-generation Tensor Cores and Transformer Engine support. This makes it well suited to large language models, computer vision tasks, and generative AI applications.
When considering cloud rental options for the L40, pricing starts at $0.32/hour from various providers. This GPU offers excellent price-to-performance for AI training workloads, with its high memory bandwidth of 864 GB/s enabling fast data transfer for large datasets.
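To budget a rental at the listed $0.32/hour starting price, the cost is simply rate × hours × number of GPUs. The sketch below uses illustrative job durations; actual prices vary by provider and region.

```python
# Estimated rental cost = hourly rate x hours x number of GPUs.
# $0.32/hr is the starting price quoted above, not a firm quote.

def rental_cost(hours: float, rate_per_hour: float = 0.32, gpus: int = 1) -> float:
    return hours * rate_per_hour * gpus

print(f"24h fine-tuning run on 1 GPU: ${rental_cost(24):.2f}")
print(f"One week on 4 GPUs:           ${rental_cost(24 * 7, gpus=4):.2f}")
```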
The L40 has CUDA compute capability 8.9 and is compatible with all major deep learning frameworks, including PyTorch, TensorFlow, and JAX. Its 4nm manufacturing process keeps power consumption efficient relative to performance output.
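On a rented L40 instance, a quick way to confirm that the framework sees the card and reports the expected compute capability is a short PyTorch check. This is a minimal sketch assuming a CUDA-enabled build of PyTorch is installed.

```python
import torch

# Minimal environment check on an L40 instance (assumes a CUDA build of PyTorch).
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU:                {props.name}")
    print(f"Compute capability: {major}.{minor}")  # expect 8.9 on Ada Lovelace
    print(f"Total VRAM:         {props.total_memory / 1e9:.1f} GB")
else:
    print("No CUDA device visible; check drivers and the CUDA build of PyTorch.")
```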
Learn more about GPUs from these authoritative sources:
- CUDA Programming Guide → Official CUDA programming guide
- NVIDIA GPU Specifications → Official NVIDIA GPU specs
- TechPowerUp GPU Database → Comprehensive GPU specifications
- CUDA Compute Capability Guide → GPU compute capability reference
| Category | Rank 1 | Rank 2 | Rank 3 |
|---|---|---|---|
| Best for Training | NVIDIA H200 | NVIDIA H100 | NVIDIA B200 |
| Best for Inference | NVIDIA A40 | NVIDIA A100 | NVIDIA A10 |
Compare GPU specifications and cloud instances to find the best GPU for your workload.