NVIDIA RTX 4080 Super (Consumer/Gaming)

10,240 CUDA Cores • 16GB VRAM • 736 GB/s Memory Bandwidth
Updated April 21, 2026 • 2026 Edition
RTX 4080 Super GPU Specifications

Technical Specifications

CUDA Cores: 10,240
Base Clock: 2295 MHz
Boost Clock: 2550 MHz
Memory: 16GB GDDR6X on a 256-bit bus

Performance

FP32: 40 TFLOPS
FP16: 80 TFLOPS
TDP: 320W

Cloud Availability

Available Instances: 1
Starting Price: $0.20/hr

Detailed Specifications

Architecture: Ada Lovelace
Release Date: 2024-01-31
Launch Price: $999.00
Process: 5nm
Transistors: 45.9 Billion

AI Features

Tensor Cores: Gen 4
Transformer Engine: Disabled
Flash Attention: Supported

About RTX 4080 Super GPU

The NVIDIA RTX 4080 Super is a consumer GPU that also performs well on AI/ML workloads, handling both training and inference. With 16GB of VRAM and 10,240 CUDA cores, it provides the memory capacity and computational power needed for small-to-mid-sized deep learning models.
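As a rough illustration of what 16GB of VRAM allows, here is a back-of-envelope sketch (the parameter counts and dtype sizes are standard figures, not taken from this page) of whether a model's raw weights alone fit in memory:

```python
# Rough sketch: estimate whether a model's weights fit in the RTX 4080
# Super's 16 GB of VRAM. Real usage also needs headroom for activations,
# optimizer state, and the KV cache, so this is only a lower bound.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def weights_fit(num_params: float, dtype: str, vram_gb: float = 16.0) -> bool:
    """Return True if the raw weights alone fit in VRAM."""
    weight_gb = num_params * BYTES_PER_PARAM[dtype] / 1e9
    return weight_gb <= vram_gb

# A 7B-parameter model in fp16 needs ~14 GB, so it just fits;
# the same model in fp32 (~28 GB) does not.
```

This is why 16GB cards are typically paired with fp16 or quantized (int8/int4) weights for larger models.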

Released in January 2024, the RTX 4080 Super is built on the Ada Lovelace architecture with fourth-generation Tensor Cores that accelerate mixed-precision math. This makes it well suited to large language model inference, computer vision tasks, and generative AI applications.

When considering cloud rental options for the RTX 4080 Super, pricing starts at $0.20/hour from various providers. This GPU offers excellent price-to-performance for AI training workloads, with its high memory bandwidth of 736 GB/s enabling fast data transfer for large datasets.
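To make the bandwidth and pricing figures concrete, a quick sketch (using only the 736 GB/s and $0.20/hr numbers quoted above; the dataset size and run length are illustrative assumptions):

```python
# Back-of-envelope estimates for the RTX 4080 Super as a cloud rental:
# how long the memory bus takes to stream a dataset, and what a run costs.

BANDWIDTH_GBS = 736.0   # memory bandwidth quoted for the RTX 4080 Super
PRICE_PER_HR = 0.20     # quoted starting cloud price

def stream_time_s(dataset_gb: float, passes: int = 1) -> float:
    """Lower bound on time to move the data through VRAM, once per pass."""
    return dataset_gb * passes / BANDWIDTH_GBS

def rental_cost(hours: float) -> float:
    """Total rental cost at the starting price."""
    return hours * PRICE_PER_HR

# Streaming a 73.6 GB dataset once takes at least 0.1 s of memory time,
# and a 10-hour run costs $2.00 at the starting price.
```

Actual throughput depends on kernel efficiency and PCIe/host transfer, so treat the streaming time as a theoretical floor, not a benchmark.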

The RTX 4080 Super has CUDA compute capability 8.9 (sm_89) and is compatible with all major deep learning frameworks, including PyTorch, TensorFlow, and JAX. Its 5nm-class manufacturing process keeps power consumption efficient relative to performance output.
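Compute capability matters in practice because framework builds ship kernels compiled for a minimum capability. A minimal sketch of that check (the capability values are NVIDIA's published figures; the FP8 threshold is an assumption based on Ada/Hopper being the first architectures with FP8 Tensor Cores):

```python
# Illustrative check: does a GPU architecture meet the minimum CUDA
# compute capability a prebuilt kernel or framework wheel requires?

COMPUTE_CAPABILITY = {
    "Ampere": (8, 6),        # e.g. RTX 30-series
    "Ada Lovelace": (8, 9),  # RTX 40-series, incl. the 4080 Super
    "Hopper": (9, 0),        # H100 / H200
}

def supports(arch: str, minimum: tuple) -> bool:
    """True if the architecture meets the minimum compute capability."""
    return COMPUTE_CAPABILITY[arch] >= minimum

# FP8 Tensor Core kernels typically require capability >= (8, 9),
# which Ada Lovelace meets but consumer Ampere does not.
```

On a live system the same value comes from `torch.cuda.get_device_capability()`, which returns `(8, 9)` on an RTX 4080 Super.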

Rent RTX 4080 Super from Our Partners

Get started quickly with these trusted GPU cloud providers. We may earn a commission when you sign up.

Thunder Compute

Starting from $0.20/hr

Per-second billing, great for testing

Sign Up & Get $10 →

RunPod

Starting from $0.20/hr

Serverless with fast cold starts

Start on RunPod →

Vast.ai

Starting from $0.20/hr

Lowest prices on the market

Browse Vast.ai →

External Resources

Learn more about GPUs from these authoritative sources:

NVIDIA CUDA Documentation →

Official CUDA programming guide

NVIDIA GPU Specifications →

Official NVIDIA GPU specs

TechPowerUp GPU Database →

Comprehensive GPU specifications

CUDA Compute Capability Guide →

GPU compute capability reference

Top GPUs for Training and Inference

Category            Rank 1       Rank 2       Rank 3
Best for Training   NVIDIA H200  NVIDIA H100  NVIDIA B200
Best for Inference  NVIDIA A40   NVIDIA A100  NVIDIA A10

Compare GPU specifications and cloud instances to find the best GPU for your workload.