Marketplace GPU cloud with industry-leading pricing. No waitlists, quotas, or price gouging. Pay-per-second billing starting from $5 deposit. 100+ locations across 20+ countries. 30,000+ GPUs available including 45 different models. 99.99% uptime standard.
| Name | tensordock |
|---|---|
| Total Instances | 10 |
| GPU Models | 9 |
| Minimum Price | $0.12/hr |
| Maximum VRAM | 80 GB |
| Available GPU Models | H100, A800 80GB, Tesla V100, L40S, RTX 4090, RTX 3090, RTX 6000 Ada, A6000, RTX A4000 |
Ready to rent GPUs from tensordock? Sign up now to explore available instances and start your AI workloads.
Locations: North America, Europe, Asia, and Oceania, including the United States, Canada, Germany, the United Kingdom, France, the Netherlands, Singapore, Japan, and Australia.
Pay-per-second GPU cloud with no waitlists
| Accelerator | Price/Hour | VRAM |
|---|---|---|
| H100 | $2.27 | 80 GB |
| A800 80GB | $1.81 | 80 GB |
| A800 80GB | $1.51 | 80 GB |
| Tesla V100 | $0.18 | 16 GB |
| L40S | $0.96 | 48 GB |
| RTX 4090 | $0.36 | 24 GB |
| RTX 3090 | $0.21 | 24 GB |
| RTX 6000 Ada | $0.77 | 48 GB |
| A6000 | $0.47 | 48 GB |
| RTX A4000 | $0.12 | 16 GB |
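As a rough illustration of shortlisting an instance from the table above, a small helper can filter the listings by minimum VRAM and pick the cheapest match. The prices and model names are copied from this listing, not fetched from a live API, and the helper itself is hypothetical:

```python
# Hypothetical helper for shortlisting GPUs from the pricing table above.
# Data is a static copy of the listing: (model, price_per_hour_usd, vram_gb).
LISTINGS = [
    ("H100", 2.27, 80),
    ("A800 80GB", 1.81, 80),
    ("A800 80GB", 1.51, 80),
    ("Tesla V100", 0.18, 16),
    ("L40S", 0.96, 48),
    ("RTX 4090", 0.36, 24),
    ("RTX 3090", 0.21, 24),
    ("RTX 6000 Ada", 0.77, 48),
    ("A6000", 0.47, 48),
    ("RTX A4000", 0.12, 16),
]

def cheapest(min_vram_gb: int):
    """Return (model, price/hr) of the cheapest listing with enough VRAM."""
    candidates = [(price, name) for name, price, vram in LISTINGS
                  if vram >= min_vram_gb]
    price, name = min(candidates)
    return name, price

print(cheapest(24))  # cheapest card with at least 24 GB of VRAM
print(cheapest(80))  # cheapest 80 GB card
```

Swap in current prices from the provider before relying on the result, since marketplace rates change frequently.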
tensordock is a leading GPU cloud provider offering 10 instances across 9 different GPU models. With pricing starting at $0.12/hour, they provide competitive options for AI training, inference, and high-performance computing workloads.
Their infrastructure spans 13 regions, making it easy to deploy GPU instances close to your users or data sources. The provider supports popular NVIDIA GPUs, including the H100, A800 80GB, and A6000, enabling a wide range of AI/ML applications from deep learning training to real-time inference.
When choosing tensordock for your GPU cloud needs, consider factors like pricing, regional availability, and supported GPU models. Their platform integrates with popular ML frameworks like PyTorch, TensorFlow, and JAX, making it straightforward to migrate existing workloads or start new projects.
For cost optimization, compare tensordock's pricing with other providers using our cost estimator tool. Many users find that tensordock offers competitive rates for long-running training jobs or high-throughput inference workloads, especially when utilizing their spot or preemptible instance options.
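Per-second billing makes short jobs straightforward to estimate: cost is simply the hourly rate prorated by runtime. A minimal sketch of that arithmetic (the function name is illustrative, and actual invoices may apply different rounding rules, so treat this as an estimate only):

```python
def job_cost(price_per_hour: float, seconds: int) -> float:
    """Estimate the cost of a job under per-second billing.

    Prorates the hourly rate by elapsed seconds; rounding here is
    for display only and may differ from the provider's billing.
    """
    return round(price_per_hour / 3600 * seconds, 4)

# e.g. 90 minutes on an RTX 4090 at the listed $0.36/hr
print(job_cost(0.36, 90 * 60))
```

At these rates, even multi-hour experiments on mid-range cards stay under a dollar, which is why comparing per-second pricing across providers is worthwhile for bursty workloads.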
Learn more about GPUs from these authoritative sources:
- CUDA Programming Guide: official CUDA programming guide
- NVIDIA GPU Specifications: official NVIDIA GPU specs
- TechPowerUp GPU Database: comprehensive GPU specifications
- CUDA Compute Capability Guide: GPU compute capability reference
Visit tensordock's website to create an account and start using their GPU instances.
Visit tensordock →