DeepSeek R1 is an open-source reasoning model from DeepSeek AI. It shows its step-by-step chain-of-thought and rivals proprietary reasoning models such as OpenAI o1 on complex reasoning benchmarks. It is available as 8B, 14B, 32B, and 70B distilled variants, plus the full 671B MoE model, to match any budget.
Deploy DeepSeek R1 in minutes
Starting at $0.53/hr on dedicated GPU
| Model | GPU | VRAM | Price | Action |
|---|---|---|---|---|
| DeepSeek R1 8B Small (8B) | L4 | 24 GB | $0.53/hr | Deploy |
| DeepSeek R1 14B Medium (14B, Recommended) | L4 | 24 GB | $0.53/hr | Deploy |
| DeepSeek R1 32B Medium+ (32B) | RTX A6000 | 48 GB | $0.66/hr | Deploy |
| DeepSeek R1 70B Large (70B) | RTX A6000 | 48 GB | $0.66/hr | Deploy |
| DeepSeek V3.1 671B Full (671B MoE) | A100 80GB PCIe | 80 GB | $1.85/hr | Deploy |
Prices include 30% service fee. Billed per minute while running.
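Dedicated deployments like these commonly expose an OpenAI-compatible chat completions endpoint. The sketch below assembles such a request; the base URL, API key, and model name are placeholders for illustration, not ModelPilot's actual values, so substitute the details from your deployment dashboard:

```python
import json


def build_chat_request(prompt: str, model: str = "deepseek-r1-14b"):
    """Assemble an OpenAI-compatible /v1/chat/completions request.

    The URL and auth header below are hypothetical placeholders --
    replace them with the endpoint and key from your dashboard.
    """
    url = "https://YOUR-DEPLOYMENT.example.com/v1/chat/completions"
    headers = {
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # DeepSeek recommends moderate temperatures (~0.5-0.7) for R1
        "temperature": 0.6,
    }
    return url, headers, json.dumps(body)


url, headers, payload = build_chat_request("Why is the sky blue?")
print(json.loads(payload)["messages"][0]["content"])
```

You can send the resulting payload with any HTTP client (`curl`, `requests`, or the official OpenAI SDK pointed at your deployment's base URL).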
DeepSeek R1 requires 24–80GB VRAM depending on variant. Consumer GPUs like the RTX 5080 (16GB) or RTX 4090 (24GB) may not have enough memory for larger variants.
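As a rough rule of thumb (an approximation, not an official sizing guide): FP16 weights take about 2 bytes per parameter, and serving adds overhead for the KV cache and activations. A quick estimate shows why the 8B variant fits a 24 GB L4 while larger variants need quantization or bigger cards:

```python
def estimate_vram_gb(params_billion: float, bytes_per_param: float = 2.0,
                     overhead: float = 1.2) -> float:
    """Rough serving-VRAM estimate: weight size times an overhead
    factor for KV cache and activations. An approximation only."""
    return round(params_billion * bytes_per_param * overhead, 1)


# FP16 (2 bytes/param): 8B needs ~19 GB -> fits a 24 GB L4
print(estimate_vram_gb(8))                          # 19.2
# 4-bit quantized (~0.5 bytes/param): 70B needs ~42 GB -> 48 GB A6000
print(estimate_vram_gb(70, bytes_per_param=0.5))    # 42.0
```

The 1.2x overhead factor is an assumption; real usage depends on context length, batch size, and the serving stack.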
On ModelPilot, deploy on a dedicated cloud GPU (up to 80GB VRAM) starting at $0.53/hr with no setup required.
Starting at $0.53/hr on a dedicated GPU. Billed per minute while running, with auto-stop when credits run out.
Text models typically deploy in 5–15 minutes including model download.
You can run smaller variants locally if your GPU has enough VRAM. For larger variants or sustained production use, cloud GPUs offer more capacity and reliability.
Pick your GPU and have it running in minutes. No infrastructure setup required.