GPU Guide · March 14, 2026 · 10 min read

H100 GPU Pricing Guide 2026: Every Provider Compared

The NVIDIA H100 remains the workhorse of AI training and inference. But prices have dropped 70%+ from 2023 peaks. Here's every provider compared, with hidden costs exposed.

H100 SXM 80GB — Full Price Comparison

Prices as of March 2026, sorted from cheapest to most expensive on-demand pricing:

| Provider | Type | On-Demand | Spot/Preemptible | Monthly (24/7) |
|---|---|---|---|---|
| Vast.ai | Marketplace | $1.49/hr | $1.19/hr | $1,073 |
| Lambda | Specialized | $1.85/hr | N/A | $1,332 |
| FluidStack | Marketplace | $2.15/hr | $1.72/hr | $1,548 |
| CoreWeave | Specialized | $2.23/hr | N/A | $1,606 |
| DigitalOcean | Cloud | $2.50/hr | N/A | $1,800 |
| TensorDock | Marketplace | $2.59/hr | $1.81/hr | $1,865 |
| RunPod | Marketplace | $2.69/hr | $1.89/hr | $1,937 |
| Vultr | Cloud | $2.85/hr | N/A | $2,052 |
| RunPod Secure | Managed | $3.29/hr | N/A | $2,369 |
| GCP | Hyperscaler | $3.67/hr | $1.47/hr | $2,642 |
| AWS (p5) | Hyperscaler | $3.93/hr | $1.57/hr | $2,830 |
| CoreWeave | Specialized | $4.76/hr | N/A | $3,427 |
| Azure | Hyperscaler | $6.98/hr | $2.79/hr | $5,026 |
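The monthly column works out to the hourly rate times a 720-hour month (30 days × 24 hours). A quick sketch of the arithmetic:

```python
# Monthly 24/7 cost assuming a 720-hour month (30 days x 24 hours),
# which matches the table above (e.g. $1.49/hr -> $1,073/mo).
HOURS_PER_MONTH = 720

def monthly_cost(hourly_rate: float) -> int:
    """Round the 24/7 monthly bill to the nearest dollar."""
    return round(hourly_rate * HOURS_PER_MONTH)

rates = {"Vast.ai": 1.49, "Lambda": 1.85, "Azure": 6.98}
for provider, rate in rates.items():
    print(f"{provider}: ${monthly_cost(rate):,}/mo")
```

Note that some billing pages assume a 730-hour month instead, which adds roughly 1.4% to these figures.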

Price Drop Context

In mid-2023, H100s were $7.50-$11.00/hr on-demand. Today's cheapest, at $1.49/hr, represents an 80% drop. AWS's 44% price cut in June 2025 triggered an industry-wide correction that's still playing out.

Best Deals Right Now

Based on current pricing and reliability, here are our top picks:

Vast.ai: $1.49/hr on-demand (View H100s ↗)
Lambda: $1.85/hr on-demand (View H100s ↗)
RunPod: $2.69/hr on-demand (View H100s ↗)

Perffeco may earn a commission from provider links. This does not affect our pricing data.

Hidden Costs to Watch Out For

The hourly rate is just the beginning. Here's what catches most teams off guard:

| Cost Type | Typical Range | Impact |
|---|---|---|
| Data Egress | $0.08-0.12/GB | Adds 5-15% for data-heavy workloads |
| Storage | $0.02-0.08/GB/mo | Model checkpoints add up fast (100GB+ per save) |
| InfiniBand Premium | 15-30% extra | Required for multi-GPU training |
| Spot Interruptions | Save 60-90% | But you lose progress on preemption |
| Enterprise Support | $5-15K/mo | Required for production SLAs |
| Reserved Commitment | 1-3 year terms | 30-60% savings but lock-in risk |
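To see how these line items move the sticker price, here is a rough effective-rate estimator. The example workload (2 TB of monthly egress, 500 GB of checkpoints, a 20% InfiniBand premium) is an illustrative assumption, not a provider quote:

```python
# Rough "true hourly cost" estimator that folds hidden costs into the
# sticker rate. The workload figures passed in below are illustrative
# assumptions, not measured data or provider quotes.
HOURS_PER_MONTH = 720

def effective_hourly(base_rate: float,
                     egress_gb_month: float, egress_per_gb: float,
                     storage_gb: float, storage_per_gb_month: float,
                     infiniband_pct: float = 0.0) -> float:
    """Base rate plus InfiniBand premium, with egress and storage
    amortized over a 720-hour month."""
    hourly = base_rate * (1 + infiniband_pct)
    hourly += (egress_gb_month * egress_per_gb) / HOURS_PER_MONTH
    hourly += (storage_gb * storage_per_gb_month) / HOURS_PER_MONTH
    return round(hourly, 2)

# Example: a $2.23/hr H100 with 2 TB egress, 500 GB of checkpoints,
# and a 20% InfiniBand premium for multi-GPU training.
print(effective_hourly(2.23, 2000, 0.10, 500, 0.05, 0.20))
```

On these assumptions the "true" rate lands around a third above the advertised price, which is why quoting the sticker rate alone understates most real bills.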

When to Use Spot vs On-Demand
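Whether spot actually saves money depends on how often you're preempted and how much progress each interruption throws away. Here is a minimal back-of-envelope sketch; the interruption rate and checkpoint interval are illustrative assumptions, not measured figures:

```python
# Effective spot cost per *useful* hour: if you checkpoint every
# `ckpt_interval_hr` hours and lose, on average, half an interval per
# preemption, some paid hours produce no retained progress.
def effective_spot_rate(spot_rate: float,
                        interruptions_per_day: float,
                        ckpt_interval_hr: float) -> float:
    lost_hr_per_day = interruptions_per_day * ckpt_interval_hr / 2
    useful_hr_per_day = 24 - lost_hr_per_day
    return spot_rate * 24 / useful_hr_per_day

# GCP spot at $1.47/hr, assuming ~2 preemptions/day and hourly
# checkpoints: effective rate ~ $1.53 per useful hour, still well
# under the $3.67/hr on-demand price.
print(round(effective_spot_rate(1.47, 2, 1.0), 2))
```

The rule of thumb this implies: the more often you checkpoint, the closer the effective spot rate stays to the sticker rate, while long gaps between checkpoints can erase much of the spot discount.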

H100 vs Alternatives: Should You Even Use H100s?

Depending on your workload, cheaper GPUs might be a better fit:

| GPU | Best For | From | vs H100 |
|---|---|---|---|
| A100 80GB | Inference, fine-tuning | $1.10/hr | 26% cheaper, 60% perf |
| L40S 48GB | Inference, small models | $0.79/hr | 47% cheaper, 40% perf |
| RTX 4090 | Dev/testing, small inference | $0.34/hr | 77% cheaper, 25% perf |
| H200 141GB | Large models, 1.4x VRAM | $3.49/hr | More VRAM, 20% faster |
| B200 192GB | Next-gen, 2.4x VRAM | $2.99/hr | Newer arch, 2x faster |
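Using the table's rough relative-performance figures, a crude performance-per-dollar comparison looks like this (the perf numbers are coarse ballparks, not benchmark results):

```python
# Performance per dollar from the table's relative-performance
# estimates (H100 = 1.00). These are rough ballpark figures, not
# benchmarks, so treat the ranking as directional only.
gpus = {
    # name: (price in $/hr, relative perf vs H100)
    "H100 80GB": (1.49, 1.00),
    "A100 80GB": (1.10, 0.60),
    "L40S 48GB": (0.79, 0.40),
    "RTX 4090":  (0.34, 0.25),
}

for name, (price, perf) in sorted(
        gpus.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True):
    print(f"{name}: {perf / price:.2f} perf units per $/hr")
```

By this crude metric the consumer RTX 4090 actually leads on raw perf per dollar, but it lacks the VRAM and interconnect that training workloads need, which is why the H100 still dominates at scale.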

Bottom Line

The H100 market is a buyer's market in 2026. Prices have cratered 70%+ from peaks, and competition between marketplace providers (Vast.ai, RunPod) and hyperscalers is driving further reductions.

For most teams, Vast.ai or Lambda offer the best value for on-demand H100s. Use spot instances on GCP/AWS for interruptible workloads, and consider A100s or L40S for inference-only deployments where you don't need full H100 power.

Compare All GPU Prices in Real Time

Track live pricing across 12 providers and 32 GPU configurations, with hidden-cost analysis and a GPU cost calculator.

Open GPU Economics Dashboard
