We Ran 7,600+ Cloud Provisioning Tests Across AWS, Azure, and GCP — Here's What We Found

Published: April 19, 2026 at 05:30 AM EDT
3 min read
Source: Dev.to

Introduction

Nobody publishes this data, so we measured it ourselves. Cloud providers share uptime SLAs, pricing calculators, and feature comparison tables, but they don’t reveal how long it actually takes to provision infrastructure—or how often provisioning fails. To fill that gap we built ProvisioningIQ, which continuously runs real API calls (no simulations) to provision and then destroy resources across AWS, Azure, and GCP.

Methodology

  • Scope: 7,600+ real provisioning tests (VMs or serverless containers).
  • Frequency: 3 runs per day, across three regions per cloud, running continuously since January 2026.
  • Process:
    1. Provision a real resource (VM or serverless container).
    2. Measure time at each phase: API accepted → allocating → ready → reachable.
    3. Record success/failure and failure category.
    4. Immediately destroy the resource.
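The four-step loop above can be sketched as a small timing harness. This is an illustrative sketch, not ProvisioningIQ's actual code; the phase names and the pluggable `provision`/`wait_ready`/`probe_reachable`/`destroy` callables are assumptions standing in for real cloud API calls.

```python
import time

def run_provisioning_test(provision, wait_ready, probe_reachable, destroy):
    """Run one benchmark cycle and return per-phase elapsed seconds.

    provision() issues the real API call and returns a resource handle;
    wait_ready() blocks until the provider reports the resource ready;
    probe_reachable() blocks until the resource answers (e.g. TCP/HTTP);
    destroy() tears the resource down immediately afterwards.
    """
    timings = {}
    handle = None
    t0 = time.monotonic()
    try:
        handle = provision()
        timings["api_accepted"] = time.monotonic() - t0   # API accepted
        wait_ready(handle)
        timings["ready"] = time.monotonic() - t0          # allocating -> ready
        probe_reachable(handle)
        timings["reachable"] = time.monotonic() - t0      # ready -> reachable
        timings["success"] = True
    except Exception as exc:
        timings["success"] = False
        timings["failure_category"] = type(exc).__name__  # coarse failure bucket
    finally:
        if handle is not None:
            destroy(handle)                               # step 4: immediate cleanup
    return timings
```

The `finally` clause mirrors step 4: the resource is destroyed even when a later phase times out, so failed runs don't leak billable infrastructure.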

Container Provisioning Times

| Cloud | Service   | p50 Latency | p95 Latency | Success Rate |
|-------|-----------|-------------|-------------|--------------|
| GCP   | Cloud Run | 6–8 s       | ~20 s       | 100 %        |
| AWS   | ECS       | ~20 s       | ~40 s       | 100 %        |
| Azure | ACI       | ~40 s       | ~60 s       | 100 %        |

Observation: GCP Cloud Run provisions roughly 5–7× faster than Azure ACI at the p50 level (6–8 s vs. ~40 s), and this advantage is consistent across all tested regions.
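The p50/p95 figures in these tables are plain percentiles over the collected latency samples. A minimal way to compute them, using Python's `statistics` module (the sample values below are made up for illustration):

```python
import statistics

def latency_summary(samples):
    """Return (p50, p95) for a list of latency samples in seconds."""
    # quantiles(n=20) yields 19 cut points; index 9 is the median (p50)
    # and index 18 is the 95th percentile.
    q = statistics.quantiles(samples, n=20, method="inclusive")
    return q[9], q[18]

# Hypothetical Cloud Run-like samples (seconds), including one slow outlier:
samples = [6.2, 6.8, 7.1, 7.5, 8.0, 6.5, 7.9, 19.8, 7.0, 6.9]
p50, p95 = latency_summary(samples)
```

Note how a single slow run barely moves the p50 but dominates the p95, which is why the takeaways below focus on p95 for on-call impact.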

VM Provisioning Times

| Cloud | Service | p50 Latency | Success Rate |
|-------|---------|-------------|--------------|
| AWS   | EC2     | ~34 s       | 99.8 %       |
| Azure | VM      | 72–86 s     | 99.7 %       |
| GCP   | GCE     | ~100 s      | 98.5 %       |

Observation: AWS leads on VMs with the fastest p50 latency and the highest reliability. GCP’s VMs are noticeably slower than its containers, making Cloud Run the preferred GCP option for latency‑sensitive workloads.

Key Takeaways

  • On‑call impact: Engineers deal with the p95, not the average.

    • AWS containers p95: ~40 s
    • Azure containers p95: ~60 s
    • GCP containers p95: ~20 s
  • Incident response: A 20‑second recovery (GCP) vs. a 60‑second recovery (Azure) can be the difference between users noticing an outage or not.

  • Regional variability: Provisioning times vary meaningfully between regions. Maintenance windows can temporarily double provisioning times in specific regions, and providers do not warn about these spikes.

  • Decision factors:

    1. Auto‑scaling under load
    2. Disaster recovery speed
    3. CI/CD pipeline velocity

    Faster provisioning translates into tangible engineering time savings (e.g., ~144 hours recovered per team annually).
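As a back-of-envelope illustration of how a figure on the order of ~144 hours per year can arise: all three inputs below are hypothetical assumptions for the sketch, not ProvisioningIQ measurements.

```python
# All inputs are illustrative assumptions, not benchmark data.
seconds_saved_per_event = 40   # e.g. ~60 s (Azure ACI p95) minus ~20 s (GCP p95)
events_per_day = 36            # hypothetical: deploys + auto-scale + CI jobs per team
days_per_year = 360

hours_saved = seconds_saved_per_event * events_per_day * days_per_year / 3600
print(hours_saved)  # 144.0 hours recovered per team annually
```

The exact total is sensitive to the event count, but the structure of the estimate (per-event latency delta × event volume) is the point.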

Additional Insights

  • Negotiation gap: Cloud contracts typically cover price, storage, network egress, and uptime SLA, but never provisioning latency. There is no industry‑wide commitment or benchmark for this metric.

  • Future benchmarking: We are extending our measurements to managed databases (RDS PostgreSQL, Cloud SQL, Azure Database for PostgreSQL) and to Terraform step‑level timings, which will pinpoint where each cloud spends time during provisioning.

ProvisioningIQ Offering

  • Free tier: Daily benchmark snapshots at provisioningiq.appswireless.com.
  • Pro tier: 90‑day history, p50/p95 trends, per‑region failure analysis, and a daily email digest.

Questions about methodology, failure categorization, or cleanup handling? Drop them in the comments.
