Why Azure Front Door Made My Next.js App Take 90 Seconds to Load (and How I Fixed It)

Published: February 21, 2026 at 11:17 AM EST
5 min read
Source: Dev.to

Problem

We shipped a Next.js app on Azure Container Apps behind Azure Front Door Premium with Private Link.
Everything was a standard setup—nothing exotic.

After deployment every page started taking ~90 seconds to load.

  • The HTML document loaded fine.
  • API routes were fast.
  • Every JavaScript chunk hung for exactly 90 seconds before the browser threw ERR_HTTP2_PROTOCOL_ERROR (the underlying HTTP/2 stream died with INTERNAL_ERROR (err 2)).

Note: The failure affected all chunks, not just a subset.

Environment

| Component | Details |
| --- | --- |
| App | Next.js 16 on Azure Container Apps (internal environment, Private Link) |
| CDN/WAF | Azure Front Door Premium |
| Routes | API – /api/*, static assets – /_next/static/*, catch-all – /* |
| Next.js config | compress: true (default) |
| Health probes | 100% healthy; small responses fine |
| SSR page (/sign-in, 78 KB) | Loaded in ~300 ms via the catch-all route – static asset delivery was the problem |

Reproducing the Issue

Same JS chunk – with and without Accept-Encoding

# Without gzip — 303 ms, full response
curl -s -w "Total: %{time_total}s\n" -o /dev/null \
  "https://my-fd-endpoint.azurefd.net/_next/static/chunks/app.js"
#=> Total: 0.303414s
# With gzip — 90 seconds, incomplete, HTTP/2 stream error
curl -s -w "Total: %{time_total}s\n" -o /dev/null \
  -H "Accept-Encoding: gzip" \
  "https://my-fd-endpoint.azurefd.net/_next/static/chunks/app.js"
#=> Total: 90.245256s

Both requests hit the same file, same route, same origin.
The only difference: the client asks for gzip compression.

Response headers (gzip request)

HTTP/2 200
content-type: application/javascript; charset=UTF-8
content-length: 112049
cache-control: public, max-age=31536000, immutable
content-encoding: gzip
vary: Accept-Encoding
x-cache: TCP_MISS
x-azure-ref: 20260220T195031Z-157f99bd8b842q87hC1CPH...
  • content-length: 112049 with content-encoding: gzip.
  • curl reports only 8,527 bytes received before the HTTP/2 stream dies with INTERNAL_ERROR (err 2).
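Those two numbers tell you how little of the body actually made it through before the stream died; a quick check of the fraction (using the figures from the response above):

```shell
# How much of the declared body arrived before the HTTP/2 stream was killed
# (8,527 bytes received vs. the declared content-length of 112,049).
awk 'BEGIN { printf "received %.1f%% of the declared content-length\n", 8527 / 112049 * 100 }'
#=> received 7.6% of the declared content-length
```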

SSR page (/sign-in) – works fine

HTTP/2 200
content-type: text/html; charset=utf-8
vary: rsc, next-router-state-tree, next-router-prefetch, ...
cache-control: private, no-cache, no-store, max-age=0, must-revalidate
x-cache: CONFIG_NOCACHE
  • No content-encoding.
  • No content-length (chunked transfer).

The origin does not gzip the SSR response, even when the client requests it. That’s why the SSR page works while the static chunks fail.

What We Tried

| Attempt | Change | Result |
| --- | --- | --- |
| 1 | Disabled Front Door compression (compression_enabled = false) | Still broken – proved the issue isn't double-compression |
| 2 | Removed the cache block entirely | Still broken – caching isn't the cause |
| 3 | Switched forwarding protocol to HttpOnly (plain HTTP over Private Link) | Still broken – TLS overhead isn't the issue |

Conclusion: Front Door cannot properly relay an already‑gzip‑compressed response from the origin; it stalls and kills the connection after exactly 90 seconds, which matches Front Door’s non‑configurable HTTP keep‑alive idle timeout.

This behavior is not specific to Private Link. Microsoft has a Health Advisory describing the same failure after tightening HTTP compliance across PoPs. Their Q&A threads (one, two) point to the same fix: disable origin compression.

Fix

  1. Disable compression at the origin.
  2. Let Front Door compress at the edge.

Example: Next.js

// next.config.js
const nextConfig = {
  compress: false, // Front Door will compress at the edge
  // …
};

module.exports = nextConfig;

Example: Front Door route configuration (Terraform)

resource "azurerm_cdn_frontdoor_route" "static" {
  # …
  cache {
    query_string_caching_behavior = "UseQueryString"
    compression_enabled           = true
    content_types_to_compress = [
      "text/html",
      "text/css",
      "text/javascript",
      "application/javascript",
      "application/x-javascript",
      "application/json",
      "image/svg+xml",
      "font/woff2",
    ]
  }
}

Other Platforms

| Platform | How to disable compression |
| --- | --- |
| Express | Remove/disable the compression middleware |
| Azure App Service | Set WEBSITES_DISABLE_CONTENT_COMPRESSION=1 |
| Nginx (behind AFD) | Turn off gzip (gzip off;) |
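For the App Service case, the setting can be applied from the CLI; a sketch where my-rg and my-app are placeholders for your resource group and app name:

```shell
# Disable App Service response compression so only Front Door compresses at the edge.
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-app \
  --settings WEBSITES_DISABLE_CONTENT_COMPRESSION=1
```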

Quick Diagnostic

Run these two curl commands against the same asset:

# 1️⃣ Without compression
curl -s -w "TTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" \
  -o /dev/null "https://your-fd-endpoint.azurefd.net/your-asset.js"

# 2️⃣ With compression
curl -s -w "TTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" \
  -o /dev/null -H "Accept-Encoding: gzip" \
  "https://your-fd-endpoint.azurefd.net/your-asset.js"

If the first finishes in milliseconds and the second hangs for ~90 seconds, you’ve hit this issue.
Save the x-azure-ref header from the broken response – you’ll need it if you open a support ticket with Microsoft.
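The comparison can be scripted; a minimal sketch (flag_stall and its 60 s / 5 s thresholds are my own rough choices, not anything Front Door defines):

```shell
# flag_stall <plain_seconds> <gzip_seconds> — compare the two curl timings above.
# A healthy pair differs by milliseconds; the broken pattern is ~90 s on the gzip request.
flag_stall() {
  awk -v p="$1" -v g="$2" 'BEGIN {
    if (g > 60 && p < 5)
      print "likely hit the origin-compression stall"
    else
      print "timings look consistent"
  }'
}

flag_stall 0.303 90.245   # the repro timings from above
#=> likely hit the origin-compression stall
```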

Expected Healthy Response (after fix)

HTTP/2 200
content-type: application/javascript; charset=UTF-8
content-length: 41182
cache-control: public, max-age=31536000, immutable
content-encoding: gzip
vary: Accept-Encoding
x-cache: TCP_HIT
x-azure-ref: …

Now the origin sends uncompressed data, Front Door compresses once, and the asset is delivered instantly.

No content‑encoding from the origin. Front Door served it from cache in milliseconds. If you see x-cache: TCP_HIT and no stall, you’re good.
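To make that check repeatable, a saved header dump can be inspected with a couple of greps; a sketch (looks_healthy is a hypothetical helper, and note that TCP_HIT on its own only confirms a cached response):

```shell
# looks_healthy <headers-file> — check a saved `curl -sI -H "Accept-Encoding: gzip" <url>`
# dump for the healthy pattern: gzip applied at the edge plus a cache hit.
looks_healthy() {
  if grep -qi '^content-encoding: gzip' "$1" && grep -qi '^x-cache: TCP_HIT' "$1"; then
    echo "healthy"
  else
    echo "not healthy yet"
  fi
}

# Using the healthy headers shown above:
cat > /tmp/afd-headers.txt <<'EOF'
HTTP/2 200
content-type: application/javascript; charset=UTF-8
content-encoding: gzip
vary: Accept-Encoding
x-cache: TCP_HIT
EOF
looks_healthy /tmp/afd-headers.txt
#=> healthy
```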
