Frontend Engineers Should Care More About Infrastructure
Source: Dev.to

The Migration and the Missing Header
One 304 Not Modified is easy to ignore. A page full of them is latency.
We migrated CDN providers that quarter. The new provider was meaningfully cheaper, and its edge coverage in Southeast Asia looked comparable. The migration itself went smoothly: traffic moved over, error rates stayed normal, and the infra team signed it off.
One thing had changed underneath us: the Cache‑Control headers our old provider had been injecting at the edge did not carry over.
Before that migration, I did not think about those headers much. Not because they were unimportant, but because they had already been decided below the application layer. The asset pipeline emitted files, the CDN served them, the browser cached them, and the page worked.
The new provider was still serving our static images correctly. ETag was present. Repeat loads returned 304 Not Modified. Nothing obvious was being re‑downloaded. At a glance, the network tab looked fine.
Nothing was broken. That was what made it hard to spot.
Why It Matters
The browser was doing the responsible thing. The server was doing the responsible thing. DevTools showed small transfer sizes, and the status code looked like evidence that caching worked. If you were scanning for wasted bytes, you would move on.
The LCP regression showed up two weeks later in RUM data. The moment it clicked was a Sentry trace from an icon‑heavy page. It was not one suspicious request. It was a page full of tiny validations, all individually harmless‑looking, sitting in the same trace. Together, they made the page feel like it was checking its pockets before every step.
Without Cache‑Control or Expires, the browser falls back to heuristic caching. Usually that means a short, unpredictable freshness window based on the gap between Last‑Modified and the current date.
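The common heuristic (suggested by RFC 9111 and used in roughly this form by major browsers) is about 10% of the resource's age since Last-Modified. A small sketch of that math, with illustrative values rather than any browser's exact algorithm:

```javascript
// Heuristic freshness: with no Cache-Control or Expires, browsers
// typically treat a response as fresh for ~10% of its Last-Modified age.
function heuristicFreshnessMs(lastModified, now = Date.now()) {
  const age = now - lastModified.getTime();
  return Math.max(0, age * 0.1);
}

// An image last modified 10 days ago is heuristically fresh for ~1 day...
const tenDaysAgo = new Date(Date.now() - 10 * 24 * 3600 * 1000);
console.log(heuristicFreshnessMs(tenDaysAgo) / 3600000); // ~24 hours

// ...but one deployed an hour ago is fresh for only ~6 minutes,
// after which every load revalidates again.
const oneHourAgo = new Date(Date.now() - 3600 * 1000);
console.log(heuristicFreshnessMs(oneHourAgo) / 60000); // ~6 minutes
```

The unpredictability is the point: the freshness window depends on when the asset was last deployed, not on anything you chose.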
So repeat visits kept sending conditional GET requests with If‑None‑Match to ask whether the image had changed. The server replied 304. The image body was not downloaded again, but the browser still paid for a round‑trip before rendering the Largest Contentful Paint (LCP) element.
On a fast connection this was invisible, but on a mid‑range Android device on 4G it compounded. LCP does not care that the requests ended in 304; it cares that rendering waited.
The Fix: Explicit Freshness
For static product and UI images served at stable URLs, we should have been sending something like:
```
Cache-Control: max-age=604800, stale-while-revalidate=86400
```

Seven days of freshness, with background revalidation after that. Repeat visits skip the validation request entirely, so the browser does not need to ask the edge before it can render.
The important constraint is stable URLs. If the same URL can point to a different image tomorrow, a long max‑age is a foot‑gun. But for versioned assets, product images with controlled invalidation, and UI images that do not change under the same path, freshness is the point.
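One way to encode that constraint is to decide the header from the URL shape. A minimal sketch, assuming image extensions mark stable-URL assets (the paths and values are illustrative, not our production logic):

```javascript
// Explicit freshness for stable-URL static images; conservative
// revalidation for anything that can change under the same URL.
const STATIC_IMAGE = /\.(png|jpe?g|gif|webp|avif|svg)$/;

function cacheControlFor(urlPath) {
  if (STATIC_IMAGE.test(urlPath)) {
    // Stable URLs only: a week fresh, then revalidate in the background.
    return 'max-age=604800, stale-while-revalidate=86400';
  }
  // HTML and mutable responses: always ask before reusing.
  return 'no-cache';
}
```

The same split works whether the header is set in the origin server, a CDN rule, or an edge function; what matters is that it is set explicitly somewhere you control.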
Checklist for Frontend Engineers
- Look for repeated validation spans in performance traces (e.g., Sentry, Chrome DevTools).
- Inspect response headers on the actual LCP resource.
- Verify that those spans do not sit before the LCP mark.
- Ensure that static assets have an explicit `Cache-Control` header with an appropriate `max-age` (and optionally `stale-while-revalidate`).
- Confirm that URLs are stable or versioned before applying long-term caching.
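The header check above can be done as a pure function, so it is easy to run against headers copied out of DevTools. A sketch of the specific failure mode in this article, a validator with no explicit freshness:

```javascript
// True when a response would fall back to heuristic caching:
// it has a validator (ETag / Last-Modified) but no explicit freshness.
function heuristicRisk(headers) {
  const cc = headers['cache-control'] ?? '';
  const hasValidator = 'etag' in headers || 'last-modified' in headers;
  const hasExplicitFreshness = /max-age=\d+/.test(cc) || 'expires' in headers;
  return hasValidator && !hasExplicitFreshness;
}

console.log(heuristicRisk({ etag: '"abc123"' })); // true
console.log(heuristicRisk({
  etag: '"abc123"',
  'cache-control': 'max-age=604800, stale-while-revalidate=86400',
})); // false
```

A `heuristicRisk` of true on an LCP resource is exactly the quiet state we missed for two weeks.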
Takeaways
The infra team did nothing wrong. The migration was correct. The old provider had simply been doing us a quiet favor, and we did not know enough to notice when it stopped. That was the real bug.
If I had paired that icon‑heavy Sentry trace with the missing Cache‑Control header sooner, I would have found this in an afternoon instead of two weeks into a KPI cycle.
After restoring explicit freshness on those media responses, repeat visits no longer had to validate the same image URLs before rendering. Sentry will tell you LCP is 4.2s. Chrome DevTools will show you the waterfall. But if the response headers read like a foreign language, you can spend weeks looking for a frontend bug that is not in the codebase.
The browser is the last mile. Infrastructure is the road under it. Frontend engineers should know what it is made of.