GraphQL: The enterprise honeymoon is over
Source: Hacker News

Introduction
I’ve used GraphQL, specifically Apollo Client and Server, for a couple of years in a real enterprise‑grade application. Not a toy app. Not a greenfield startup. A proper production setup with multiple teams, BFFs, downstream services, observability requirements, and real users.
After all that time I’ve come to a pretty boring conclusion:
GraphQL solves a real problem, but that problem is far more niche than people admit. In most enterprise setups it’s already solved elsewhere, and when you add up the trade‑offs, GraphQL often ends up being a net negative.
This isn’t a “GraphQL is bad” post. It’s a “GraphQL after the honeymoon” post.
What GraphQL is supposed to solve
The main problem GraphQL tries to solve is over‑fetching. The idea is simple and appealing:
- the client asks for exactly the fields it needs
- no more, no less
- no wasted bytes
- no backend changes for every new UI requirement
On paper, that’s great. In practice, things are messier.
Overfetching is already solved by BFFs
Most enterprise frontend architectures already have a BFF (Backend for Frontend). That BFF exists specifically to:
- shape data for the UI
- aggregate multiple downstream calls
- hide backend complexity
- return exactly what the UI needs
If you’re using REST behind a BFF, over‑fetching is already solvable. The BFF can scope down responses and return only what the UI cares about.
Yes, GraphQL can also do this. But most downstream services are still REST, so your GraphQL layer still has to over‑fetch from those APIs and then reshape the response. You didn’t eliminate over‑fetching; you just moved it down a layer. That alone significantly diminishes GraphQL’s main selling point.
There is a case where GraphQL wins: if multiple pages hit the same endpoint but need slightly different fields, GraphQL lets you scope those differences per query. But you’re usually saving a handful of fields per request in exchange for:
- more setup
- more abstraction
- more indirection
- more code to maintain
That’s an expensive trade for a few extra kilobytes.
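The BFF pattern described above can be sketched in a few lines. This is a minimal illustration under assumed names (`fetchUserFromDownstream`, `getProfileView`, the field names): the downstream service returns the full record, and the BFF hands the UI only the fields it renders.

```typescript
// Hypothetical BFF handler: fetch a full downstream record, return only UI fields.

interface DownstreamUser {
  id: string;
  name: string;
  email: string;
  internalFlags: string[]; // backend detail the UI never needs
  createdAt: string;
}

interface ProfileView {
  name: string;
  email: string;
}

// Stand-in for a real REST call to a downstream service.
async function fetchUserFromDownstream(id: string): Promise<DownstreamUser> {
  return {
    id,
    name: "Ada",
    email: "ada@example.com",
    internalFlags: ["beta"],
    createdAt: "2020-01-01",
  };
}

// The BFF scopes the response down to exactly what the page renders.
async function getProfileView(id: string): Promise<ProfileView> {
  const user = await fetchUserFromDownstream(id);
  return { name: user.name, email: user.email };
}
```

The over-fetch from the downstream service still happens; it just happens server-to-server, where the extra bytes are cheapest.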
Implementation time is much higher than REST
GraphQL takes significantly longer to implement than a REST BFF.
REST workflow:
- call downstream services
- adapt the response
- return what the UI needs
GraphQL workflow:
- define a schema
- define types
- define resolvers
- define data sources
- write adapter functions anyway
- keep schema, resolvers, and clients in sync
GraphQL optimizes consumption at the cost of production speed. In an enterprise environment, production speed matters more than theoretical elegance.
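The two workflows above can be sketched side by side. All names here are illustrative: the REST path is one handler, while the GraphQL path adds a schema and resolvers on top of the same adapter logic.

```typescript
// REST workflow: call downstream, adapt, return.
async function restOrdersHandler(userId: string) {
  const raw = await fetchOrders(userId); // call downstream service
  return raw.map((o) => ({ id: o.id, total: o.total })); // adapt + return what the UI needs
}

// GraphQL workflow: a schema definition...
const typeDefs = `
  type Order {
    id: ID!
    total: Float!
  }
  type Query {
    orders(userId: ID!): [Order!]!
  }
`;

// ...plus resolvers, which still end up calling the same adapter anyway.
const resolvers = {
  Query: {
    orders: (_parent: unknown, args: { userId: string }) =>
      restOrdersHandler(args.userId),
  },
};

// Shared stand-in for the downstream REST call.
async function fetchOrders(userId: string) {
  return [{ id: "o1", total: 9.99, warehouseCode: "W1" }];
}
```

Everything below the REST handler is GraphQL-only overhead, and the schema, resolvers, and client queries must all stay in sync as fields change.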
Observability is worse by default
GraphQL uses a quirky status‑code convention:
- 400 if the query can’t be parsed
- 200 with an errors array if something failed during execution
- 200 if it succeeded or partially succeeded
- 500 if the server is unreachable
From an observability standpoint, this is painful. With REST:
- 2XX means success
- 4XX means client error
- 5XX means server error
If you filter dashboards by 2XX, you know those requests succeeded. With GraphQL, a 200 can still mean partial or full failure. Apollo lets you customize this behavior, but that adds extra configuration, conventions, and mental overhead: taxes you pay while on call, not in blog posts.
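The difference shows up in monitoring code. As a rough sketch (response shapes assumed), REST classification needs only the status code, while GraphQL classification must also parse the body, because a 200 can carry an errors array:

```typescript
interface GraphqlBody {
  data?: unknown;
  errors?: { message: string }[];
}

// REST: the HTTP status alone tells the dashboard what happened.
function classifyRest(status: number): "success" | "client_error" | "server_error" {
  if (status >= 200 && status < 300) return "success";
  if (status >= 400 && status < 500) return "client_error";
  return "server_error";
}

// GraphQL: status 200 is ambiguous until you inspect the body.
function classifyGraphql(status: number, body: GraphqlBody): "success" | "partial" | "failure" {
  if (status !== 200) return "failure";
  if (body.errors && body.errors.length > 0) {
    // 200 + errors: partial success if some data came back, otherwise failure.
    return body.data ? "partial" : "failure";
  }
  return "success";
}
```

Any alerting pipeline in front of a GraphQL endpoint needs logic like the second function; status-code filters alone are blind to it.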
Caching sounds amazing until you live with it
Apollo’s normalized caching is impressive in theory. In practice, it’s fragile. If two queries differ by only one field, Apollo treats them as separate queries, forcing you to manually wire:
- existing fields from cache
- only the differing field fetched
Result:
- you still have a round‑trip
- added code complexity
- debugging cache issues becomes its own problem
Meanwhile, REST happily over‑fetches a few extra fields, caches the whole response, and moves on. Extra kilobytes are cheap; complexity isn’t.
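The fragility is easier to see with a simplified model of normalized caching (this is a sketch of the idea, not Apollo’s implementation): objects are keyed by typename plus id, and a read only hits the cache if every requested field is already present for that key.

```typescript
type CacheEntry = Record<string, unknown>;

const cache = new Map<string, CacheEntry>();

// Apollo-style default identity: "__typename:id".
function cacheKey(obj: { __typename: string; id: string }): string {
  return `${obj.__typename}:${obj.id}`;
}

function writeQuery(obj: { __typename: string; id: string } & CacheEntry): void {
  const key = cacheKey(obj);
  cache.set(key, { ...cache.get(key), ...obj });
}

// Returns cached data only if ALL requested fields are present. One missing
// field means a miss and another round-trip, even if everything else is cached.
function readQuery(key: string, fields: string[]): CacheEntry | null {
  const entry = cache.get(key);
  if (!entry) return null;
  return fields.every((f) => f in entry) ? entry : null;
}
```

Two queries that differ by a single field land on opposite sides of that `every` check, which is exactly where the manual cache-stitching work begins.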
The ID requirement is a leaky abstraction
Apollo expects every object to have an id or _id field by default, or you must configure a custom identifier. Many enterprise APIs:
- don’t return IDs
- lack natural unique keys
- aren’t modeled as globally identifiable entities
Thus the BFF must generate IDs locally just to satisfy the GraphQL client, adding:
- more logic
- more fields
- an extra fetched field (ironic, given the original goal to reduce over‑fetching)
REST clients impose no such constraint.
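The workaround usually looks like this: the BFF synthesizes an id from fields that are unique together, purely to satisfy the client cache. The record shape and field names below are hypothetical.

```typescript
interface DownstreamLineItem {
  orderNumber: string;
  sku: string;
  quantity: number;
}

// Composite key: stable as long as (orderNumber, sku) is unique per item.
// This field exists only so the GraphQL client can normalize the object.
function withSyntheticId(
  item: DownstreamLineItem
): DownstreamLineItem & { id: string } {
  return { ...item, id: `${item.orderNumber}:${item.sku}` };
}
```

Note the irony: the payload now carries an extra field on every object, in a stack adopted to reduce over-fetching.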
File uploads and downloads are awkward
GraphQL is not a good fit for binary data. In practice you end up:
- returning a download URL, then using REST to fetch the file, or
- embedding large payloads (e.g., PDFs) directly in GraphQL responses, which bloats responses and hurts performance
This breaks the “single API” story.
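The URL-indirection workaround, sketched with assumed names and an assumed URL scheme: the GraphQL layer returns a pointer, and the client fetches the binary over plain HTTP.

```typescript
interface InvoiceDownload {
  invoiceId: string;
  downloadUrl: string; // fetched separately via REST, not through GraphQL
}

// Resolver-style function: no bytes in the GraphQL response, just a pointer.
function resolveInvoiceDownload(invoiceId: string): InvoiceDownload {
  return {
    invoiceId,
    downloadUrl: `https://files.example.com/invoices/${invoiceId}.pdf`,
  };
}
```

So the client speaks GraphQL for metadata and REST for the file itself: two protocols where the pitch promised one.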
Onboarding is slower
Most frontend and full‑stack developers are far more experienced with REST than GraphQL. Introducing GraphQL means teaching:
- schemas
- resolvers
- query composition
- caching rules
- error semantics
That learning curve creates friction, especially when teams need to move fast. REST is boring, but boring scales extremely well.
Error handling is harder than it needs to be
GraphQL error responses are… weird. You have:
- nullable vs non‑nullable fields
- partial data plus errors arrays
- extensions with custom status codes
- the need to trace which resolver failed and why
All of this adds indirection. Compare that to a simple REST setup where:
- input validation fails → 400
- backend fails → 500
- validation library (e.g., Zod) error → done
Simple errors are easier to reason about than elegant ones.
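That REST error model fits in a few lines. A minimal sketch, with illustrative error-class names, of the mapping described above: validation failures become 400, everything else becomes 500.

```typescript
// Thrown by the input-validation layer (e.g., wrapping a Zod parse failure).
class ValidationError extends Error {}

function toHttpStatus(err: unknown): number {
  if (err instanceof ValidationError) return 400; // client sent bad input
  return 500; // anything else: server fault
}
```

One function, two outcomes, and dashboards that mean what they say. Compare that to tracing which resolver nulled which field.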
The net result
GraphQL absolutely has valid use cases. But in most enterprise environments:
- you already have BFFs
- downstream services are REST
- over‑fetching is not your biggest problem
- observability, reliability, and speed matter more
When you add everything up, GraphQL often ends up solving a narrow problem while introducing a broader set of new ones. That’s why, after using it in production for years, I’d say:
GraphQL isn’t bad. It’s just niche. And you probably don’t need it, especially if your architecture already solved the problem it was designed for.