Shipping JSON over the wire is professional negligence

Published: December 19, 2025 at 01:59 AM EST
3 min read
Source: Dev.to

I can already see that this article is gonna piss off some folks. Fine by me.

The problem with JSON

Human‑readable data in transit is a decades‑old mistake we keep defending out of habit. Not because it’s correct. Not because it’s efficient. Because it’s familiar. JSON is the most familiar of them all, and that’s exactly the problem: we treat familiarity like a guarantee.

JSON has no type safety. It has no schema, no enforced contract, no canonical representation. It is just a string of bytes that both sides politely agree to interpret the same way… until one side doesn’t.

You can document that age is an integer. Then a client sends "age": "27" because JavaScript, and no one notices because you silently coerce it. Until you don’t. Or the server sends null where the client expected a number. Or the server “helpfully” changes snake_case to camelCase during a refactor. Or someone adds a field with the same name but a different meaning because it was “unused anyway”. Or the backend returns "status": "ok" on a failure because the error path is duct‑taped and the API gateway is lying for it. All of this is legal JSON. That’s the point. JSON will happily serialize your mistakes. JSON will not care. “It’s meant to be simple”.
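To make that concrete, here is a minimal TypeScript sketch (the User shape and payloads are made up for illustration): every payload below is legal JSON, and the cast is a promise, not a check.

```typescript
// Hypothetical response type the client *believes* it receives.
interface User {
  age: number;
}

// All of these are legal JSON; the parser accepts every one.
const payloads = [
  '{"age": 27}',   // what the docs promise
  '{"age": "27"}', // a client "because JavaScript"
  '{"age": null}', // the server on a bad day
];

for (const raw of payloads) {
  // JSON.parse returns `any`, so this cast asserts a shape it never verifies.
  const user = JSON.parse(raw) as User;
  // At runtime: 28, then "271", then 1. The mistake ships silently.
  console.log(user.age + 1);
}
```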

Using no schema and relying on “flexibility” means that your contracts are always implicit, social, and enforced by good vibes. You can’t expect to build a serious system on crossed fingers, holding hands, and singing praises to your deity of choice.

“We’ll just validate it” is not an argument for JSON. It’s an admission that you need a second system, and you’ll end up implementing that validation in every client, in every service, in every language. Repeatedly, forever.

I’m not attacking developers of validation libraries; we need strong validation logic regardless of the protocol. The problem is using JSON itself as the contract.
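For scale, here is a sketch of that second system in TypeScript, reusing the hypothetical User shape from above. This is the logic that gets rewritten, slightly differently, in every consumer:

```typescript
// The hand-rolled validator every JSON consumer eventually writes.
interface User {
  age: number;
}

function parseUser(raw: string): User {
  const data: unknown = JSON.parse(raw);
  if (typeof data !== "object" || data === null) {
    throw new Error("expected an object");
  }
  const age = (data as Record<string, unknown>).age;
  if (typeof age !== "number" || !Number.isInteger(age)) {
    throw new Error(`expected an integer age, got ${JSON.stringify(age)}`);
  }
  return { age };
}

// parseUser('{"age": "27"}') now fails loudly instead of coercing silently.
```

Libraries can tidy this up, but they don’t change the underlying fact: the contract lives outside the data format.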

Why schema‑driven formats help

With schema‑driven formats you can still mess up, but you get guardrails that exist outside of tribal memory:

  • A schema defines the contract.
  • A canonical encoding removes ambiguity.
  • Compatibility rules are mechanical, not interpretive.

You can’t “accidentally” ship a string where a number belongs without someone, somewhere, screaming early.

That holds, of course, unless you use a language like JavaScript, where static types don’t exist.
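For concreteness, a minimal Protocol Buffers sketch of what such a contract looks like (the message and field names here are invented):

```protobuf
// A minimal, hypothetical contract. The schema IS the documentation,
// and generated code enforces it on both sides of the wire.
syntax = "proto3";

package example;

message User {
  // int32 is part of the wire contract; the string "27" cannot be encoded here.
  int32 age = 1;
  // Fields are identified on the wire by number, not name, so a rename
  // is a visible schema change rather than a silent refactor.
  string display_name = 2;
}
```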

Bytes matter too. Text is expensive in the boring ways: field names repeated, quotes repeated, escapes repeated, parsers doing work they shouldn’t have to do, allocations that multiply under load. For what? So that someone can reach for the comfort blanket of reading payloads by eye. If the best defense of JSON is “I like reading it with my eyes”, you’re optimizing for the wrong thing. Debugging should be done with tools that understand the contract, not with eyeballs scanning a blob and praying you didn’t miss a comma. We have better tooling for contract‑driven protocols than “copy as cURL, pipe to jq, and squint”.
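To put a toy number on that overhead (the record shape is made up): in a JSON array, the field name and punctuation are re-sent for every record, so the framing cost scales with row count, not with data.

```typescript
// Toy illustration: per-record framing bytes in a JSON array.
const users = Array.from({ length: 1_000 }, (_, i) => ({ age: i % 100 }));

const json = JSON.stringify(users);          // [{"age":0},{"age":1},...]
const framingPerRecord = '{"age":}'.length;  // 8 bytes of name/quotes/braces per row

console.log(json.length);                     // total bytes on the wire
console.log(users.length * framingPerRecord); // bytes carrying no data at all
```

In a tag-length binary encoding like protobuf, the same field travels as a one-byte tag plus a varint, so the per-record overhead is a couple of bytes instead of the name and all its punctuation.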

Recommendation

Make a schema‑driven format such as Protocol Buffers or Avro the canonical representation between machines. Use JSON only where it is genuinely appropriate: at human‑facing edges, not as the contract between services.

If you’re still offended by that: good. You should be. Not at me, though! Be offended at the fact that we collectively decided “a bag of untyped strings with vibes” is an acceptable foundation for serious systems. Keep JSON as your duct tape if you want, but don’t call it architecture. And definitely don’t call it “type safe” with a straight face, pretending that your cute GraphQL schemas are sufficient.

I’m not going to tell you to drop everything and get familiar with Protocol Buffers and gRPC, but I’m also not going to tell you not to.
