PostgreSQL vs MySQL: A 2026 Production Decision Framework

Published: February 20, 2026 at 11:14 PM EST
6 min read
Source: Dev.to

Quick Decision Matrix

| Situation | Recommended Default | Why |
| --- | --- | --- |
| Classic OLTP (orders, users, subscriptions) – moderate concurrency, light JSON usage | Either PostgreSQL or MySQL | Both work; choose based on team familiarity, managed‑service maturity, ecosystem. |
| JSON is part of your query API (filtering, containment, dynamic attributes) | PostgreSQL | jsonb + GIN indexes give flexible, ad‑hoc querying with fewer schema contortions. |
| Need row‑level security (RLS) for multi‑tenant isolation | PostgreSQL | First‑class RLS primitives. |
| Very high read scaling with simple key lookups / small range scans; want the most standard replication playbook | MySQL | Proven, straightforward async/semisynchronous replication. |
| Write load is hotspot‑heavy | Either (test the exact pattern) | Both can handle it, but you must benchmark. |
| “SQL + weird queries” (full‑text search, custom operators, partial indexes, advanced constraints) | PostgreSQL | Rich extension ecosystem. |
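The RLS row above can be sketched in a few lines. This is a minimal illustration, not from the article; the `app.tenant_id` session setting and the policy name are hypothetical choices an application would make:

```sql
-- Enable row-level security on a shared multi-tenant table
ALTER TABLE events ENABLE ROW LEVEL SECURITY;

-- Hypothetical policy: each session sees only its own tenant's rows,
-- keyed off a custom setting the application sets at connect time
CREATE POLICY tenant_isolation ON events
  USING (tenant_id = current_setting('app.tenant_id')::bigint);

-- The application scopes the session before querying:
-- SET app.tenant_id = '42';
```

Every subsequent query on `events` is then filtered by the policy, with no per-query WHERE clause to forget.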

JSON Storage & Indexing

Both databases store JSON, but the production difference lies in indexing flexibility and how often you’ll need to redesign when a new filter is added.

PostgreSQL (flexible, query‑heavy JSON)

CREATE TABLE events (
  id         bigserial PRIMARY KEY,
  tenant_id  bigint NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now(),
  type       text NOT NULL,
  attrs      jsonb NOT NULL
);

-- GIN index for generic JSON queries
CREATE INDEX events_attrs_gin
  ON events USING gin (attrs);

-- Composite index for tenant‑scoped ordering
CREATE INDEX events_tenant_created_at
  ON events (tenant_id, created_at DESC);

What to watch

  • GIN index size & update cost – great for read/query flexibility, but not free for write‑heavy workloads.
  • Operator choice matters: the default jsonb GIN opclass serves containment (@>) and key‑existence (?, ?|, ?&) operators but not ->/->> extraction, so query patterns must stay consistent to hit the index.
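As a concrete illustration of that operator sensitivity, with the generic GIN index above, containment is indexable while text extraction is not:

```sql
-- Uses the GIN index: containment (@>) is a supported operator
SELECT id FROM events WHERE attrs @> '{"status":"failed"}';

-- Bypasses it: ->> extracts text compared with =, which the default
-- GIN opclass can't serve (would need an expression B-tree index)
SELECT id FROM events WHERE attrs->>'status' = 'failed';
```

Both queries return the same rows; only the first one scales.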

MySQL (JSON as payload, limited indexed paths)

CREATE TABLE events (
  id         bigint unsigned NOT NULL AUTO_INCREMENT,
  tenant_id  bigint NOT NULL,
  created_at timestamp(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
  type       varchar(64) NOT NULL,
  attrs      json NOT NULL,
  status     varchar(32)
    GENERATED ALWAYS AS (JSON_UNQUOTE(JSON_EXTRACT(attrs, '$.status'))) STORED,
  PRIMARY KEY (id),
  KEY events_tenant_created_at (tenant_id, created_at),
  KEY events_status (status)
) ENGINE=InnoDB;

Trade‑off
When a new filter such as attrs.customer.segment appears, you must add a generated column + index instead of relying on a generic JSON index.
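That migration looks roughly like this (column and index names are illustrative, not from the article):

```sql
-- Materialize the new JSON path as a generated column, then index it.
-- Note: a STORED generated column forces a table rebuild; a VIRTUAL
-- column plus secondary index can usually be added in-place.
ALTER TABLE events
  ADD COLUMN customer_segment varchar(32)
    GENERATED ALWAYS AS (JSON_UNQUOTE(JSON_EXTRACT(attrs, '$.customer.segment'))) STORED,
  ADD KEY events_customer_segment (customer_segment);
```

Each new queryable path is a schema change, which is fine for a handful of stable paths and painful for weekly evolution.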

When to pick which approach

| Use‑case | PostgreSQL | MySQL |
| --- | --- | --- |
| JSON is the query surface area (customers filter/sort on it, ad‑hoc queries, weekly schema evolution) | ✓ | |
| JSON is just payload storage and you can enumerate the 3‑10 paths that matter for indexing | | ✓ |

Concurrency & MVCC Gotchas

| Feature | PostgreSQL | MySQL (InnoDB) |
| --- | --- | --- |
| MVCC implementation | Readers never block writers; dead tuples accumulate under high churn. | MVCC too, but next‑key and gap locks can cause blocking. |
| Common pitfalls | Long‑running transactions prevent vacuum → table bloat; high‑update tables need tuned per‑table autovacuum. | Updates without a good index can lock many rows; every secondary index adds write overhead. |
| Online DDL | CREATE INDEX CONCURRENTLY avoids write blocks but is slower and can fail. | Modern MySQL supports online DDL, yet large index builds still generate load & replica lag. |

Concurrency litmus test (skip‑locked job queue)

PostgreSQL (first‑class SKIP LOCKED)

WITH next_job AS (
  SELECT id
  FROM jobs
  WHERE run_at <= now()
  ORDER BY run_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
SELECT * FROM next_job;

PostgreSQL example (jsonb containment filter)

EXPLAIN (ANALYZE, BUFFERS)
SELECT id
FROM events
WHERE tenant_id = 42
  AND attrs @> '{"status":"failed"}'
ORDER BY created_at DESC
LIMIT 50;

Look for: index scans on (tenant_id, created_at), low shared‑read buffers after warm‑up, no unexpected sequential scans.
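MySQL 8.0 ships the same queue primitive, so the litmus test runs on both engines. A sketch, assuming the same jobs table:

```sql
-- Claim one due job, skipping rows other workers hold locked (MySQL 8.0+)
START TRANSACTION;

SELECT id
FROM jobs
WHERE run_at <= NOW()
ORDER BY run_at
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- ...mark the claimed job as running, do the work...
COMMIT;
```

The interesting comparison is not syntax but behavior under contention: run many concurrent workers and watch lock waits and throughput on each engine.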

MySQL example (generated‑column filter)

EXPLAIN ANALYZE
SELECT id
FROM events
WHERE tenant_id = 42 AND status = 'failed'
ORDER BY created_at DESC
LIMIT 50;

Look for: the chosen key matches your composite index, rows examined ≈ rows returned.

Tip: If every query is tenant‑scoped, make your indexes tenant‑prefixed. This is the most common “we built indexes but latency still sucks” mistake.

Backup, Restore & Point‑In‑Time Recovery (PITR)

| Aspect | PostgreSQL | MySQL |
| --- | --- | --- |
| PITR mechanism | Base backups + WAL archiving | Full backups + binary logs |
| Managed offering recommendation | Choose a provider that lets you automate restores and measure restore time on realistic dataset sizes. | Same – test restores regularly. |
| Index creation without blocking writes | CREATE INDEX CONCURRENTLY | Online DDL (e.g., ALTER TABLE … ADD INDEX … ALGORITHM=INPLACE, LOCK=NONE) |
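The non‑blocking index builds in the last row look like this on each engine (index names are illustrative):

```sql
-- PostgreSQL: builds without holding a write lock; can fail and leave
-- an INVALID index behind that must be dropped and retried
CREATE INDEX CONCURRENTLY events_type_idx ON events (type);

-- MySQL/InnoDB: online, in-place build; the statement errors out up
-- front if the operation can't be done with LOCK=NONE
ALTER TABLE events ADD INDEX events_type_idx (type),
  ALGORITHM=INPLACE, LOCK=NONE;
```

Either way, schedule large builds off‑peak: even "online" builds consume I/O and feed replica lag.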

Operational Health Checklist

  • Autovacuum: monitor dead‑tuple count, bloat indicators, and per‑table settings.
  • Transaction age: watch for long‑running transactions that block vacuum.
  • WAL volume: spikes after deployments or backfills can stress storage.
  • Lock waits: watch for blocked DDL or hot‑table contention.
  • Replica lag: especially after large index builds or bulk loads.
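For the PostgreSQL items in this checklist, two catalog queries cover most of the ground (the 5‑minute threshold is an illustrative choice, not a recommendation from the article):

```sql
-- Dead tuples per table: top candidates for per-table autovacuum tuning
SELECT relname, n_dead_tup, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- Long-running transactions that can hold back vacuum
SELECT pid, now() - xact_start AS xact_age, state, left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '5 minutes'
ORDER BY xact_age DESC;
```

Wire both into your monitoring rather than running them by hand during incidents.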

Bottom Line

  1. Score your workload against the two engines.
  2. Prototype the critical queries (including JSON paths) on both.
  3. Measure with EXPLAIN ANALYZE and real‑world load.
  4. Pick the managed service that lets you test restores, tune autovacuum (PostgreSQL) or InnoDB flush settings (MySQL), and gives you the operational ergonomics you need.

With this framework you’ll choose the database that fails last, not the one that looks best on paper.

Intentional Guidance for Choosing a Database Engine

Key Metrics to Monitor

  • Replication lag – seconds behind the source.
  • Lock waits / deadlocks – surface missing indexes early.
  • Buffer‑pool hit rate – indicates whether memory sizing and the working set fit.
  • Redo / binlog volume – watch for spikes (e.g., during migrations).
  • Rows examined vs. rows returned – helps spot inefficient queries.
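On the MySQL side, two of these metrics fall straight out of built‑in status commands:

```sql
-- Replication lag: check Seconds_Behind_Source in the output (MySQL 8.0.22+;
-- older versions use SHOW SLAVE STATUS)
SHOW REPLICA STATUS\G

-- Buffer-pool hit rate: compute
--   1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)
-- from these two counters
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
```

On PostgreSQL, the pg_stat_replication and pg_stat_database views serve the equivalent roles.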

Tip: If you don’t have time to instrument both databases deeply, that itself is a signal. Pick the engine your team already knows how to run under incident pressure.

When to Choose PostgreSQL

  • Your service will accumulate query complexity over time.
  • JSON is a core part of the product’s query surface.
  • You need first‑class multi‑tenant isolation (e.g., Row‑Level Security).
  • You’re willing to operate vacuum and bloat consciously.

When to Choose MySQL

  • Your workload is predictable and query patterns are stable.
  • You prefer the most common operational playbooks for replication and read scaling.

If You’re Still Undecided

  1. Implement the JSON and concurrency patterns you expect to be painful.
  2. Run those patterns under load.
  3. Choose the engine whose failure mode you can live with.

Originally published on ReleaseRun.
