The System Design Interview Is Broken. Here's How to Pass It Anyway.

Published: February 19, 2026 at 02:44 AM EST
7 min read
Source: Dev.to


1. Generated Columns (Stop Calculating in Application Code)

You have a users table with first_name and last_name. Every query that needs the full name does first_name || ' ' || last_name. Every. Time.

ALTER TABLE users
  ADD COLUMN full_name text
  GENERATED ALWAYS AS (first_name || ' ' || last_name) STORED;

-- Now you can query, index, and search on full_name directly
CREATE INDEX idx_users_full_name
  ON users USING gin(to_tsvector('english', full_name));

Real use case: We had a products table where price_with_tax was calculated in every API response. Moving it to a generated column eliminated thousands of lines of application‑level calculation and made the query 3× faster (no more computing in the SELECT clause).

Other good candidates:

  • age computed from birth_date (careful: generation expressions must be IMMUTABLE, and age depends on the current date, so this one needs a view or a query-time expression instead)
  • slug computed from title
  • total computed from price * quantity
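A sketch of the other two candidates (the posts table and its title column are assumptions for illustration; order_items matches the schema used later in this post). Generation expressions must be IMMUTABLE, which is why the text functions below qualify:

```sql
-- Assumed posts(title) table: URL slug derived from the title.
-- lower() and regexp_replace() are IMMUTABLE, so they're allowed here.
ALTER TABLE posts
  ADD COLUMN slug text
  GENERATED ALWAYS AS (
    lower(regexp_replace(title, '[^a-zA-Z0-9]+', '-', 'g'))
  ) STORED;

-- Line total derived from price and quantity.
ALTER TABLE order_items
  ADD COLUMN total numeric
  GENERATED ALWAYS AS (price * quantity) STORED;
```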

2. LISTEN/NOTIFY (Real‑Time Without Polling)

Your background worker polls the database every 5 seconds looking for new jobs. Wasteful.

-- In your job‑insertion trigger:
CREATE OR REPLACE FUNCTION notify_new_job()
RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify(
    'new_job',
    json_build_object(
      'id', NEW.id,
      'type', NEW.job_type,
      'priority', NEW.priority
    )::text
  );
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER job_inserted
AFTER INSERT ON jobs
FOR EACH ROW EXECUTE FUNCTION notify_new_job();

// In your worker (using the pg library):
const { Client } = require('pg');
const client = new Client();
await client.connect();

await client.query('LISTEN new_job');

client.on('notification', (msg) => {
  const job = JSON.parse(msg.payload);
  console.log('New job:', job.id, job.type);
  processJob(job);
});

Zero polling. The database pushes events to your workers. We replaced a polling loop doing 12 queries / minute with LISTEN/NOTIFY — database load dropped 40 %.
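One caveat to design around: NOTIFY payloads are limited to just under 8000 bytes in a default build, and notifications sent while no listener is connected are lost. A common pattern, sketched here as a variation of the trigger above, is to send only the row id:

```sql
-- Sketch: send just the id; the worker fetches the full row itself.
PERFORM pg_notify('new_job', NEW.id::text);
```

On the worker side, msg.payload is then the id for a SELECT * FROM jobs WHERE id = $1 lookup. Have workers also run a catch-up query for unprocessed jobs on startup, since anything notified while they were down is gone.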


3. CTEs (WITH Queries) – For Readable Complex Queries

Your 50‑line nested subquery is correct but nobody can read it. Including you, 3 months from now.

-- Before: nested subquery nightmare
SELECT u.name,
       t.total_orders,
       t.total_revenue
FROM   users u
JOIN   (
  SELECT user_id,
         COUNT(*) AS total_orders,
         SUM((
           SELECT SUM(oi.price * oi.quantity)
           FROM   order_items oi
           WHERE  oi.order_id = o.id
         )) AS total_revenue
  FROM   orders o
  WHERE  o.created_at > NOW() - INTERVAL '30 days'
  GROUP BY user_id
) t ON t.user_id = u.id
WHERE  t.total_revenue > 1000;
-- After: CTE version (same result, readable)
WITH recent_orders AS (
  SELECT id, user_id, created_at
  FROM   orders
  WHERE  created_at > NOW() - INTERVAL '30 days'
),
order_totals AS (
  SELECT o.user_id,
         COUNT(DISTINCT o.id)          AS total_orders,
         SUM(oi.price * oi.quantity)   AS total_revenue
  FROM   recent_orders o
  JOIN   order_items oi ON oi.order_id = o.id
  GROUP BY o.user_id
)
SELECT u.name,
       ot.total_orders,
       ot.total_revenue
FROM   users u
JOIN   order_totals ot ON ot.user_id = u.id
WHERE  ot.total_revenue > 1000;

Same query plan (PostgreSQL 12+ inlines CTEs like these by default, so there's no performance penalty). Dramatically more readable. Each CTE block does one thing and has a clear name.


4. JSONB Operations (When You Need Flexibility)

You need to store variable metadata per user. Some users have company, some have department, some have custom fields you can’t predict. Don’t create 50 nullable columns.

ALTER TABLE users
  ADD COLUMN metadata jsonb DEFAULT '{}';

Set a value

UPDATE users
SET    metadata = metadata || '{"company":"Acme","role":"CTO"}'
WHERE  id = 123;

Query specific JSON fields (indexable!)

SELECT *
FROM   users
WHERE  metadata->>'company' = 'Acme';

Partial index on a JSON field

CREATE INDEX idx_users_company
  ON users ((metadata->>'company'))
  WHERE metadata->>'company' IS NOT NULL;

Aggregate JSON data

SELECT metadata->>'company' AS company,
       COUNT(*)               AS cnt
FROM   users
WHERE  metadata->>'company' IS NOT NULL
GROUP BY metadata->>'company';

The rule:

  • Structured, queryable data → regular columns.
  • Variable, semi‑structured data → JSONB.

Don’t use JSONB for everything (you lose type safety and constraints), but don’t avoid it either—it’s PostgreSQL’s super‑power.
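If you query many different JSON keys, one expression index per field stops scaling. A single GIN index on the whole column supports containment (@>) queries on any key; a sketch against the metadata column above:

```sql
-- One GIN index covers containment queries on every key in metadata
CREATE INDEX idx_users_metadata ON users USING gin (metadata);

-- Uses the index; same result as metadata->>'company' = 'Acme'
SELECT * FROM users WHERE metadata @> '{"company": "Acme"}';
```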


5. Window Functions (Analytics Without GROUP BY Pain)

You want to show each order alongside the running total and the user’s average order value. Without window functions this requires subqueries or application code.

SELECT o.id,
       o.user_id,
       o.total,
       o.created_at,
       -- Running total for this user
       SUM(o.total) OVER (
         PARTITION BY o.user_id
         ORDER BY o.created_at
       ) AS running_total,
       -- User's average order value
       AVG(o.total) OVER (PARTITION BY o.user_id) AS avg_order_value,
       -- This order's rank among user's orders (by value)
       RANK() OVER (
         PARTITION BY o.user_id
         ORDER BY o.total DESC
       ) AS value_rank,
       -- Percent of user's total spending
       o.total::numeric /
       SUM(o.total) OVER (PARTITION BY o.user_id) * 100 AS pct_of_total
FROM   orders o
ORDER BY o.user_id, o.created_at;

One query. No subqueries. No application‑level post‑processing. Window functions are the single most under‑used PostgreSQL feature.
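Another window-function staple is LAG(), which reads the previous row in the partition. A sketch against the same orders table, computing the gap between each user's consecutive orders:

```sql
SELECT o.id,
       o.user_id,
       o.created_at,
       -- Time since this user's previous order (NULL for their first)
       o.created_at - LAG(o.created_at) OVER (
         PARTITION BY o.user_id
         ORDER BY o.created_at
       ) AS time_since_last_order
FROM   orders o
ORDER BY o.user_id, o.created_at;
```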


6. Exclusion Constraints (Prevent Overlapping Bookings)

You’re building a booking system. No two bookings should overlap for the same room. Your application code checks for conflicts, but there’s a race condition—two requests can check simultaneously, both find no conflict, and both insert.

CREATE EXTENSION IF NOT EXISTS btree_gist;  -- lets a gist index mix = on plain ints with && on ranges

CREATE TABLE bookings (
  id         serial PRIMARY KEY,
  room_id    int NOT NULL,
  start_time timestamptz NOT NULL,
  end_time   timestamptz NOT NULL
);

ALTER TABLE bookings ADD CONSTRAINT no_overlap
  EXCLUDE USING gist (
    room_id WITH =,
    tstzrange(start_time, end_time) WITH &&
  );

The EXCLUDE constraint guarantees that for a given room_id (=) no two rows have overlapping time ranges (&&). The database enforces the rule atomically, eliminating the race condition entirely.

-- This INSERT succeeds:
INSERT INTO bookings (room_id, start_time, end_time) 
VALUES (1, '2026-03-01 09:00', '2026-03-01 10:00');

-- This INSERT fails (overlaps):
INSERT INTO bookings (room_id, start_time, end_time) 
VALUES (1, '2026-03-01 09:30', '2026-03-01 10:30');
-- ERROR: conflicting key value violates exclusion constraint "no_overlap"

No race condition possible. The database enforces it at the constraint level. No amount of concurrent requests can create an overlap.


7. Row‑Level Security (Multi‑Tenant Without WHERE Clauses)

Every query in your multi‑tenant app has WHERE tenant_id = ?. If one developer forgets, you leak data between tenants.

ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON documents
  USING (tenant_id = current_setting('app.current_tenant')::int);

// In your application middleware (SET cannot take bind parameters,
// so use set_config instead):
await db.query("SELECT set_config('app.current_tenant', $1, false)", [String(tenantId)]);

-- Now every query is automatically filtered:
SELECT * FROM documents;
-- PostgreSQL applies the tenant_id filter from the policy automatically
-- Impossible to accidentally see another tenant's data

This is how enterprise SaaS apps handle multi‑tenancy. The database guarantees isolation, not your application code.
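Two caveats: by default, RLS policies do not apply to the table owner (or to superusers), and the USING clause only filters what you can read. A minimal hardening sketch:

```sql
-- Apply policies to the table owner too (superusers and roles
-- with BYPASSRLS still bypass RLS entirely)
ALTER TABLE documents FORCE ROW LEVEL SECURITY;
```

For writes, you can also add a WITH CHECK clause to the policy so inserts and updates are validated against the current tenant; when WITH CHECK is omitted, PostgreSQL falls back to the USING expression.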


8. Partial Indexes (Index Only What You Query)

Your orders table has 10 M rows. Only 50 K are status = 'pending'. An index on status would cover all 10 M rows for queries that only care about the 50 K pending rows.

-- Instead of this (indexes all 10 M rows):
CREATE INDEX idx_orders_status ON orders(status);

-- Do this (indexes only pending orders):
CREATE INDEX idx_orders_pending ON orders(created_at)
WHERE status = 'pending';
-- Your query:
SELECT * FROM orders WHERE status = 'pending' ORDER BY created_at;
-- Uses the tiny partial index. Lightning fast.

Partial indexes are smaller, faster to update, and faster to scan. Use them when you frequently query a specific subset of your data.
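Partial indexes also combine with UNIQUE to enforce conditional uniqueness, something a plain constraint can't express. A sketch against an assumed subscriptions table:

```sql
-- Assumed subscriptions(user_id, status): at most one *active*
-- subscription per user, while allowing any number of canceled ones.
CREATE UNIQUE INDEX one_active_subscription
  ON subscriptions (user_id)
  WHERE status = 'active';
```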


The Common Thread

All of these features move logic from your application into the database. That’s intentional.

Your database is:

  • Transactional — concurrent operations are handled correctly
  • Optimized — decades of work on query planning
  • Persistent — survives application crashes and deployments
  • The single source of truth — all data changes go through it

Every line of application code that duplicates database capabilities is a line that can have bugs, race conditions, and inconsistencies.

Let PostgreSQL do what it’s good at. Your application code should handle business logic, user experience, and integration with external services. Data integrity, calculations on data, and access control? That’s what the database is for.

What’s your favorite PostgreSQL feature that most people don’t use? I’m always looking for new tricks. Drop it in the comments.
