Slapping Secondary Indexes on Random Fields is Silently Killing Your Database.

Published: March 3, 2026 at 01:17 AM EST
2 min read
Source: Dev.to

Overview

Why do we add indexes to our SQL fields? To make searches faster, right?

But did you know indexing has a massive downside? Every index slows down writes, forcing developers to be strategic about which fields deserve a secondary index.


Example query

SELECT * FROM book WHERE author = 'C.S. Lewis';

Assume a table book with fields (id, title, author, pub_date, isbn) and 1 million rows, of which only 20 are authored by C.S. Lewis. Without an index the database scans the entire heap (the physical storage of rows), resulting in a full table scan with O(n) complexity—brutally slow.
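You can watch this happen with a query planner. A minimal sketch using SQLite (purely for illustration; the table layout follows the article, the author values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, author TEXT, pub_date TEXT, isbn TEXT)"
)
conn.executemany(
    "INSERT INTO book (title, author) VALUES (?, ?)",
    [(f"Book {i}", f"Author {i % 1000}") for i in range(10_000)],
)

# With no index on author, the planner has no choice but to examine every row.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM book WHERE author = 'Author 42'"
).fetchall()
print(plan[0][3])  # the plan detail reports a full scan of book
```

EXPLAIN QUERY PLAN is the quickest way to confirm whether a query is scanning the heap or seeking through an index.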

The Fix

When you add an index on the author column:

CREATE INDEX idx_author ON book(author);

The database builds a separate B‑Tree structure that stores author names in alphabetical order together with pointers to the corresponding rows in the heap. Querying now traverses the B‑Tree in O(log n) time, retrieving the 20 matching pointers and then fetching the rows directly. This is why secondary indexes make reads faster.

Why Writes Get Slower

Inserting or updating a row now involves two steps:

  1. Write the row data to the heap.
  2. Update every secondary B‑Tree that includes the affected columns.

If a table has five secondary indexes, each INSERT or UPDATE must modify the heap plus five B‑Trees on disk. This hidden cost can significantly degrade write performance.
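A rough way to see this cost (SQLite again, purely illustrative; absolute numbers depend on the engine, hardware, and cache, so treat the ratio, not the values, as the point):

```python
import sqlite3
import time

def timed_inserts(index_count: int, rows: int = 50_000) -> float:
    """Insert `rows` rows with `index_count` secondary indexes already in place."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT, b TEXT, c TEXT, d TEXT, e TEXT)"
    )
    for col in "abcde"[:index_count]:
        conn.execute(f"CREATE INDEX idx_{col} ON t({col})")
    data = [(f"a{i}", f"b{i}", f"c{i}", f"d{i}", f"e{i}") for i in range(rows)]
    start = time.perf_counter()
    # Each insert must update the heap plus every secondary B-Tree.
    conn.executemany("INSERT INTO t (a, b, c, d, e) VALUES (?, ?, ?, ?, ?)", data)
    conn.commit()
    return time.perf_counter() - start

t0 = timed_inserts(0)
t5 = timed_inserts(5)
print(f"0 indexes: {t0:.3f}s, 5 indexes: {t5:.3f}s")  # 5 indexes is noticeably slower
```

The five-index run does six B-Tree-style writes per row (primary storage plus five indexes) instead of one, which is the hidden cost the section describes.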

Tips for Secondary Indexes

  • Avoid indexing frequently updated columns (e.g., page visits, view counts) to reduce write overhead.
  • Index only columns used in WHERE, JOIN, or ORDER BY clauses where the performance benefit outweighs the write cost.
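One practical way to apply the second tip is to check whether the planner would actually use a candidate index before paying for it on every write. A sketch using SQLite's EXPLAIN QUERY PLAN (the helper `uses_index` is mine, not a standard API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, author TEXT, pub_date TEXT, isbn TEXT)"
)
conn.executemany(
    "INSERT INTO book (title, author, pub_date) VALUES (?, ?, ?)",
    [(f"Book {i}", f"Author {i % 1000}", str(1900 + i % 100)) for i in range(10_000)],
)
conn.execute("CREATE INDEX idx_author ON book(author)")

def uses_index(query: str, index_name: str) -> bool:
    """Check whether the planner would touch the given index for this query."""
    plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return any(index_name in row[3] for row in plan)

# The author filter can use the index; a filter on an unindexed column cannot.
author_hit = uses_index("SELECT * FROM book WHERE author = 'C.S. Lewis'", "idx_author")
pub_date_hit = uses_index("SELECT * FROM book WHERE pub_date = '1950'", "idx_author")
print(author_hit, pub_date_hit)
```

If an index never shows up in the plans of your real queries, it is pure write overhead and a candidate for removal.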
