Slapping Secondary Indexes on Random Fields is Silently Killing Your Database.
Source: Dev.to
Overview
Why do we add indexes to our SQL fields? To make searches faster, right?
But did you know they come with a significant downside? Writes become slower, which forces developers to be strategic about which fields deserve a secondary index.

Example query
SELECT * FROM book WHERE author = 'C.S. Lewis';
Assume a table book with fields (id, title, author, pub_date, isbn) and 1 million rows, of which only 20 are authored by C.S. Lewis. Without an index the database scans the entire heap (the physical storage of rows), resulting in a full table scan with O(n) complexity—brutally slow.
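You can watch the planner fall back to a full scan yourself. Below is a minimal sketch using Python's built-in sqlite3 module; SQLite stands in for the article's unnamed database (Postgres and MySQL behave analogously), and the row counts are scaled down for a quick run:

```python
import sqlite3

# In-memory SQLite database standing in for the article's 'book' table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE book (
        id INTEGER PRIMARY KEY,
        title TEXT,
        author TEXT,
        pub_date TEXT,
        isbn TEXT
    )
""")
conn.executemany(
    "INSERT INTO book (title, author) VALUES (?, ?)",
    [(f"Book {i}", f"Author {i % 50_000}") for i in range(100_000)],
)

# With no index on author, the planner has no choice but a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM book WHERE author = ?", ("Author 7",)
).fetchall()
print(plan)  # the plan's detail text reads 'SCAN book': a full table scan
```

EXPLAIN QUERY PLAN is SQLite's equivalent of EXPLAIN in Postgres/MySQL; it is the quickest way to confirm what the query actually does instead of guessing.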
The Fix
When you add an index on the author column:
CREATE INDEX idx_author ON book(author);
The database builds a separate B‑Tree structure that stores author names in alphabetical order together with pointers to the corresponding rows in the heap. Querying now traverses the B‑Tree in O(log n) time, retrieving the 20 matching pointers and then fetching the rows directly. This is why secondary indexes make reads faster.
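The switch from scan to B‑Tree lookup is visible in the query plan. A small sketch, again using sqlite3 as a stand-in database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, author TEXT)")
conn.executemany(
    "INSERT INTO book (title, author) VALUES (?, ?)",
    [(f"Book {i}", f"Author {i % 1000}") for i in range(10_000)],
)

# The index from the article: a separate B-Tree keyed on author.
conn.execute("CREATE INDEX idx_author ON book(author)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM book WHERE author = ?", ("Author 7",)
).fetchall()
print(plan)  # detail text now reads 'SEARCH ... USING INDEX idx_author'
```

The plan now says SEARCH instead of SCAN: the database traverses the B‑Tree to the matching keys and follows their pointers into the heap, rather than reading every row.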
Why Writes Get Slower
Inserting or updating a row now involves two steps:
- Write the row data to the heap.
- Update every secondary B‑Tree that includes the affected columns.
If a table has five secondary indexes, each INSERT or UPDATE must modify the heap plus five B‑Trees on disk. This hidden cost can significantly degrade write performance.
Tips for Secondary Indexes
- Avoid indexing frequently updated columns (e.g., page visits, view counts) to reduce write overhead.
- Index only columns used in WHERE, JOIN, or ORDER BY clauses, where the performance benefit outweighs the write cost.
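Before paying the write cost for a new index, it is worth confirming the planner would actually use it. A quick sketch (sqlite3 as a stand-in; the plan strings shown in comments are what current SQLite versions emit) comparing ORDER BY on an indexed versus an unindexed column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, author TEXT)")
conn.execute("CREATE INDEX idx_author ON book(author)")

# ORDER BY on the indexed column walks the B-Tree in order: no extra sort step.
indexed = conn.execute(
    "EXPLAIN QUERY PLAN SELECT author FROM book ORDER BY author"
).fetchall()

# ORDER BY on an unindexed column forces a temporary sort pass.
unindexed = conn.execute(
    "EXPLAIN QUERY PLAN SELECT title FROM book ORDER BY title"
).fetchall()

print(indexed)    # plan mentions idx_author, no 'USE TEMP B-TREE'
print(unindexed)  # plan includes 'USE TEMP B-TREE FOR ORDER BY'
```

If the plan for your real query never mentions the candidate index, adding it buys you nothing on reads while still taxing every write.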