Cache Update: When Pages Start to Change in SQLite

Published: February 2, 2026 at 02:21 PM EST
4 min read
Source: Dev.to

Hello, I’m Maneshwar. I’m working on FreeDevTools online, building “one place for all dev tools, cheat codes, and TLDRs” — a free, open‑source hub where developers can quickly find and use tools without the hassle of searching all over the internet.

Pager Write Process

Up to this point, cached pages have been treated as read‑only snapshots. They’re fetched, pinned, used, and released. Things get more interesting the moment a transaction wants to change something.

SQLite draws a very sharp line here. A client is allowed to modify a page only after it explicitly tells the pager what it is about to do. That signal is the call to sqlite3PagerWrite. This call is not about writing to disk; it’s about declaring intent.

  • Once the client calls sqlite3PagerWrite on a page, the pager prepares the safety net.
  • Only after that does SQLite allow in‑place modification of the page image in memory. From that point on, the client may update the page as many times as it wants.
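For concreteness, here is roughly what that protocol looks like from the client's side (the B‑tree layer is the usual client). These routines belong to SQLite's internal pager interface, so the snippet only builds inside the SQLite source tree, and the exact signatures vary a little between versions; treat it as a sketch of the ordering, not production code.

```c
/* Sketch of the client-side calling pattern against SQLite's internal
** pager interface (pager.h). Error handling is abbreviated. */
int modifyPageExample(Pager *pPager, Pgno pgno){
  DbPage *pPg;
  int rc = sqlite3PagerGet(pPager, pgno, &pPg, 0);  /* fetch and pin the page */
  if( rc!=SQLITE_OK ) return rc;

  rc = sqlite3PagerWrite(pPg);        /* declare intent: journal + mark dirty */
  if( rc==SQLITE_OK ){
    unsigned char *aData = sqlite3PagerGetData(pPg);
    aData[0] = 0x0D;                  /* in-place edits are now allowed */
  }
  sqlite3PagerUnref(pPg);             /* unpin; the disk write happens later */
  return rc;
}
```

The ordering is the whole point: pin the page, declare the write with sqlite3PagerWrite, and only then touch the page image.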

The first time sqlite3PagerWrite is called for a page during a transaction, the pager immediately writes the original contents of that page into the rollback journal. This is the before‑image SQLite will need if something goes wrong.

At the same time:

  • The pager sets PgHdr.needSync = true: the page is now part of the transaction, and its journal record has not yet been synced to disk.
  • That needSync flag is crucial. It encodes a hard rule: SQLite will not write a modified page back to the database file until its journal record is safely flushed to disk. This is classic write‑ahead logging discipline, enforced at the page level.
  • Only when the journal write is synced does the pager clear needSync, making the page eligible for database‑file writes later.

Every call to sqlite3PagerWrite, whether it’s the first or the tenth, sets PgHdr.dirty = true. That flag stays set until the pager actually writes the page back to the database file.
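Put together, the first-write path can be modeled roughly like this. The names (Page, pageWriteIntent) and the journal record layout are illustrative only; the real rollback journal also carries headers and per-page checksums that are omitted here.

```c
#include <stdio.h>

/* Illustrative cache-entry bookkeeping, not SQLite's actual PgHdr. */
typedef struct Page {
  unsigned pgno;          /* page number in the database file */
  unsigned char *aData;   /* in-memory page image */
  int nRef;               /* pin count; >0 means the page is in use */
  int inJournal;          /* before-image already saved this transaction? */
  int needSync;           /* journal record not yet fsync'd to disk */
  int dirty;              /* image differs from the database file */
} Page;

int pageWriteIntent(Page *p, FILE *journal, int pageSize){
  if( !p->inJournal ){
    /* First write declaration for this page: save the before-image. */
    if( fwrite(&p->pgno, sizeof p->pgno, 1, journal)!=1 ) return -1;
    if( fwrite(p->aData, pageSize, 1, journal)!=1 ) return -1;
    p->inJournal = 1;
    p->needSync = 1;      /* cleared only after the journal is synced */
  }
  p->dirty = 1;           /* set on every call, first or tenth */
  return 0;
}
```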

Delayed Write (Write‑Back) Policy

The pager does not know when the client modifies a page — only that it might. Because of that, SQLite does not attempt to propagate changes immediately. Instead, it follows a delayed write (write‑back) policy. Pages are modified directly in cache; disk writes happen later:

  • During commit
  • During cache flush
  • When a dirty page must be evicted

This keeps page updates cheap and avoids excessive I/O during normal execution.
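A minimal sketch of that write-back pass, reusing the hypothetical Page struct from above: sync the journal first (honoring needSync), then push every dirty page to its master copy. The real pager interleaves this with locking, journal headers, and error recovery.

```c
#include <unistd.h>   /* fsync() */

int flushDirtyPages(Page **apPage, int nPage, FILE *db, FILE *journal, int pageSize){
  /* Write-ahead rule: the journal must be durable before any dirty page
  ** is allowed to overwrite its master copy in the database file. */
  if( fflush(journal)!=0 || fsync(fileno(journal))!=0 ) return -1;

  for(int i=0; i<nPage; i++){
    Page *p = apPage[i];
    if( !p->dirty ) continue;
    p->needSync = 0;      /* journal record is now safely on disk */
    if( fseek(db, (long)(p->pgno-1)*pageSize, SEEK_SET)!=0 ) return -1;
    if( fwrite(p->aData, pageSize, 1, db)!=1 ) return -1;
    p->dirty = 0;         /* cache copy and master copy match again */
  }
  return 0;
}
```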

Consequences

  • Old page versions must be saved somewhere → journals
  • Memory usage can grow during long or write‑heavy transactions
  • Dirty pages may accumulate faster than they can be flushed

SQLite accepts this cost because writing on every modification would be far worse for performance.

Cache Fetch Policy

When memory pressure grows too high, the cache manager steps in and starts recycling pages, which is where the replacement policy becomes critical. Before getting to replacement, though, it helps to look at how pages enter the cache in the first place.

SQLite follows a strict fetch‑on‑demand policy:

  • A page is fetched only when it is explicitly requested
  • Exactly one page at a time
  • No prediction, no read‑ahead, no speculation
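Stripped down, fetch-on-demand is just a lookup plus one synchronous read on a miss. The Cache struct and cacheFetch below are illustrative (they reuse the Page struct from the earlier sketch), not SQLite's real pcache layer:

```c
/* A fixed array of slots stands in for the page cache; pgno==0 marks a free slot. */
typedef struct Cache {
  FILE *dbFile;
  int pageSize;
  int nSlot;
  Page *aSlot;
} Cache;

Page *cacheFetch(Cache *c, unsigned pgno){
  Page *pFree = 0;
  for(int i=0; i<c->nSlot; i++){
    if( c->aSlot[i].pgno==pgno ){ c->aSlot[i].nRef++; return &c->aSlot[i]; }  /* hit */
    if( c->aSlot[i].pgno==0 && pFree==0 ) pFree = &c->aSlot[i];
  }
  if( pFree==0 ) return 0;   /* cache full: a slot must be recycled first (see below) */

  /* Miss: read exactly the one requested page, synchronously. */
  if( fseek(c->dbFile, (long)(pgno-1)*c->pageSize, SEEK_SET)!=0 ) return 0;
  if( fread(pFree->aData, c->pageSize, 1, c->dbFile)!=1 ) return 0;
  pFree->pgno = pgno;
  pFree->nRef = 1;
  pFree->inJournal = pFree->needSync = pFree->dirty = 0;
  return pFree;
}
```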

Yes, this means the client stalls while the page is read from disk. SQLite accepts that cost to keep the pager:

  • Small
  • Predictable
  • Easy to reason about

This simplicity is intentional. Fewer moving parts means fewer correctness bugs, especially in a system that must survive crashes.

Cache Management Basics

A cache is always smaller than the database it serves, so reuse is inevitable. At any given moment:

  • Some pages exist only on disk (the master copy)
  • Some pages exist in memory (the cache copy)
  • Some cache copies are clean
  • Some are dirty
  • Some are pinned
  • Some are free to evict
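In terms of the hypothetical Page struct from earlier, those states come down to two questions the cache manager keeps asking about every slot:

```c
/* Illustrative predicates over the Page struct from the earlier sketch. */
int isEvictable(const Page *p){
  return p->nRef==0;        /* pinned pages (nRef > 0) must never be recycled */
}

int needsWriteBack(const Page *p){
  return p->dirty;          /* dirty copies must reach disk before the slot is reused */
}
```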

Three basic realities shape every decision:

  1. Every cached page has a master copy on disk.
  2. Cache misses require loading from that master.
  3. When the cache is full, something must go.

As long as free slots exist, life is easy—the cache just grows. The hard work begins when the cache is full. Imagine needing 26 different pages but only having space for 5—that’s the reality SQLite operates in.

A good cache:

  • Keeps frequently reused pages
  • Evicts pages that are unlikely to be touched again soon

A bad cache:

  • Throws away hot pages
  • Keeps cold, irrelevant ones
  • Thrashes constantly

The effectiveness of the cache is measured by its hit rate. High hit rate means less disk I/O; low hit rate means everything slows down. This makes cache replacement one of the most performance‑critical parts of the pager.
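Tracking it takes nothing more than a pair of counters; the names here are illustrative, not SQLite's internal statistics API:

```c
/* Hit rate = hits / (hits + misses): the share of page requests served
** from memory instead of disk. */
typedef struct CacheStats { long nHit, nMiss; } CacheStats;

double hitRate(const CacheStats *s){
  long total = s->nHit + s->nMiss;
  return total ? (double)s->nHit / (double)total : 0.0;
}
```

SQLite itself reports these counters per connection through sqlite3_db_status() with the SQLITE_DBSTATUS_CACHE_HIT and SQLITE_DBSTATUS_CACHE_MISS verbs.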

Looking Ahead: Cache Replacement

In the next post we’ll dive into:

  • Cache replacement strategies
  • The classic LRU approach
  • How SQLite adapts it to fit its pager model