High-Performance SQLite Reads in a Go Server
Source: Dev.to
Workload Assumptions
These recommendations assume:
- Reads dominate (writes are rare or offline)
- A single server process owns the database
- Multiple goroutines issue concurrent `SELECT`s
- Database size fits largely in RAM or the OS page cache
- Durability across power loss is not critical
If these assumptions change, some trade‑offs below should be reconsidered.
1. Use WAL Mode (Non‑Negotiable)
PRAGMA journal_mode = WAL;
Why this matters
- Writers append to a separate WAL file instead of rewriting pages in place
- Readers never block writers, and writers never block readers
- Readers never block other readers
- Each page read checks the WAL first, then falls back to the main database file
For a read‑heavy workload, WAL effectively turns SQLite into a lock‑free reader engine.
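One way to enable WAL from Go is through the DSN, so every pooled connection starts in the right mode. This sketch assumes the mattn/go-sqlite3 driver, whose DSN accepts `_journal_mode` and `_synchronous` parameters; other drivers use different conventions (modernc.org/sqlite, for example, uses `_pragma=journal_mode(WAL)`):

```go
package main

import (
	"fmt"
	"net/url"
)

// walDSN builds a DSN that asks the driver to enable WAL on every new
// connection. The parameter names are specific to mattn/go-sqlite3
// (an assumption; check your driver's documentation).
func walDSN(path string) string {
	q := url.Values{}
	q.Set("_journal_mode", "WAL")
	q.Set("_synchronous", "NORMAL")
	return fmt.Sprintf("file:%s?%s", path, q.Encode())
}

func main() {
	// With the driver imported as _ "github.com/mattn/go-sqlite3",
	// this DSN would be passed to sql.Open("sqlite3", dsn).
	fmt.Println(walDSN("app.db"))
}
```

Because `journal_mode = WAL` persists in the database file, setting it via the DSN is belt-and-suspenders, but it keeps fresh databases correct from the first connection.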
2. Set synchronous = NORMAL
PRAGMA synchronous = NORMAL;
In WAL mode this is the sweet spot:
- Transactions remain atomic and consistent
- No extra `fsync` on every commit
- Orders of magnitude fewer disk flushes
For read‑heavy systems, latency and throughput usually matter more than durability: after a sudden power loss, NORMAL in WAL mode can lose the most recent commits, but it never corrupts the database.
3. Aggressively Increase Page Cache
PRAGMA cache_size = -65536; -- ~64 MiB per connection
- Negative values mean kilobytes, not pages
- Cache is per connection
- Larger cache reduces page faults and B‑tree traversal cost
A bigger cache directly translates into fewer disk reads and faster scans.
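The negative-value convention is easy to get backwards, so a small helper can make the unit explicit. This is a hypothetical convenience function, not part of any driver API:

```go
package main

import "fmt"

// cacheSizePragma converts a per-connection cache budget in MiB into
// the negative form SQLite expects: negative values are interpreted
// as kibibytes, positive values as a page count.
func cacheSizePragma(mib int) string {
	return fmt.Sprintf("PRAGMA cache_size = %d;", -mib*1024)
}

func main() {
	// A 64 MiB per-connection cache, as recommended above.
	fmt.Println(cacheSizePragma(64))
}
```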
4. Enable Memory‑Mapped I/O (Huge Win)
PRAGMA mmap_size = 20000000000; -- ~20 GB (set larger than the DB file)
Memory‑mapped I/O lets the OS page cache do the heavy lifting:
- No `read()` syscalls per page
- Kernel handles readahead automatically
- Dramatically faster full‑table scans
If your database fits in RAM, this turns SQLite reads into near‑memory‑speed operations.
Rule of thumb: set mmap_size larger than your database file. Note that the effective mapping is capped by the compile-time SQLITE_MAX_MMAP_SIZE limit, so very large values may be silently reduced.
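Sizing the mapping from the actual file keeps it above the database without hardcoding a huge constant. A minimal sketch, assuming you stat the file yourself; `mmapPragma` is a hypothetical helper:

```go
package main

import "fmt"

// mmapPragma derives an mmap_size from the database file size in
// bytes: truncate to whole GiB, then add 2 GiB, so the mapping always
// exceeds the file by at least 1 GiB of growth headroom.
func mmapPragma(fileSize int64) string {
	const gib = int64(1) << 30
	return fmt.Sprintf("PRAGMA mmap_size = %d;", (fileSize/gib+2)*gib)
}

func main() {
	// In practice: info, err := os.Stat("app.db"); mmapPragma(info.Size())
	fmt.Println(mmapPragma(3 << 30)) // for a 3 GiB database file
}
```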
5. Keep Temporary Objects in Memory
PRAGMA temp_store = MEMORY;
Avoids disk I/O for:
- Sorts
- `GROUP BY`
- Temporary indices
- Subquery materialization
For analytical or scan‑heavy queries, this removes a silent but expensive bottleneck.
6. Use Exclusive Locking (Single‑Process Optimization)
PRAGMA locking_mode = EXCLUSIVE;
Benefits
- Fewer filesystem lock/unlock syscalls
- Slightly lower latency per query
- No shared‑memory lock coordination
Safe only when no other process needs database access. Note that SQLite locks per connection, not per process: once this connection takes its exclusive lock, other connections, including ones in the same Go process, are locked out. With a connection pool, either skip this pragma or route all queries through a single shared connection.
7. Allow SQLite to Use Worker Threads
PRAGMA threads = 4;
Allows SQLite up to four auxiliary worker threads:
- Helper threads for sorting large result sets
- Faster large `ORDER BY` and `CREATE INDEX` operations
- Better CPU utilization on multi‑core machines
Note that this does not parallelize ordinary table scans; SQLite uses these threads only for sort‑heavy work.
Pairs well with Go’s goroutine concurrency.
8. Index Like Your Throughput Depends on It (It Does)
No amount of PRAGMA tuning can save bad queries.
Guidelines
- Index every column used in `WHERE`, `JOIN`, and `ORDER BY`
- Avoid `SELECT *` on large tables
- Verify plans using `EXPLAIN QUERY PLAN`
A single missing index can turn a 1 ms read into a 200 ms full scan.
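As a concrete illustration (the table and column names here are hypothetical), an index whose leading columns match the `WHERE` clause lets the planner search instead of scan, which `EXPLAIN QUERY PLAN` confirms:

```sql
-- Hypothetical schema: look up events by user and time range.
CREATE INDEX idx_events_user_ts ON events (user_id, created_at);

EXPLAIN QUERY PLAN
SELECT id, payload FROM events
WHERE user_id = ? AND created_at > ?;
-- The plan should report SEARCH ... USING INDEX idx_events_user_ts
-- rather than SCAN events.
```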
9. Keep Query Planner Stats Fresh
PRAGMA optimize;
When to run it
- After schema changes
- After bulk data imports
- Periodically on long‑lived connections
Ensures SQLite chooses the fastest possible access path.
10. Read‑Only Mode for Extra Safety & Speed
PRAGMA query_only = ON;
Benefits
- Prevents accidental writes (any write statement fails immediately)
- Makes read-only intent explicit and enforceable at the connection level
- Safer operationally
Use when the database is truly immutable at runtime.
Recommended Baseline Configuration
PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
PRAGMA cache_size = -65536;
PRAGMA mmap_size = 20000000000;
PRAGMA temp_store = MEMORY;
PRAGMA locking_mode = EXCLUSIVE;
PRAGMA threads = 4;
PRAGMA query_only = ON;
Apply these once per connection (except journal_mode, which persists).
Go‑Specific Notes
- Use a connection pool (`database/sql`)
- Allow many read connections (they are cheap in WAL mode)
- Reuse prepared statements for hot paths
- Avoid serializing reads unnecessarily
SQLite scales extremely well with concurrent readers when configured correctly.
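The fan-out pattern looks like ordinary Go concurrency. In this sketch, `queryName` is a stub standing in for a real read such as `db.QueryRowContext(ctx, "SELECT name FROM users WHERE id = ?", id)`; `database/sql` hands each goroutine its own pooled connection, and in WAL mode those readers do not block one another:

```go
package main

import (
	"fmt"
	"sync"
)

// queryName is a stand-in for a real SELECT against a pooled *sql.DB.
func queryName(id int) string {
	return fmt.Sprintf("user-%d", id) // stub result
}

func main() {
	var wg sync.WaitGroup
	results := make([]string, 8)
	for i := range results {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Each goroutine issues its read independently; no
			// application-level serialization is needed for reads.
			results[i] = queryName(i)
		}(i)
	}
	wg.Wait()
	fmt.Println(results[0], results[7])
}
```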
Final Takeaway
SQLite is not slow. Misconfigured SQLite is slow.
With WAL, memory‑mapped I/O, proper caching, and sensible durability trade‑offs, SQLite can comfortably serve hundreds to thousands of reads per second from a single file, with minimal memory and operational complexity.
If your workload is read‑heavy and your deployment is simple, SQLite remains one of the most efficient databases you can run.
