SQLite is All You Need: The 'One-Person Stack' for 2026
Source: Dev.to
The Default Stack is Too Heavy
For the last decade, if you ran rails new, you almost immediately swapped the default database for PostgreSQL, then spun up a Redis instance for Sidekiq, and maybe added Elasticsearch for search.
Before you wrote a single line of business logic, you were managing three separate services.
- For a team of 10? Fine.
- For the “One-Person Framework”? It’s technical debt.
With the release of Rails 8 and modern storage hardware, the rules have changed. It’s time to stop treating SQLite like a toy and start treating it like the production‑grade powerhouse it is.
The Hardware Shift: NVMe Changed Everything
Why did we move away from file‑based databases in the 2000s? Spinning hard drives. Random I/O on a spinning disk was excruciatingly slow, and concurrent writes were a nightmare.
Today, even a $5 VPS runs on NVMe SSDs—insanely fast, with ~3 GB/s read speeds and hundreds of thousands of IOPS. When storage is this fast, the bottleneck isn’t the disk; it’s the network.
- Postgres: App → TCP connection → Socket → Postgres process → Disk → Network → App.
- SQLite: App → Direct system call → Disk.
SQLite eliminates the “network tax” by running inside your application process.
The Rails 8 “Solid” Revolution
Rails 8 has gone all‑in on a “database‑backed” architecture. The Rails team realized that managing Redis just for cache and jobs was a hurdle for solo developers, so they introduced the “Solid” trilogy, which lets SQLite handle duties traditionally assigned to Redis.
Solid Queue
Background jobs are stored in a standard SQLite table—no Redis required.
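Under Solid Queue, a background job is still just a plain Active Job class — nothing Redis-specific. A minimal sketch (the job and mailer names here are illustrative, not from the article):

```ruby
# app/jobs/welcome_email_job.rb — job and mailer names are illustrative
class WelcomeEmailJob < ApplicationJob
  queue_as :default

  def perform(user_id)
    user = User.find(user_id)
    UserMailer.welcome(user).deliver_now
  end
end

# With config.active_job.queue_adapter = :solid_queue (the Rails 8
# production default), this enqueues a row in a database table:
WelcomeEmailJob.perform_later(42)
```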
Solid Cache
HTML fragments and data are cached in a SQLite table. Because reads are local (no network latency), this can be faster than a networked Redis instance.
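The calling code is the familiar `Rails.cache` API; only the backing store changes. A sketch (the cache key, expiry, and model are illustrative):

```ruby
# Reads hit a local SQLite table — no network hop to a cache server.
# The key, expiry, and Order model below are illustrative.
stats = Rails.cache.fetch("dashboard/stats", expires_in: 5.minutes) do
  Order.group(:status).count # recomputed only on a cache miss
end
```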
Solid Cable
WebSocket pub/sub is handled via the database.
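Switching to it is a `config/cable.yml` change. The sketch below follows the shape of the Rails 8 generated default; the polling interval and retention values are assumptions, so check your own generated file:

```yaml
# config/cable.yml — values shown are assumptions
production:
  adapter: solid_cable
  connects_to:
    database:
      writing: cable
  polling_interval: 0.1.seconds
  message_retention: 1.day
```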
Your infrastructure diagram collapses from a spiderweb of services into a single server with a single file.
But… “SQLite Can’t Scale,” Right?
The most common objection is that SQLite “doesn’t handle concurrency.” False. Enabling WAL (Write‑Ahead Logging) mode means readers no longer block the writer and the writer no longer blocks readers, which removes most real‑world lock contention.
```yaml
# config/database.yml
production:
  adapter: sqlite3
  database: storage/production.sqlite3
  timeout: 5000
  pool: 5
  # Enable WAL mode for better concurrency
  pragmas:
    journal_mode: wal
    synchronous: normal
```
Unless you’re building the next Twitter, you’re unlikely to hit the write‑lock limit of modern SQLite. Hundreds of requests per second on a modest VPS are feasible. If you do hit that limit, congratulations—you have a successful business that can afford to migrate to Postgres. Don’t optimize for a problem you don’t have yet.
The “N+1” Superpower
We spend hours optimizing N+1 queries in Rails to avoid “chatty” network calls to Postgres.
- Request 1: 10 ms
- Request 2: 10 ms
- …
In SQLite, an N+1 query is just a function call—startlingly fast. Looping through 100 records and querying associated data can finish in ~2 ms total. Moving to SQLite lets you write simpler code because you don’t have to fear the database round‑trip as much.
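A back‑of‑the‑envelope model makes the claim concrete. Both per‑query latencies below are illustrative assumptions, not benchmarks:

```ruby
# Rough cost model for an N+1 pattern: 1 parent query + 100 child queries.
# Both latency figures are illustrative assumptions, not measurements.
NETWORK_ROUND_TRIP_MS = 0.5   # assumed round trip to a networked Postgres
IN_PROCESS_CALL_MS    = 0.02  # assumed cost of an in-process SQLite query

queries = 1 + 100

postgres_ms = queries * NETWORK_ROUND_TRIP_MS  # dominated by network latency
sqlite_ms   = queries * IN_PROCESS_CALL_MS     # roughly 2 ms total

puts format("Postgres: %.1f ms, SQLite: %.2f ms", postgres_ms, sqlite_ms)
```

Under these assumptions the same 101 queries cost ~50 ms against a networked database but only ~2 ms in process — which is why the round‑trip fear that drives aggressive eager loading largely disappears.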
What About Backups? (Crucial!)
“If the server dies, I lose my file.”
This is solved by Litestream (or LiteFS). Litestream runs as a sidecar process, hooks into SQLite’s WAL updates, and streams them in real time to Amazon S3 (or any object storage). If your server catches fire, you can restore the database to the exact second it went down by pulling from S3—arguably easier than a Postgres pg_dump cron job.
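A minimal Litestream setup is one small config file plus a sidecar process; the bucket name and paths below are illustrative:

```yaml
# /etc/litestream.yml — bucket name and paths are illustrative
dbs:
  - path: /app/storage/production.sqlite3
    replicas:
      - url: s3://my-backup-bucket/production
```

Run `litestream replicate` alongside the app to stream WAL segments to S3, and `litestream restore -o storage/production.sqlite3 s3://my-backup-bucket/production` to rebuild the file on a fresh server.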
Summary: The One‑Person Stack
The goal of the solo developer is velocity.
- No Docker containers to orchestrate.
- No connection‑pool errors.
- No Redis version mismatches.
- Just rails server.
Complexity is the mind‑killer. Simplify your stack, and you’ll have more brainpower left over to actually build your product.
Are you brave enough to run SQLite in production? Let me know your thoughts below!