How I built a high-performance Social API with Bun & ElysiaJS on a $5 VPS (handling 3.6k reqs/min)
Source: Dev.to
The Goal
I wanted to build a Micro‑Social API—a backend service capable of handling Twitter‑like feeds, follows, and likes—without breaking the bank.
- Budget: $5 – $20 / month
- Performance: Sub‑300 ms latency
- Scale: Must handle concurrent load (stress testing)
Most tutorials stop at Hello World. This post shows what happens when you actually hit that Hello World with 25 concurrent users on a cheap VPS (spoiler: it starts rejecting almost everyone). Here’s how I fixed it.
The Stack 🛠️
- Runtime: Bun
- Framework: ElysiaJS (one of the fastest Bun‑native frameworks)
- Database: PostgreSQL (via Dokploy)
- ORM: Drizzle (lightweight & type‑safe)
- Hosting: VPS with Dokploy (Docker Compose)
The “Oh Sh*t” Moment 🚨
I deployed my first version and it worked fine for me. Then I ran a load test using k6 to simulate 25 virtual users browsing various feeds.
k6 run tests/stress-test.js
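The test script itself isn't shown in the post; a minimal k6 script for this kind of feed-browsing test might look like the sketch below. The endpoint path, port, and VU ramp are my assumptions, not the author's actual script.

```javascript
// tests/stress-test.js — hypothetical sketch of the load test
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  vus: 25,          // 25 concurrent virtual users, as in the post
  duration: "60s",  // browse for one minute
};

export default function () {
  // Hypothetical feed endpoint; substitute your own route.
  const res = http.get("http://localhost:3000/feed");
  check(res, { "status is 200": (r) => r.status === 200 });
  sleep(1); // think time between page loads
}
```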
Result
✗ http_req_failed................: 86.44%
✗ status is 429..................: 86.44%
The server wasn’t crashing, but it was rejecting almost everyone.
Diagnosis
I initially blamed Traefik (the reverse proxy). Digging into the code, I found the culprit was me.
// src/index.ts
// OLD CONFIGURATION
.use(rateLimit({
  duration: 60_000,
  max: 100 // 💀 100 requests per minute... GLOBAL per IP?
}))
Since my stress test (and likely any NATed corporate office) sent all requests from a single IP, I was essentially DDoS‑ing myself.
The Fixes 🔧
1. Tuning the Rate Limiter
I bumped the limit to 2,500 req/min. This still curbs abuse while allowing heavy legitimate traffic, such as many users behind a shared IP or a load balancer.
// src/index.ts
.use(rateLimit({
  duration: 60_000,
  max: 2500 // Much better for standard reliable APIs
}))
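To make the failure mode concrete, here is a self-contained sketch of the fixed-window, per-key counting that the plugin's `duration`/`max` options describe. This is illustrative only; the real `elysia-rate-limit` plugin handles this internally.

```typescript
// Minimal fixed-window rate limiter, keyed by client IP (sketch).
class FixedWindowLimiter {
  private counts = new Map<string, { windowStart: number; hits: number }>();

  constructor(
    private max: number,        // allowed requests per window
    private durationMs: number, // window length, e.g. 60_000
  ) {}

  // Returns true if the request is allowed, false if it should get a 429.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(key);
    if (!entry || now - entry.windowStart >= this.durationMs) {
      this.counts.set(key, { windowStart: now, hits: 1 });
      return true;
    }
    entry.hits += 1;
    return entry.hits <= this.max;
  }
}

// With max=100, only 100 of 120 same-minute requests from one IP get through:
const limiter = new FixedWindowLimiter(100, 60_000);
let allowed = 0;
for (let i = 0; i < 120; i++) {
  if (limiter.allow("203.0.113.7", 0)) allowed++; // all in one window
}
console.log(allowed); // → 100
```

This is exactly why 25 virtual users hammering from one IP blew past 100 req/min almost immediately.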
2. Database Connection Pooling
The default client‑side pool size is often small (postgres.js defaults to 10 connections). Each connection costs Postgres some RAM, but with 4 GB on the VPS there was headroom to spare, so I raised the pool to 80 connections. (One caveat: with multiple app replicas, the combined pools must stay below Postgres’s max_connections, which defaults to 100.)
// src/db/index.ts
const client = postgres(process.env.DATABASE_URL, {
  max: 80
});
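Why does the pool size throttle throughput? A pool is essentially a semaphore over connections: once all connections are checked out, the next query just waits. Here is a stripped-down sketch of that behavior (illustrative only; postgres.js manages its own pool via `max`):

```typescript
// Toy connection pool: request N+1 queues until a connection is released.
class Pool<T> {
  private idle: T[];
  private waiters: ((conn: T) => void)[] = [];

  constructor(conns: T[]) {
    this.idle = [...conns];
  }

  acquire(): Promise<T> {
    const conn = this.idle.pop();
    if (conn !== undefined) return Promise.resolve(conn);
    return new Promise((resolve) => this.waiters.push(resolve)); // queue up
  }

  release(conn: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(conn); // hand straight to the next waiter
    else this.idle.push(conn);
  }
}

// With 2 connections and 3 concurrent queries, the third one stalls:
async function demo() {
  const pool = new Pool(["c1", "c2"]);
  const a = await pool.acquire();
  const b = await pool.acquire();
  let thirdServed = false;
  pool.acquire().then(() => { thirdServed = true; });
  await Promise.resolve();  // give the pending acquire a chance
  console.log(thirdServed); // still false: pool is exhausted
  pool.release(a);
  await Promise.resolve();
  console.log(thirdServed); // true: freed connection unblocked it
}
demo();
```

With only 10 connections and dozens of concurrent feed queries, that waiting queue is where latency goes to die; 80 connections keeps it empty under this workload.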
3. Horizontal Scaling with Docker
JavaScript in Bun (as in Node) runs on a single thread, so one container effectively saturates only one CPU core. My VPS has 2 vCPUs, so I added a replicas instruction to docker-compose.dokploy.yml:
api:
  build: .
  restart: always
  deploy:
    replicas: 2 # One for each core!
Traefik automatically load‑balances between the two containers, instantly doubling throughput capacity.
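Traefik's default strategy here is round-robin: each incoming request goes to the next replica in turn. The mechanism is simple enough to sketch in a few lines (the replica names are made up for illustration):

```typescript
// Round-robin balancing across replicas, as Traefik does by default.
function makeRoundRobin<T>(backends: T[]): () => T {
  let i = 0;
  return () => backends[i++ % backends.length]; // cycle through backends
}

const next = makeRoundRobin(["api-replica-1", "api-replica-2"]);
console.log(next()); // → api-replica-1
console.log(next()); // → api-replica-2
console.log(next()); // → api-replica-1
```

Because each replica pins its own core, alternating requests between them is what turns 2 vCPUs into roughly 2× the throughput.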
The Final Result 🟢
Running k6 again:
k6 run tests/stress-test.js
Outcome
✓ checks_succeeded...: 100.00%
✓ http_req_duration..: p(95)=200.45ms
✓ http_req_failed....: 0.00% (excluding auth checks)
0 errors, ~200 ms latency on a cheap VPS.
Takeaway
You don’t need Kubernetes for a side project. You just need to understand where your bottlenecks are:
- Application Layer: Check your rate limits.
- Database Layer: Check your connection pool.
- Hardware: Use all your cores (replicas).
If you want to try the API, it’s published on RapidAPI as Micro‑Social API:
https://rapidapi.com/ismamed4/api/micro-social
Happy coding! 🚀