Benchmarking Socket.IO Servers
Source: Dev.to
The Setup
The Contenders
| Label | Runtime | WebSocket server |
|---|---|---|
| node‑ws | Node.js 24.11.1 | ws |
| node‑uws | Node.js 24.11.1 | uWebSockets.js v20.52.0 |
| bun‑ws | Bun 1.3.6 | ws |
| bun‑native | Bun 1.3.6 | @socket.io/bun-engine 0.1.0 |
ws is the default Socket.IO engine. It’s pure JS and reliable – but is it fast? (Spoiler: it isn’t.)
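For reference, swapping the default engine for uWebSockets.js is a small wiring change. A minimal sketch based on Socket.IO’s documented `attachApp()` API (the port and event names here are illustrative, not the benchmark server’s actual code):

```javascript
const { App } = require("uWebSockets.js");
const { Server } = require("socket.io");

const app = App();
const io = new Server();

// Bind Socket.IO to the uWS app instead of Node's built-in HTTP server.
io.attachApp(app);

io.on("connection", (socket) => {
  socket.on("ping", () => socket.emit("pong"));
});

app.listen(3000, (token) => {
  if (!token) throw new Error("failed to listen on port 3000");
});
```

The rest of the application code stays the same, which is what makes the four configurations directly comparable.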
The test server is a slightly altered version of the backend of my recent project, Versus Type – a real‑time PvP typing game. I stripped out auth, rate‑limits, and DB calls.
For the load generator I use Artillery with the artillery-engine-socketio-v3 plugin to simulate thousands of concurrent clients connecting via WebSocket and playing the game.
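A scenario along these lines (target, channel names, and loop count are hypothetical; the exact YAML accepted by artillery-engine-socketio-v3 may differ) looks roughly like:

```yaml
config:
  target: "http://server:3000"
  phases:
    - duration: 600
      arrivalRate: 4        # 4 virtual users per second
  engines:
    socketio-v3: {}

scenarios:
  - engine: socketio-v3
    flow:
      - emit:
          channel: "join-room"
          data: "{{ roomId }}"
      - loop:
          - emit:
              channel: "keystroke"
              data: "a"
          - think: 0.2      # ~60 WPM => one keystroke every 200 ms
        count: 100
```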
Hardware
| Role | Instance | Specs |
|---|---|---|
| Server | AWS Standard B2als v2 | 2 vCPU, 4 GB RAM, Ubuntu 22.04 LTS |
| Attacker | AWS Standard B4als v2 | 4 vCPU, 8 GB RAM, Ubuntu 22.04 LTS |
Attack Flow
- Artillery spawns 4 virtual users per second.
- Each user calls `/api/pvp/matchmake`.
- The server runs a matchmaking algorithm and returns a room ID (max 6 players per room).
- Users connect via WebSocket, join the room, and receive the game state (the passage).
- Server broadcasts a countdown; players wait until it reaches 0.
- Users emit a keystroke event at 60 WPM (≈ 1 event / 200 ms).
- For every keystroke the server validates the input, updates state, and broadcasts the new state to everyone in the room.
- Users send a ping event every second for latency tracking.
The passage is long enough to ensure no game ends before the benchmark finishes.
(Real server also broadcasts system messages and WPM updates each second.)
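The matchmaking step in the flow above can be sketched as a simple fill-the-first-open-room policy (my own illustrative version, not the game’s actual algorithm):

```javascript
const MAX_PLAYERS = 6;          // max 6 players per room, as in the benchmark
const rooms = new Map();        // roomId -> array of player ids
let nextRoomId = 1;

// Return a room with a free slot, creating a new one when all rooms are full.
function matchmake(playerId) {
  for (const [roomId, players] of rooms) {
    if (players.length < MAX_PLAYERS) {
      players.push(playerId);
      return roomId;
    }
  }
  const roomId = `room-${nextRoomId++}`;
  rooms.set(roomId, [playerId]);
  return roomId;
}
```

The real server presumably does more (skill matching, cleanup of finished rooms); this only shows why the endpoint is cheap relative to the per-keystroke broadcast load.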
GitHub repo: (includes server, client, and result data).
The Results
Overall Winner
Node + uWS (blue line) outperformed every other configuration in all metrics except memory usage, where Bun took the lead.
0 – 800 Clients
- The Bun servers have significantly lower event‑loop lag (~0 ms) than the Node servers.
- `node-uws` is the most stable.
- The `ws` servers (both Bun and Node) see latency p95 creep up to 15‑20 ms.
- The other two configurations stay rock‑solid at ≈ 5 ms.
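For context, a p95 of 15‑20 ms means 95 % of ping round‑trips completed at or below that value. A nearest‑rank computation over collected samples (illustrative only – not Artillery’s internal method) looks like:

```javascript
// Nearest-rank percentile: the value at or below which p% of samples fall.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const pings = [3, 4, 5, 5, 6, 7, 8, 9, 12, 20]; // latency samples in ms
console.log(percentile(pings, 95)); // → 20
```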
800 – 1 500 Clients
- `node-ws` explodes – latency spikes around 1 k clients. `bun-ws` and `bun-native` follow, but later.
- Event‑loop lag follows the same pattern.
- CPU hits 100 % for `node-ws` at ~1 k clients, `bun-ws` at ~1.2 k, `bun-native` at ~1.3 k. `node-uws` stays at ≈ 80 % CPU even at 1.5 k clients, rising at a similar rate to the others.
- Throughput becomes unstable for all except `node-uws`.
- Memory: `node-uws` shows a strange dip before climbing back; the Bun servers use less memory overall – Bun’s memory management is impressive.
Bottom line: node‑ws cannot handle the load, while node‑uws remains chill with flat latency and low event‑loop lag.
1 500 – 2 100 Clients
- `node-ws`, `bun-ws`, and `bun-native` are effectively dead – latency through the roof.
- `node-uws` still runs at a constant ≈ 80 % CPU with low latency.
- Note: `node-ws`’s p95 latency appears flat for a while (lower than `bun-native`) because metrics stopped being recorded; Artillery’s pushgateway shows the last recorded value until a new one arrives.
2 100 – 3 300 Clients
- `node-uws` continues to hold steady – CPU stays around 80 %, latency remains low, and the server stays responsive.
- All other configurations are overwhelmed – CPU at 100 %, massive latency spikes, and frequent crashes.
`node-uws` is the only one still standing, now at ~90‑100 % CPU. Throughput starts to become less stable, and latency slowly creeps up; it finally goes dead after ~3 250 clients. We can say it handled a solid 3 000‑3 100 concurrent clients just fine – more than double the next best (`bun-native`).
Takeaways
| Configuration | Strengths | Weaknesses |
|---|---|---|
| node‑uws | Best overall performance, stable latency, low event‑loop lag, handles > 3 k clients with ~80 % CPU. | Slightly higher memory usage than Bun. |
| bun‑native | Excellent memory footprint, low event‑loop lag at low client counts. | Cannot sustain high client counts; latency rises sharply after ~1.3 k clients. |
| bun‑ws | Low event‑loop lag early on, good memory usage. | Fails earlier than bun‑native under load. |
| node‑ws | Simple, default option. | Crashes around 1 k clients; high CPU and latency. |
Bottom line: If you need raw performance for a high‑traffic real‑time app, Node + uWebSockets.js is the clear winner. If memory is a primary concern and you’re comfortable with Bun, the Bun native engine is a solid secondary choice – just keep an eye on its scalability limits.
Full Graph
CSV files are available here on GitHub.
Bun, What Happened?
It’s a surprise to see `bun-native` get absolutely destroyed here, because Bun’s WebSocket server uses uWebSockets under the hood.
I don’t know the exact reason, but @socket.io/bun-engine is still very new (v0.1.0) and may carry inefficiencies and abstraction layers that add overhead.