Benchmarking Socket.IO Servers

Published: January 19, 2026 at 10:59 AM EST
4 min read
Source: Dev.to

The Setup

The Contenders

| Label | Runtime | WebSocket server |
| --- | --- | --- |
| node‑ws | Node.js 24.11.1 | ws |
| node‑uws | Node.js 24.11.1 | uWebSockets.js v20.52.0 |
| bun‑ws | Bun 1.3.6 | ws |
| bun‑native | Bun 1.3.6 | @socket.io/bun-engine 0.1.0 |

ws is Socket.IO's default WebSocket server. It's pure JS and reliable, but is it fast? (Spoiler: it isn't.)
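For reference, swapping Socket.IO's default ws server for uWebSockets.js is a small change. A minimal sketch following Socket.IO's documented `attachApp` integration (the port and event names here are placeholders):

```javascript
// Socket.IO on top of uWebSockets.js instead of the default ws server.
// Requires: npm install socket.io uWebSockets.js
const { App } = require("uWebSockets.js");
const { Server } = require("socket.io");

const app = App();
const io = new Server();

// Bind Socket.IO to the uWS app instead of a Node http server.
io.attachApp(app);

io.on("connection", (socket) => {
  socket.on("ping", () => socket.emit("pong"));
});

app.listen(3000, (token) => {
  if (!token) throw new Error("failed to listen on port 3000");
});
```

Everything else (rooms, broadcasts, middleware) works unchanged; only the underlying HTTP/WebSocket layer is swapped.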

The test server is a slightly altered version of the backend of my recent project, Versus Type – a real‑time PvP typing game. I stripped out auth, rate‑limits, and DB calls.

For the load generator I use Artillery with the artillery-engine-socketio-v3 plugin to simulate thousands of concurrent clients connecting via WebSocket and playing the game.

Hardware

| Role | Instance | Specs |
| --- | --- | --- |
| Server | Azure Standard B2als v2 | 2 vCPU, 4 GB RAM, Ubuntu 22.04 LTS |
| Attacker | Azure Standard B4als v2 | 4 vCPU, 8 GB RAM, Ubuntu 22.04 LTS |

Attack Flow

  1. Artillery spawns 4 virtual users per second.
  2. Each user calls /api/pvp/matchmake.
  3. The server runs a matchmaking algorithm and returns a room ID (max 6 players per room).
  4. Users connect via WebSocket, join the room, and receive the game state (the passage).
  5. Server broadcasts a countdown; players wait until it reaches 0.
  6. Users emit a keystroke event at 60 WPM (≈ 1 event / 200 ms).
  7. For every keystroke the server validates the input, updates state, and broadcasts the new state to everyone in the room.
  8. Users send a ping event every second for latency tracking.

The passage is long enough to ensure no game ends before the benchmark finishes.
(Real server also broadcasts system messages and WPM updates each second.)

GitHub repo: (includes server, client, and result data).

The Results

Overall Winner

Node + uWS (blue line) outperformed every other configuration in all metrics except memory usage, where Bun took the lead.

0 – 800 Clients

0‑800 Graph

  • The Bun servers have significantly lower event‑loop lag (~0 ms) than the Node servers.
  • node‑uws is the most stable.
  • The ws servers (both Bun and Node) see p95 latency creep up to 15‑20 ms.
  • node‑uws and bun‑native stay rock‑solid at ≈ 5 ms.

800 – 1 500 Clients

800‑1500 Graph

  • node‑ws explodes – latency spikes around 1 k clients.
  • bun‑ws and bun‑native follow, but later.
  • Event‑loop lag follows the same pattern.
  • CPU hits 100 % for node‑ws at ~1 k clients, bun‑ws at ~1.2 k, bun‑native at ~1.3 k.
  • node‑uws's CPU rises at a similar rate to the others but is still only ≈ 80 % at 1.5 k clients.
  • Throughput becomes unstable for all except node‑uws.
  • Memory: node‑uws shows a strange dip before climbing back; the Bun servers use less memory overall – Bun’s memory management is impressive.

Bottom line: node‑ws cannot handle the load, while node‑uws remains chill with flat latency and low event‑loop lag.

1 500 – 2 100 Clients

1500‑2100 Graph

  • node‑ws, bun‑ws, and bun‑native are effectively dead – latency through the roof.
  • node‑uws still runs at a constant ≈ 80 % CPU with low latency.
  • Note: node‑ws’s p95 latency appears flat for a while (lower than bun‑native) because metrics stopped being recorded; Artillery’s pushgateway shows the last recorded value until a new one arrives.

2 100 – 3 300 Clients

2100‑3300 Graph

  • node‑uws continues to hold steady – CPU stays around 80 %, latency remains low, and the server stays responsive.
  • All other configurations are overwhelmed – CPU at 100 %, massive latency spikes, and frequent crashes.

![Benchmark Graph](https://media2.dev.to/dynamic/image/width=800,height=,fit=scale-down,gravity=auto,format=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj46ux807ur388inzxs51.png)

`node-uws` is the only one still standing, now at ~90‑100 % CPU. Throughput starts to become less stable, and latency slowly creeps up. The server goes dead after ~3 250 clients.

We can say it could handle a solid 3 000‑3 100 concurrent clients just fine – more than double the next best (`bun-native`).

Takeaways

| Configuration | Strengths | Weaknesses |
| --- | --- | --- |
| node‑uws | Best overall performance, stable latency, low event‑loop lag, handles > 3 k clients at ~80 % CPU. | Slightly higher memory usage than Bun. |
| bun‑native | Excellent memory footprint, low event‑loop lag at low client counts. | Cannot sustain high client counts; latency rises sharply after ~1.3 k clients. |
| bun‑ws | Low event‑loop lag early on, good memory usage. | Fails earlier than bun‑native under load. |
| node‑ws | Simple, default option. | Crashes around 1 k clients; high CPU and latency. |

Bottom line: If you need raw performance for a high‑traffic real‑time app, Node + uWebSockets.js is the clear winner. If memory is a primary concern and you're comfortable with Bun, the Bun native engine is a solid secondary choice – just keep an eye on its scalability limits.

Full Graph

Full Graph

CSV files are available here on GitHub.

Bun, What Happened?

It’s a surprise to see bun-native get absolutely destroyed here, because the Bun WebSocket server uses uWebSockets under the hood.

I don't know the exact reason, but @socket.io/bun-engine is still very new (v0.1.0) and may carry inefficiencies and abstraction layers that add overhead.
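For comparison, Bun's raw WebSocket layer – the uWebSockets-based server that @socket.io/bun-engine sits on top of – is exposed directly through `Bun.serve`. A minimal sketch of that underlying API, runnable under Bun only (the port and topic name are placeholders):

```javascript
// Runs under Bun only: bun run server.js
Bun.serve({
  port: 3000,
  fetch(req, server) {
    // Upgrade incoming HTTP requests to WebSocket connections.
    if (server.upgrade(req)) return;
    return new Response("expected a WebSocket upgrade", { status: 400 });
  },
  websocket: {
    open(ws) {
      ws.subscribe("room-1"); // built-in pub/sub, backed by uWebSockets
    },
    message(ws, message) {
      // Broadcast to everyone subscribed to the topic.
      ws.publish("room-1", message);
    },
  },
});
```

Used directly like this, Bun's server is very fast – which makes the gap to bun-native look like overhead added above this layer, not below it.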
