Building A Distributed Video Transcoding System with Node.js

Published: December 17, 2025 at 07:52 AM EST
4 min read
Source: Dev.to


Why a Broker?

Brokers are the “hello world” of distributed systems: they are easy to get up and running, and they enforce the hive / master‑node pattern, which scales naturally.

    node ──┐
    node ──┼──► hive / broker ──► client‑facing server ──► client
    node ──┘                                             └──► client

A Pure‑JavaScript Broker for Node.js

// broker.js
import Bunny from "bunnimq";
import path from "path";
import { fileURLToPath } from "url";

Bunny({
  port: 3000,
  DEBUG: true,
  cwd: path.dirname(fileURLToPath(import.meta.url)), // path to the .auth file
  queue: {
    Durable: true,
    MessageExpiry: 60 // 1 minute (in seconds; the example below uses 3600 for one hour)
  }
});

Low‑Level Node.js Optimisations

  • Object → binary compiler
  • SharedArrayBuffers and worker threads

    import { Worker } from "node:worker_threads";

    const buffer = new SharedArrayBuffer(1024); // byteLength is a required argument
    const worker = new Worker("./worker.js");   // workers are spawned from a script file

**Prerequisite:** FFmpeg must be installed and available in your `PATH`. Test it:

    ffmpeg -i img.jpg img.png

A Distributed Video Transcoding Example

Initialise the project

npm init -y && npm i bunnimq bunnimq-driver

Folder structure

ffmpegserver/
  server.js    # ← the hive (broker)
  producer.js  # client‑facing server
  consumer.js  # node servers / workers
  .auth        # credentials for producer and consumer verification (like .env)

.auth

Put your secret credentials here in the form username:password:privileges (see privileges in the repo):

sk:mypassword:4
jane:doeeee:1
john:doees:3

Server (Hive) – server.js

Simple, non‑TLS setup (TLS is supported – see the GitHub repo):

import Bunny from "bunnimq";
import path from "path";
import { fileURLToPath } from "url";

Bunny({
  port: 3000,
  DEBUG: true,
  cwd: path.dirname(fileURLToPath(import.meta.url)), // for .auth file
  queue: {
    Durable: true,
    QueueExpiry: 0,
    MessageExpiry: 3600
  }
});

Producer – producer.js

This is the server that browsers and other clients talk to: it accepts requests and pushes jobs into the hive.

import BunnyMQ from "bunnimq-driver";
import fs from "node:fs/promises";

const bunny = new BunnyMQ({
  port: 3000,
  host: "localhost",
  username: "sk",
  password: "mypassword",
});

Declare the queue (if it doesn’t exist)

bunny.queueDeclare(
  {
    name: "transcode_queue",
    config: {
      QueueExpiry: 60,
      MessageExpiry: 20,
      AckExpiry: 10,
      Durable: true,
      noAck: false,
    },
  },
  (res) => {
    console.log("Queue creation:", res);
  }
);

Publish video‑transcoding jobs

For the demo we read videos from a local folder (replace the path with your own):

async function processVideos() {
  const videos = await fs.readdir(
    "C:/Users/[path to a folder with videos]/Videos/Capcut/test"
  ); // usually a storage bucket link

  for (const video of videos) {
    const job = {
      id: Date.now() + Math.random().toString(36).substring(2),
      input: `C:/Users/[path to a folder with videos]/Videos/Capcut/test/${video}`,
      outputFormat: "webm",
    };

    // put into the queue
    bunny.publish("transcode_queue", JSON.stringify(job), (res) => {
      console.log(`Job ${job.id} published:`, res ? "ok" : "400");
    });
  }
}

processVideos();

Consumer – consumer.js

These are the worker nodes that pull jobs, transcode videos, and acknowledge completion.

import BunnyMQ from "bunnimq-driver";
import { spawn } from "child_process";
import path from "path";

const bunny = new BunnyMQ({
  port: 3000,
  host: "localhost",
  username: "john",
  password: "doees",
});

Consume the transcode_queue

bunny.consume("transcode_queue", async (msg) => {
  console.log("Received message:", msg);

  try {
    const { input, outputFormat } = JSON.parse(msg);

    // Normalise paths
    const absInput = path.resolve(input);
    const output = absInput.replace(/\.[^.]+$/, `.${outputFormat}`);

    console.log(
      `Spawning: ffmpeg -i "${absInput}" -f ${outputFormat} "${output}" -y`
    );

    await new Promise((resolve, reject) => {
      const ffmpeg = spawn(
        "ffmpeg",
        ["-i", absInput, "-f", outputFormat, output, "-y"],
        { shell: true } // helps Windows find ffmpeg.exe
      );

      ffmpeg.on("error", reject);

      // FFmpeg logs to stderr
      ffmpeg.stderr.on("data", (chunk) => {
        process.stderr.write(chunk);
      });

      ffmpeg.on("close", (code) => {
        if (code === 0) {
          console.log(`Transcoding complete: ${output}`);
          // Acknowledge the message
          bunny.Ack((ok) => console.log("Ack sent:", ok));
          resolve();
        } else {
          reject(new Error(`FFmpeg exited with code ${code}`));
        }
      });
    });
  } catch (err) {
    console.error("Failed to process job:", err);
    // Optionally reject / requeue the message here
  }
});

Running the System

  1. Start the broker

    node server.js
  2. Start one or more consumers (in separate terminals)

    node consumer.js
  3. Publish jobs

    node producer.js

You should see each consumer pick up jobs from the queue, invoke FFmpeg, and acknowledge completion. Scaling is as simple as launching additional consumer.js processes on the same or different machines that can reach the broker.

Happy transcoding!
