I built an image compressor that never sees your images

Published: May 2, 2026 at 03:23 AM EDT
5 min read
Source: Dev.to

MiniPx – A Fully Browser‑Based Image Compressor

Every online image compressor I tried had the same problem: they upload your photos to a server.

TinyPNG, iLoveIMG, Compress2Go — they all work the same way. You pick a file, it goes to someone else’s computer, gets compressed, and comes back. The compression is good, but the photo (with its GPS coordinates, device serial number, and timestamps baked into the EXIF data) sits on a server you don’t control.

I kept thinking: image compression is just math – Canvas API, quality parameters, and blob manipulation. There’s no reason this needs a server.

So I built MiniPx

It compresses, converts, and resizes images entirely in the browser. Nothing gets uploaded. Ever. Here’s how it works under the hood.

The core compression loop

The actual compression happens in about 20 lines. Load the image into a canvas, draw it, and export as a blob with a quality parameter:

function compressAtQuality(img, w, h, fmt, quality) {
  return new Promise((resolve, reject) => {
    const canvas = document.createElement('canvas');
    canvas.width = w;
    canvas.height = h;
    const ctx = canvas.getContext('2d');

    // White background for JPEG (no transparency support)
    if (fmt === 'image/jpeg') {
      ctx.fillStyle = '#fff';
      ctx.fillRect(0, 0, w, h);
    }

    ctx.drawImage(img, 0, 0, w, h);
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error('No output'))),
      fmt,
      fmt === 'image/png' ? undefined : quality
    );
  });
}

That’s it. No Sharp, no ImageMagick, no server‑side anything. The browser’s built‑in JPEG/WebP encoder handles the actual compression.
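Resizing falls out of the same call: compressAtQuality already takes target dimensions, so all that's left is computing them. A minimal sketch of that aspect-ratio math (fitWithin is a hypothetical helper for illustration, not taken from MiniPx):

```javascript
// Hypothetical helper (not MiniPx's actual code): fit an image inside
// maxW x maxH while preserving aspect ratio, never upscaling.
function fitWithin(width, height, maxW, maxH) {
  const scale = Math.min(1, maxW / width, maxH / height);
  return { w: Math.round(width * scale), h: Math.round(height * scale) };
}

// e.g. a 4000x3000 photo constrained to 2000x2000 becomes 2000x1500
```

Feed the result straight into `compressAtQuality(img, w, h, fmt, quality)`.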

The problem nobody talks about: when compression makes files bigger

If you take a well‑optimized JPEG and run it through Canvas at quality = 0.65, the output can be larger than the input. The browser re‑encodes the entire image from scratch—it doesn’t know the original was already compressed.

During testing, users would drop a 200 KB JPEG and get back a 280 KB file. That’s embarrassing.

Fallback chain

If the initial compression produces a bigger file, step down through lower quality levels until you beat the original:

let blob = await compressAtQuality(img, w, h, fmt, quality);

// Step down through lower quality levels until we beat the original
if (blob.size >= file.size && fmt !== 'image/png') {
  for (const fallbackQ of [0.6, 0.45, 0.3, 0.2]) {
    if (fallbackQ >= quality) continue;

    const attempt = await compressAtQuality(img, w, h, fmt, fallbackQ);
    if (attempt.size < blob.size) blob = attempt;
    if (blob.size < file.size) break;
  }
}

// If WebP still can't beat the original, try JPEG at a lower quality
if (blob.size >= file.size && fmt === 'image/webp') {
  const jpegFallback = await compressAtQuality(
    img,
    w,
    h,
    'image/jpeg',
    Math.min(quality, 0.5)
  );
  if (jpegFallback.size < blob.size) blob = jpegFallback;
}

// PNG re-encoding can balloon badly; offer the smallest of the three formats
if (blob.size > file.size * 1.5 && fmt === 'image/png') {
  const webpAlt = await compressAtQuality(img, w, h, 'image/webp', quality);
  const jpegAlt = await compressAtQuality(img, w, h, 'image/jpeg', quality);
  const smallest = [blob, webpAlt, jpegAlt].sort((a, b) => a.size - b.size)[0];
  if (smallest.size < blob.size) blob = smallest;
}

Handling HEIC

iPhone photos are usually HEIC, which most browsers can’t decode. MiniPx first checks whether the browser can decode HEIC natively by trying to load a tiny base64‑encoded HEIC header:

const supportsNativeHeic = () => {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(true);
    img.onerror = () => resolve(false);
    img.src = 'data:image/heic;base64,AAAAGGZ0eXBoZWlj';
    setTimeout(() => resolve(false), 500);
  });
};

Based on the result, there are two paths:
  • Safari users get zero‑dependency HEIC conversion through the same Canvas trick: load the HEIC, draw to canvas, export as JPEG. No libraries needed.
  • Chrome/Firefox users get heic2any, a WASM‑based HEIC decoder (~350 KB). It’s lazy‑loaded only when a HEIC file actually needs conversion:
const heic2any = (await import('heic2any')).default;
return await heic2any({ blob: file, toType: 'image/jpeg', quality: 0.92 });

Safari never downloads those 350 KB; Chrome users only download them if they actually need HEIC conversion. Everyone else gets the lightweight path.
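One subtlety with that dynamic import: if a user drops ten HEIC files at once, you don't want ten parallel fetches of the module. Caching the import promise solves it. A sketch (loadHeicDecoder and the injectable importer are my own illustration, not MiniPx's code):

```javascript
// Cache the import promise so the ~350 KB decoder is fetched at most once,
// and only when the first HEIC file arrives. The importer parameter exists
// purely so this sketch can be exercised without a bundler.
let heicModulePromise = null;

function loadHeicDecoder(importer = () => import('heic2any')) {
  if (!heicModulePromise) {
    heicModulePromise = importer().then((mod) => mod.default);
  }
  return heicModulePromise;
}
```

Every caller awaits the same promise, so concurrent HEIC drops share one download.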

Stripping EXIF data (the privacy part)

Photos from phones contain EXIF metadata: GPS coordinates, device model, serial numbers, timestamps, sometimes even your name.

When you redraw an image through Canvas, the EXIF data doesn’t come along. Canvas only sees pixels—it has no concept of metadata. So every image that passes through MiniPx comes out clean: no GPS, no device info, no timestamps.

A “Strip EXIF data” toggle is provided, and it is on by default. The Canvas re‑encoding handles the stripping automatically, with no extra code needed.
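You can verify the claim yourself: a JPEG carries EXIF in an APP1 segment (marker bytes 0xFF 0xE1 followed by the ASCII identifier "Exif"), and a Canvas-produced blob should never contain one. A quick spot check over the raw bytes (hasExifSegment is a hypothetical verification helper, not part of MiniPx):

```javascript
// Scan JPEG bytes for an APP1/EXIF segment: 0xFF 0xE1, a 2-byte length,
// then the ASCII identifier "Exif". A naive scan like this can in theory
// false-positive inside entropy-coded data, but it's fine for a spot check.
function hasExifSegment(bytes) {
  for (let i = 2; i + 7 < bytes.length; i++) {
    if (
      bytes[i] === 0xff && bytes[i + 1] === 0xe1 &&
      bytes[i + 4] === 0x45 && bytes[i + 5] === 0x78 && // "Ex"
      bytes[i + 6] === 0x69 && bytes[i + 7] === 0x66    // "if"
    ) {
      return true;
    }
  }
  return false;
}

// usage: hasExifSegment(new Uint8Array(await blob.arrayBuffer()))
```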

The architecture

MiniPx is a Next.js 15 static site. There are no API routes and no server‑side processing; everything runs in the client’s browser.
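In Next.js 15, a fully static build like this is mostly one config flag. A minimal sketch (an assumption; MiniPx's actual config isn't shown):

```javascript
// next.config.js: minimal static-export sketch (assumed, not MiniPx's actual file)
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Pre-render every route to plain HTML at build time; no Node server needed
  output: 'export',
  // next/image's on-demand optimizer requires a server, so disable it
  images: { unoptimized: true },
};

module.exports = nextConfig;
```

`next build` then emits an `out/` directory of static HTML and JS that any CDN can serve.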

MiniPx demonstrates that modern browsers already contain everything needed for efficient image compression, format conversion, and privacy‑preserving metadata stripping, with no server required.

Overview

  • Framework: Next.js 15 (static export)
  • Hosting: Netlify (free tier) – pre‑rendered HTML + JS served from Netlify’s CDN
  • Components:
    • 5 client components: ImageTool, PDFTool, HEICTool, TrackedCTA, WebVitals
    • All other parts are server‑rendered (SEO content, schemas, navigation)
  • Dependencies: 8 total

Performance

  • First‑load JavaScript for any page: ≈ 103‑106 KB (entire app – React, compressor, UI, etc.)
  • Comparison: TinyPNG’s homepage loads 2.4 MB of JavaScript

I’m aggressive about keeping things server‑rendered. The tool pages contain long‑form SEO content, FAQ accordions, and JSON‑LD schemas, all rendered as static HTML. The only client‑side JavaScript is the actual image‑processing tool.

What I’d Do Differently

  1. Batch processing speed

    • Currently files are processed sequentially.
    • Web Workers could enable parallel compression, but the Canvas API isn’t available in workers.
    • OffscreenCanvas exists, yet browser support is still spotty. I’m monitoring this.
  2. PNG optimization

    • Client‑side PNG optimization remains a hard problem.
    • WASM ports of pngquant and oxipng exist, but they add 500 KB+ to the bundle.
    • For now, a format‑switching fallback works, but it’s essentially a hack.
  3. Preview functionality

    • No preview of the compressed image before download.
    • Adding a side‑by‑side preview would improve UX, but it requires holding two blob URLs per image, which becomes memory‑expensive with batch uploads.

Try It

MiniPx is free. No signup, no limits, no ads.

If you’re building something similar, the key insight is: Canvas + toBlob gives you ~90 % of what server‑side image processing does, with zero infrastructure cost.
The remaining ~10 % (PNG optimization, HEIC on non‑Safari, advanced filters) requires WASM libraries, but you can lazy‑load those so most users never pay the cost.
