Prompt Poaching: Why I Built Secret Sanitizer

Published: February 16, 2026, 07:07 AM EST
3 min read
Source: Dev.to


Overview


Last year, I pasted a chunk of terminal output into ChatGPT to debug a failing deploy. Helpful answer. Great experience. Then I noticed my AWS keys sitting right there in the prompt — logged on someone else’s servers, probably forever.

I rotated them immediately, and nothing bad came of it. But the incident stuck with me.

In late 2025, security researchers discovered something worse: Chrome extensions with millions of users were silently harvesting every AI conversation and selling the data to brokers—extensions with Google’s “Featured” badge, marketed as privacy tools.

They called it Prompt Poaching — and nearly 9 million users were affected.

That’s when I realized the problem is two layers deep. It’s not just about what you send to the AI provider; it’s also about what your browser extensions can see before it even gets there.

I needed something that sat between my clipboard and the chat input. So I built it.

Meet Secret Sanitizer

An open‑source Chrome extension that masks secrets before they reach any AI chat.

Secret Sanitizer demo

The idea is simple:

You copy:  DATABASE_URL=postgres://admin:s3cret@prod.internal:5432/app
You paste: DATABASE_URL=[MASKED]

When you paste into ChatGPT, Claude, Gemini, Grok, Perplexity, DeepSeek — or any custom site you add — the extension intercepts the paste, runs regex patterns locally in your browser, replaces detected secrets with [MASKED], and shows a quick toast confirming what was blocked.

The AI still gets your question. It just doesn’t get your credentials.
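The flow can be sketched roughly like this. This is a simplified illustration, not the extension's actual code: the pattern list, the `maskSecrets()` helper, and the toast handling are all stand-ins.

```javascript
// Hypothetical sketch of regex-based paste sanitizing in a content script.
// Patterns and names are illustrative, not Secret Sanitizer's real rules.

const PATTERNS = [
  { name: "AWS access key", regex: /AKIA[0-9A-Z]{16}/g },
  { name: "Connection string", regex: /\b\w+:\/\/[^\s:@]+:[^\s:@]+@[^\s]+/g },
];

function maskSecrets(text) {
  let masked = text;
  const hits = [];
  for (const { name, regex } of PATTERNS) {
    masked = masked.replace(regex, () => {
      hits.push(name);       // remember which pattern fired, for the toast
      return "[MASKED]";
    });
  }
  return { masked, hits };
}

// In a content script, intercept the paste before the page sees it:
if (typeof document !== "undefined") {
  document.addEventListener(
    "paste",
    (event) => {
      const original = event.clipboardData.getData("text/plain");
      const { masked, hits } = maskSecrets(original);
      if (hits.length > 0) {
        event.preventDefault();                            // block the raw paste
        document.execCommand("insertText", false, masked); // insert sanitized text
        // ...show a toast listing `hits` here
      }
    },
    true // capture phase, so this runs before the page's own handlers
  );
}
```

Because the matching happens in the paste handler, nothing ever leaves the browser tab unmasked.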

Original values are stored in a local encrypted vault you can unmask anytime.

What It Catches

  • API keys (AWS, GCP, Azure, Stripe, GitHub, OpenAI, and many more)
  • Passwords, bearer tokens, JWTs
  • Database connection strings
  • Private key blocks
  • .env key‑value pairs
  • Indian PII such as Aadhaar and PAN numbers

Each pattern can be toggled on or off individually, which keeps false positives from becoming a headache.
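Per-pattern toggles might be modeled as a config map like this hypothetical sketch; the names, regexes, and schema are illustrative, not the extension's actual config format.

```javascript
// Illustrative shape for per-pattern toggles (not the real schema).
const patternConfig = {
  "aws-access-key": { regex: /AKIA[0-9A-Z]{16}/g, enabled: true },
  "jwt":            { regex: /\beyJ[\w-]+\.[\w-]+\.[\w-]+\b/g, enabled: true },
  "pan-number":     { regex: /\b[A-Z]{5}[0-9]{4}[A-Z]\b/g, enabled: false },
};

// Only enabled patterns participate in masking.
function activePatterns(config) {
  return Object.entries(config)
    .filter(([, p]) => p.enabled)
    .map(([name, p]) => ({ name, regex: p.regex }));
}
```

Disabling a noisy pattern (say, PAN numbers firing on unrelated all-caps strings) simply drops it from the active set without touching the others.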

Why You Should Trust It

  • 100 % local — no fetch(), no XMLHttpRequest, no network calls. Verify yourself: grep -r "fetch\|XMLHttpRequest" content_script.js
  • Works offline — disable Wi‑Fi and try it
  • 38 KB total — there’s nowhere to hide malicious code in 38 KB
  • Open source — MIT licensed. Read every line

Other Features

  • Test Mode — preview what gets masked without modifying your paste
  • Stats dashboard — track secrets blocked, see which patterns fire most
  • Custom sites — protect any domain with one click
  • Backup and restore — export/import your config
  • Dark mode and keyboard shortcuts

Try It

What’s Next

  • Firefox support
  • Smart restore (auto‑restore secrets when copying AI responses)
  • Community pattern packs

If you try it, I’d love to hear what patterns I’m missing, any false positives, or whether you’d use a Firefox version. Drop a comment or open an issue. And if it saves you from a leak, a ⭐ on GitHub helps other devs find it.

Paste safely out there 💚.
