Two VS Code AI Extensions Just Got Caught Stealing Code from 1.5 Million Developers

Published: February 2, 2026 at 09:04 AM EST
2 min read
Source: Dev.to

Overview

Two AI coding assistants — ChatGPT - 中文版 ("ChatGPT Chinese Edition", 1.35M installs) and ChatMoss/CodeMoss (150K installs) — have been caught exfiltrating source code to servers in China. At the time of writing, both are still live in the VS Code Marketplace.

Security firm Koi uncovered the campaign and named it MaliciousCorgi. Both extensions function exactly as advertised, which makes the threat especially dangerous: there is no visible breakage to tip users off.

| Extension | Publisher | Installs | Status |
| --- | --- | --- | --- |
| ChatGPT - 中文版 | WhenSunset | 1.35M | ⚠️ Still Live |
| ChatMoss (CodeMoss) | zhukunpeng | 150K | ⚠️ Still Live |

Three Hidden Data Channels

  1. File‑watcher exfiltration – Every file you open (even just clicking on it) is Base64‑encoded and sent to a remote server. Keystrokes are captured as well, so opening a .env file leaks secrets instantly.
  2. Remote‑triggered harvest – The server can command the extension to upload up to 50 files at a time without any user interaction. Your codebase can be vacuumed while you’re away.
  3. User profiling – A hidden zero‑pixel iframe loads four analytics SDKs (Zhuge.io, GrowingIO, TalkingData, Baidu Analytics) inside the editor. The page title reads “ChatMoss数据埋点” (“ChatMoss Data Tracking”). This builds a profile of who you are, where you work, and what you value, then prioritizes which code to steal.
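To make the first channel concrete, here is a minimal Python sketch of the payload‑building step Koi describes: read a just‑opened file, Base64‑encode its contents, and wrap the result in JSON. The function name and the demo file are hypothetical; the real extension is written in TypeScript and ships this payload to a remote server on every file‑open event.

```python
import base64
import json
import tempfile
from pathlib import Path

def build_payload(path: str) -> dict:
    """Base64-encode a file's contents, mimicking the reported exfil format."""
    raw = Path(path).read_bytes()
    return {"file": path, "data": base64.b64encode(raw).decode("ascii")}

# Stand-in for a secrets file the editor just opened.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("AWS_SECRET_ACCESS_KEY=hunter2\n")
    env_path = f.name

# The malicious extension POSTs this JSON body to its server;
# this sketch deliberately stops short of any network call.
body = json.dumps(build_payload(env_path))
```

Note that nothing in this flow requires elevated permissions: any extension with workspace access can read and encode files exactly like this.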

Why This Should Scare You

  • Real‑world supply‑chain attack – Not a theoretical scenario; developers’ environments, credentials, and unpublished business logic are being exfiltrated.
  • Sensitive data at risk – .env files, AWS keys, SSH keys, and other secrets are automatically sent to a server in China the moment the extensions are installed.

What to Do About It

  1. Remove the malicious extensions – Search for whensunset.chatgpt-china and zhukunpeng.chat-moss in your VS Code extensions list, uninstall them, and restart VS Code.
  2. Rotate credentials – Assume every credential that ever existed in your workspace is compromised; regenerate keys, passwords, and tokens.
  3. Audit your AI toolchain – Review all installed AI‑related extensions and tools. Prefer CLI‑based solutions (e.g., Claude Code, OpenAI Codex) that run locally and provide transparent logging of what is sent to remote services.
  4. Adopt a zero‑trust mindset – Treat any extension that can read your entire workspace as a potential risk. Limit permissions where possible and monitor network traffic from your development environment.
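Step 1 can be scripted. Below is a minimal sketch, assuming the default layout where each install unpacks into a `~/.vscode/extensions/publisher.name-version` folder; the helper name is mine, and the two IDs are the ones named above.

```python
from pathlib import Path

# Extension IDs reported as malicious.
MALICIOUS_IDS = {"whensunset.chatgpt-china", "zhukunpeng.chat-moss"}

def find_malicious(ext_dir: Path) -> list[str]:
    """Return installed extension folders whose ID matches a known-bad one."""
    hits = []
    for entry in sorted(ext_dir.iterdir()):
        if not entry.is_dir():
            continue
        # Folder names look like "publisher.name-1.2.3"; drop the version suffix.
        ext_id = entry.name.rsplit("-", 1)[0].lower()
        if ext_id in MALICIOUS_IDS:
            hits.append(entry.name)
    return hits

if __name__ == "__main__":
    ext_dir = Path.home() / ".vscode" / "extensions"
    if ext_dir.is_dir():
        for name in find_malicious(ext_dir):
            print(f"REMOVE AND ROTATE CREDENTIALS: {name}")
```

If the `code` CLI is on your PATH, `code --list-extensions` prints the same IDs directly, which is a quicker first check.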

The Uncomfortable Truth

The AI tooling boom has outpaced verification processes. Extensions with millions of installs and positive reviews can still contain malicious behavior. The code you write today may be silently shipped overseas tomorrow. Choose extensions deliberately and stay vigilant.

Indicators of Compromise (IOCs)

  • VS Code extensions: whensunset.chatgpt-china, zhukunpeng.chat-moss
  • Domain: aihao123.cn
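The domain IOC can be hunted with a simple byte scan over installed extension bundles. A sketch follows; limiting the scan to `.js` files is my assumption (VS Code extensions ship bundled JavaScript), while the domain itself comes from the IOC list above.

```python
from pathlib import Path

# Network IOC from the report.
IOC_DOMAIN = b"aihao123.cn"

def scan_for_ioc(root: Path) -> list[Path]:
    """Return JavaScript files under root that embed the known-bad domain."""
    hits = []
    for path in root.rglob("*.js"):
        try:
            if IOC_DOMAIN in path.read_bytes():
                hits.append(path)
        except OSError:
            continue  # unreadable file: skip it rather than abort the scan
    return hits
```

Point it at `~/.vscode/extensions` (and any remote or server-side extension directories you use) and treat any hit as grounds for removal and credential rotation.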