Someone Built Android Malware That Asks Google's AI How to Survive. It Worked.

Published: March 4, 2026 at 07:02 PM EST
4 min read
Source: Dev.to

Overview

A piece of Android malware called PromptSpy does something no malware has done before: it asks Google’s Gemini AI for instructions in real time, then follows them. ESET researcher Lukáš Štefanko disclosed the finding on February 19. PromptSpy is the first known Android malware to integrate a generative AI model into its runtime execution loop—not to generate phishing emails or write code, but to navigate the infected phone.

Mechanism

  1. Screen capture – PromptSpy captures an XML dump of the current screen, including every button, text label, and tap target with exact coordinates.
  2. AI prompt – It sends the dump to Google’s Gemini API together with a natural‑language prompt that assigns the AI the persona of an “Android automation assistant.”
  3. AI response – Gemini replies with JSON instructions such as “tap here,” “swipe there,” or “long‑press this.”
  4. Execution – PromptSpy executes the gestures through Android’s Accessibility Services.
  5. Loop – The malware captures the new screen state and repeats the process until it achieves its objective.

The loop continues until the malware pins itself in the recent‑apps list, preventing the system from killing it and stopping the user from swiping it away.
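The loop described above is the standard observe-decide-act agent pattern, and the decision step boils down to parsing a structured reply from the model. A minimal sketch of that parsing step, using a hypothetical instruction schema (ESET did not publish the exact JSON format PromptSpy expects):

```python
import json

# Hypothetical instruction schema, e.g. {"action": "tap", "x": 540, "y": 1200}.
# The real field names used by the malware were not disclosed.
ALLOWED_ACTIONS = {"tap", "swipe", "long_press"}

def parse_action(reply: str):
    """Validate a model reply and return a (gesture, coordinates) pair."""
    data = json.loads(reply)
    action = data.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unsupported action: {action!r}")
    return action, (data.get("x"), data.get("y"))

print(parse_action('{"action": "tap", "x": 540, "y": 1200}'))
# ('tap', (540, 1200))
```

In the malware, the returned gesture would then be dispatched through Accessibility Services; here the sketch stops at the parse, which is the part that makes the model's free-form output machine-executable.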

Persistence and Capabilities

  • VNC module – Provides attackers with full remote control.
  • Credential theft – Intercepts lock‑screen PINs/passwords and records pattern unlocks as video.
  • Screen capture – Takes screenshots.
  • Uninstall resistance – Overlays transparent rectangles over the uninstall button so that taps never reach it; removal requires rebooting into Safe Mode.

Impact on Android Fragmentation

Traditional Android malware hard‑codes UI interactions, relying on exact pixel coordinates for specific devices, OS versions, languages, and screen sizes. This approach breaks on any variation. PromptSpy eliminates that manual effort:

“Leveraging generative AI enables the threat actors to adapt to more or less any device, layout, or OS version, which can greatly expand the pool of potential victims.” – Štefanko

By delegating UI navigation to Gemini, the malware can operate across Android’s 24,000+ distinct device models.

Distribution and Detection

  • Samples:
    • An earlier version called VNCSpy (three samples) appeared on VirusTotal on January 13, uploaded from Hong Kong.
    • The advanced PromptSpy variant (four samples) was uploaded on February 10 from Argentina.
  • Domain: The distribution domain, now offline, impersonated JPMorgan Chase under the name “MorganArg.” Debug strings in the code are in simplified Chinese.
  • Telemetry: ESET found no infections in its telemetry.
  • Google’s response: “Android users are automatically protected against known versions of this malware by Google Play Protect.” PromptSpy was never on the Play Store.

The Gemini API key is retrieved from a command‑and‑control server rather than being hard‑coded, and the malware stores conversation history, allowing multi‑step interactions that build on previous instructions.
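Retaining conversation history is just a matter of carrying prior turns forward on each request. A rough sketch of that bookkeeping, with an illustrative message structure not taken from the actual sample:

```python
# Sketch of multi-turn prompting: each screen dump and each model reply
# is appended to a running history, so a later request carries context
# from earlier instructions. Message structure is illustrative only.
history = []

def record_turn(role, content):
    history.append({"role": role, "content": content})

record_turn("user", "<XML dump of screen 1>")
record_turn("model", '{"action": "tap", "x": 540, "y": 1200}')
record_turn("user", "<XML dump of screen 2>")

# On the next API call, the full history would be sent along with the
# newest screen dump, letting the model build on its previous steps.
print(len(history))
# 3
```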

ESET also flagged PromptLock, described as the first AI‑powered ransomware payload. NYU students later clarified that PromptLock was a proof‑of‑concept research project. ESET updated its communications but retained the label “first known case of AI‑powered ransomware.”

Significance

PromptSpy demonstrates that AI‑enhanced malware can move beyond theoretical predictions and operate in the wild:

  • Real‑world use of a commercial AI API on actual devices.
  • Distribution infrastructure mimicking a legitimate bank.
  • A single domain registration bridging the gap between proof‑of‑concept and criminal intent.

The generative AI component is a small fraction of PromptSpy’s code, yet it solves the hardest problem in mobile malware: device fragmentation. Gemini can navigate any Android skin, OEM customization, or accessibility setting, enabling a “write once, infect anything” approach.

Outlook

Security researchers have long warned that attackers would weaponize generative AI, focusing on phishing, social engineering, and code generation. PromptSpy shows that the first real‑world weaponization instead uses a chatbot to control an infected phone's UI. The malware does not need Gemini to be malicious; it merely needs Gemini to be helpful.
