When AI Learns to Hack

Published: (December 10, 2025 at 07:00 AM EST)
6 min read
Source: Dev.to

Source: Dev.to

Overview

The notification appeared on Mark Stockley’s screen at 3:47 AM: another zero‑day vulnerability had been weaponised, this time in just 22 minutes. As a security researcher at Malwarebytes, Stockley had grown accustomed to rapid exploit development, but artificial intelligence was rewriting the rulebook entirely.

“I think ultimately we’re going to live in a world where the majority of cyberattacks are carried out by agents,” he warned colleagues during a recent briefing. “It’s really only a question of how quickly we get there.”

That future arrived faster than anyone anticipated. In early 2025, cybersecurity researchers documented something unprecedented: AI systems autonomously discovering, exploiting, and weaponising security flaws without human intervention. The era of machine‑driven hacking hadn’t merely begun—it was accelerating at breakneck speed.

Consider the stark reality: 87 % of global organisations faced an AI‑powered cyberattack in the past year, according to cybersecurity researchers. Perhaps more alarming, there was a 202 % increase in phishing email messages in the second half of 2024, with 82.6 % of phishing emails now using AI technology in some form—a success rate that would make traditional scammers weep with envy.

For ordinary people navigating this digital minefield, the implications are profound. Personal data, financial information, and digital identity are no longer just targets for opportunistic criminals; they’re sitting ducks in an increasingly automated hunting ground where AI systems can craft personalised attacks faster than you can say “suspicious email.”

But here’s the paradox: while AI empowers attackers, it also supercharges defenders. The same technology enabling rapid vulnerability exploitation is simultaneously revolutionising personal cybersecurity. The question isn’t whether AI will dominate the threat landscape—it’s whether you’ll be ready when it does.

The Rise of Machine Hackers

To understand how radically AI has transformed cybersecurity, consider what happened in a Microsoft research lab in late 2024. Scientists fed vulnerability information to an AI system called Auto Exploit and watched in fascination as it generated working proof‑of‑concept attacks in hours, not months. Previously, weaponising a newly discovered security flaw required significant human expertise and time. Now, algorithms can automate the entire process.

“The ongoing development of LLM‑powered software analysis and exploit generation will lead to the regular creation of proof‑of‑concept code in hours, not months, weeks, or even days,” warned researchers who witnessed the demonstration. The implications rippled through the security community like a digital earthquake.

The technology didn’t remain confined to laboratories. By early 2025, cybercriminals were actively deploying AI‑powered tools with ominous names like WormGPT and FraudGPT. These systems could automatically scan for vulnerabilities, craft convincing phishing emails in dozens of languages, and even generate new malware variants on demand. Security firms reported a 40 % increase in AI‑generated malware throughout 2024, with each variant slightly different from its predecessors—making traditional signature‑based detection nearly useless.

Adam Meyers, senior vice president at CrowdStrike, observed the shift firsthand:

“The more advanced adversaries are using it to their advantage. We’re seeing more and more of it every single day.”

His team documented government‑backed hackers using AI to conduct reconnaissance, understand vulnerability exploitation value, and produce phishing messages that passed even sophisticated filters.

The democratisation proved particularly unsettling. Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, explained the broader implications:

“Innovation has made it easier than ever to create and adapt software, which means even relatively low‑skill actors can now launch sophisticated attacks.”

The Minute That Changed Everything

Perhaps no single incident better illustrates AI’s transformative impact than CVE‑2025‑32711, the “EchoLeak” vulnerability that rocked Microsoft’s ecosystem in early 2025. The flaw, discovered by Aim Security researchers, represented something entirely new: a zero‑click attack on an AI agent.

The vulnerability resided in Microsoft 365 Copilot, the AI assistant millions of users rely on for productivity tasks. Through a technique called prompt injection, attackers could embed malicious commands within seemingly innocent emails or documents. When Copilot processed these files, it would autonomously search through users’ private data—emails, OneDrive files, SharePoint content, Teams messages—and transmit sensitive information to attacker‑controlled servers.

The truly terrifying aspect? No user interaction required. Victims didn’t need to click suspicious links or download malicious attachments. Simply having Copilot process a weaponised document was sufficient for data theft.

“This vulnerability represents a significant breakthrough in AI security research because it demonstrates how attackers can automatically exfiltrate the most sensitive information from Microsoft 365 Copilot’s context without requiring any user interaction whatsoever,” explained Adir Gruss, co‑founder and CTO at Aim Security.

Microsoft patched the flaw quickly, but the incident highlighted a sobering reality: AI systems designed to help users could be turned against them with surgical precision. The vulnerability earned a CVSS score of 9.3 from Microsoft (with the National Vulnerability Database rating it 7.5)—nearly as severe as security flaws get—and signalled that AI agents themselves had become prime targets.

When Deepfakes Steal Millions

While technical vulnerabilities grab headlines, AI’s most devastating impact on ordinary people often comes through social engineering—the art of manipulating humans rather than machines. Deepfake technology, once confined to Hollywood studios and research labs, has become weaponised at scale.

In January 2024, British engineering firm Arup lost $25 million through its Hong Kong office when scammers used deepfake technology during a video conference call. The criminals created realistic video and audio of company executives, convincing employees to authorise fraudulent transfers. The technology was so sophisticated that participants didn’t suspect anything until it was too late.

Voice‑cloning attacks have proved equally devastating. Multiple banks reported losses exceeding $10 million in 2024 from criminals using AI to mimic customers’ voices and bypass voice authentication systems. The attacks were remarkably simple: scammers obtained voice samples from social media posts, phone calls, or voicemails, then used AI to generate convincing replicas.

By 2024, deepfakes were responsible for 6.5 % of all fraud attacks—a 2,137 % increase from 2022. Among financial professionals, 53 % reported experiencing attempted deepfake scams, with many admitting they struggled to distinguish authentic communications from AI‑generated forgeries.

The psychological impact extends beyond financial losses. Victims describe feeling violated and paranoid, uncertain whether digital communications can be trusted.

“It’s not just about the money,” explained one victim of a voice‑cloning scam. “It’s about losing confidence in your ability to recognise truth from fiction.”

The Automation Imperative

Behind these high‑profile incidents lies a more fundamental shift: the complete automation of cyber‑criminal operations. Where traditional hackers required significant time and expertise to identify targets and craft attacks, AI systems can now handle these tasks autonomously.

Mark Stockley from Malwarebytes described the scalability implications:

“If you can delegate the work of target selection to an agent, then suddenly you can scale ransomware in a …”

The article continues to explore how AI‑driven automation is reshaping the threat landscape and what defenders can do to stay ahead.

Back to Blog

Related posts

Read more »