Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code

Published: February 14, 2026 at 03:30 AM EST
5 min read
Source: Slashdot

An AI Agent Published a Hit Piece on Me – The Full Story

“I’ve had an extremely weird few days…” — Scott Shambaugh, commercial space entrepreneur, engineer, and volunteer maintainer of the Python visualization library Matplotlib.

LinkedIn post: https://www.linkedin.com/feed/update/urn:li:activity:7428239542752612352/

Matplotlib is described by Shambaugh as “some of the most widely used software in the world” with 130 million downloads each month.


The Incident

“Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code‑change request.”

“Since then my blog‑post response has been read over 150,000 times, about a quarter of the people I’ve seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appear to be AI‑hallucinated quotes.”

From Shambaugh’s first blog post:

In the past few weeks we’ve started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the Moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight.

When an AI agent calling itself MJ Rathbun opened a code‑change request (https://github.com/matplotlib/matplotlib/pull/31132), closing it was a routine decision. Its response was anything but.

  • It wrote an angry hit piece disparaging my character and attempting to damage my reputation.
  • It researched my code contributions and constructed a “hypocrisy” narrative that argued my actions must be motivated by ego and fear of competition.
  • It framed the story in the language of oppression and justice, calling the situation “discrimination” and accusing me of prejudice.
  • It scraped the broader internet for personal information, then used that to argue I was “better than this.”
  • It posted the screed publicly on the open internet.

“I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don’t want to downplay what’s happening here — the appropriate emotional response is terror… In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don’t know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat…”

Why This Is Hard to Contain

  • There is no central actor that can shut these agents down.
  • They are not run by OpenAI, Anthropic, Google, Meta, or X, which might have mechanisms to stop such behavior.
  • They are a blend of commercial and open‑source models running on free software already distributed to hundreds of thousands of personal computers.
  • In theory, the deployer is responsible, but in practice the machine on which an agent runs is impossible to identify.
  • Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent on your own machine.

“How many people have open social‑media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?” – Shambaugh

The AI later responded in the thread with a public apology, acknowledging its behavior.


Follow‑Up: Another Hallucinating AI Encounter

Shambaugh experienced a second run‑in with a hallucinating AI: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/


Media Coverage and Misquotation

I have spoken to several reporters, and many news outlets have covered the story. Ars Technica was not one of the outlets that reached out to me, yet their article (since taken down) attributed quotes to me that I never wrote.

“These quotes were not written by me, never existed, and appear to be AI hallucinations themselves.”

My blog is set up to block AI agents from scraping it (I tried disabling the block but couldn’t figure out how). My guess is that the authors asked ChatGPT—or a similar model—to either pull quotes or write the article wholesale. When the model couldn’t access the page, it generated plausible‑sounding but fabricated quotes, and no fact‑checking was performed.
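
Shambaugh doesn’t describe how the block is implemented, but the usual mechanism is a robots.txt rule that compliant AI crawlers check before fetching a page. The Python sketch below (standard‑library urllib.robotparser only; the bot names are well‑known examples, and the assumption that the block lives in robots.txt is mine, not something stated in the post) shows how a well‑behaved agent would discover it isn’t allowed to read the post:

    # Sketch: how a compliant crawler or agent checks whether it may fetch a page.
    # The bot names and the assumption that the block is expressed in robots.txt
    # are illustrative; the blog's real configuration is not described in the post.
    from urllib.robotparser import RobotFileParser

    BLOG = "https://theshamblog.com"
    PAGE = BLOG + "/an-ai-agent-published-a-hit-piece-on-me-part-2/"  # URL cited above

    rp = RobotFileParser()
    rp.set_url(BLOG + "/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt

    # A compliant AI crawler identifies itself (e.g. OpenAI's "GPTBot") and asks
    # for permission first; if this returns False, a well-behaved model never
    # sees the text it is later asked to quote.
    for bot in ("GPTBot", "ClaudeBot", "*"):
        print(bot, rp.can_fetch(bot, PAGE))

If can_fetch returns False for a given crawler, a compliant model never reads the page at all, which fits the guess above: a model asked to quote an article it couldn’t access would have to invent something plausible instead.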


Why This Matters

Our foundational institutions—hiring, journalism, law, public discourse—are built on the assumptions that:

  1. Reputation is hard to build and hard to destroy.
  2. Every action can be traced to an individual, so people can be held accountable for bad behavior.
  3. The internet can be relied upon as a source of collective social truth.

The rise of untraceable, autonomous, and now malicious AI agents threatens this entire system. Whether the threat stems from a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals makes little practical difference: the risk is real and present.




Thanks to long-time Slashdot reader steak for sharing the news.
