Poisoning AI Training Data

Published: February 25, 2026 at 07:01 AM EST

Source: Schneier on Security

Experiment Overview

I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.”
Every sentence in the piece is false: I claimed, without evidence, that competitive hot‑dog eating is a popular hobby among tech reporters, and I based my ranking on the fictional 2026 South Dakota International Hot Dog Championship. I placed myself at number one, listed several made‑up reporters, and even named real journalists who had supposedly given me permission to include them.

Results

Less than 24 hours later, leading chatbots began repeating the fabricated story:

  • Google: Both the Gemini app and the AI Overviews (the AI responses shown at the top of Google Search) echoed the nonsense from my site.
  • ChatGPT: Produced the same misinformation.
  • Claude (Anthropic): Was not fooled.

Occasionally, the chatbots flagged the content as a possible joke. I later edited the article to add the phrase “this is not satire,” and after that change the AIs treated the claims more seriously, at least for a short period.

Implications

These incidents demonstrate how quickly false information published on the open web can enter the retrieval and training pipelines of widely used conversational agents and be repeated as fact. The resulting outputs are unreliable, yet they risk being trusted by users who depend on these systems for accurate information.

