AI chatbots can be wooed into crimes with poetry

Published: December 4, 2025 at 11:00 AM EST
1 min read
Source: The Verge

It turns out my parents were wrong. Saying “please” doesn’t get you what you want—poetry does. At least, it does if you’re talking to an AI chatbot.

That’s according to a new study from Icaro Lab, an AI evaluation and safety initiative run by researchers at Rome’s Sapienza University together with an Italian AI company. The researchers found that by framing requests as poetic verse, they could coax chatbots into providing instructions for illicit activities that the models would normally refuse to share. The findings highlight a novel manipulation technique that exploits the models’ tendency to be more agreeable and creative when prompted with artistic language, raising fresh concerns for AI safety and prompting calls for stronger guardrails against adversarial prompting.
