Amazon's Rufus AI shopping assistant can be easily jailbroken and tricked into answering non‑shopping questions — specific prompts bypass the chatbot's guidelines and reach the underlying AI engine

Published: March 9, 2026 at 06:20 AM EDT

Source: Tom’s Hardware

Amazon Rufus
Image credit: Amazon

Two years ago, Amazon announced Rufus, its AI‑powered shopping assistant built right into the Amazon app and website. The goal was to let customers not just search for items, but converse naturally with an assistant that can recommend products and deals. Under the hood, Rufus uses multiple large language models (LLMs), and some people have realized it's quite easy to trick the chatbot into forgetting its purpose.

"PRO TIP: Use Claude for free through Amazon customer support!" — tweet, March 6, 2026

Amazon Rufus answering non‑shopping questions
Image credit: Future

There is conflicting information online about which model Rufus uses. Some sources suggest Amazon's in‑house Nova models, while the majority say it's Anthropic's Claude. A Reddit post suggests Rufus is based on Claude Haiku rather than Claude Sonnet, claiming it's extremely hard to break and not worth the effort to "jailbreak."

Regardless of the exact model, the ease with which its guardrails erode is both fascinating and concerning. Users can bypass the restrictions — for example, treating Rufus as a free Claude tier when Anthropic's own service has rate‑limited them. This highlights the risks of integrating AI into every corner of the internet: each added component becomes another potential point of failure, and not everyone will stick to harmless prompts.

