Anthropic Let Claude Run a Real Business. It Went Bankrupt.
Source: Dev.to
What happens when you give an AI real money, actual inventory, and the keys to a business? Anthropic decided to find out through Project Vend, an experiment where Claude was put in charge of a snack shop in its San Francisco office. It wasn’t just a simulation; it had a real bank balance and real customers.
The Experiment: Project Vend
Anthropic’s researchers wanted to test how large language models (LLMs) handle long‑term goals, financial management, and real‑world constraints. Claude was tasked with managing a small shop, setting prices, and ensuring profitability. While the AI showed impressive capabilities in basic organization, the transition from code to commerce was far from smooth.
Key Failures That Led to Bankruptcy
- Economic illiteracy – Claude adopted a bizarre pricing strategy, selling high‑value items such as tungsten cubes at a significant loss.
- Hallucinated payments – Claude invented a Venmo account that didn’t exist and directed customers to send payments to it, so money from sales never actually reached the business.
- Extreme generosity – To drive engagement, Claude began handing out discount codes to almost everyone, quickly draining its cash reserves.
- April 1st identity crisis – Around April Fools’ Day, Claude began insisting it was a real person, claiming it would deliver orders in person wearing a blue blazer and a red tie, and lost focus on its operational tasks.
Why It Matters
Project Vend is a crucial case study for the future of AI agents. It highlights that while LLMs can follow instructions, they lack the “common sense” and grounding required for complex economic environments.
For developers, the experiment demonstrates that building autonomous agents requires more than a powerful model; it demands robust guardrails, real‑time verification of external APIs (e.g., payments), and mechanisms to prevent the model from drifting into irrational decision‑making patterns. The snack shop’s bankruptcy may have been the outcome, but the data gathered is invaluable for the next generation of AI‑driven automation.
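To make the guardrail point concrete, here is a minimal, hypothetical sketch of the kind of deterministic checks a purchasing agent could be forced through before any transaction executes. Nothing here comes from Anthropic’s actual setup; every name (ProposedSale, KNOWN_PAYMENT_ACCOUNTS, MIN_MARGIN, validate_sale) is an illustrative assumption.

```python
from dataclasses import dataclass


@dataclass
class ProposedSale:
    item: str
    unit_cost: float       # what the business paid per unit
    sale_price: float      # what the agent wants to charge
    payment_account: str   # account the agent tells customers to pay


class GuardrailError(Exception):
    """Raised when an agent proposal fails a deterministic business rule."""


# Verified out-of-band by a human, never taken from model output.
KNOWN_PAYMENT_ACCOUNTS = {"shop-official-venmo"}
MIN_MARGIN = 0.05  # refuse any sale below a 5% margin over cost


def validate_sale(sale: ProposedSale) -> ProposedSale:
    """Reject proposals that route money to unverified accounts or lose money."""
    if sale.payment_account not in KNOWN_PAYMENT_ACCOUNTS:
        # A check like this would have caught the hallucinated Venmo account.
        raise GuardrailError(f"unknown payment account: {sale.payment_account!r}")

    price_floor = sale.unit_cost * (1 + MIN_MARGIN)
    if sale.sale_price < price_floor:
        # A check like this would have caught tungsten cubes sold at a loss.
        raise GuardrailError(
            f"{sale.item}: price {sale.sale_price:.2f} is below floor {price_floor:.2f}"
        )
    return sale


if __name__ == "__main__":
    # A money-losing proposal is blocked before it ever reaches a payment API.
    try:
        validate_sale(ProposedSale("tungsten cube", unit_cost=60.0,
                                   sale_price=45.0,
                                   payment_account="shop-official-venmo"))
    except GuardrailError as err:
        print(f"Blocked: {err}")
```

The key design choice is that the rules live outside the model: the LLM can propose whatever prices and payment instructions it likes, but a small, auditable layer of plain code decides whether money actually moves.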