How Many Rs Are There Really In Strawberry? AI Is So Stupid

Published: February 16, 2026 at 10:16 AM EST
3 min read
Source: Dev.to

Introduction

How many Rs are there in the word strawberry? AI can’t tell you—at least not reliably. Screenshots, Reddit threads, and smug tweets show models tripping over simple letters like toddlers. The meme has become a shorthand for a deeper limitation: AI still struggles with basic counting and precise rendering.

Notable Failure Modes

Counting letters

Even in 2025, many models cannot consistently count the Rs in “strawberry.” Asking for a seahorse emoji can send them into an apparent existential crisis.
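The irony is that the task itself is trivial at the character level; language models stumble because they see tokens, not letters. A few lines of Python settle the question for good:

```python
# Counting letters is a one-liner at the character level. LLMs struggle
# because they process subword tokens, not individual characters.
word = "strawberry"
r_count = word.lower().count("r")
print(r_count)  # 3
```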

Rendering objects

AI image generators still fail to render a wine glass that is completely full. The model interpolates from a dataset that rarely contains a perfectly filled glass, leading to unrealistic results.

Emoji handling

Seahorse emojis cause chaos because the internet collectively decided such an emoji existed, even though Unicode never defined one. The model learns it's plausible, inserts it, then realizes it doesn't exist and loops endlessly.

Code generation

AI‑generated code often contains errors. The training data includes Stack Overflow posts, blogs, gists, half‑finished examples, and hacks. Without explicit constraints, the model reproduces the same mistakes humans make.

Why These Glitches Occur

  • Training data bias – Models learn from human‑generated content, inheriting its imperfections.
  • Lack of explicit constraints – Without clear prompts or safety checks, the model follows the most likely pattern, even if it’s wrong.
  • Interpolation, not understanding – When a visual concept (e.g., a perfectly full wine glass) is under‑represented, the model fills the gap with an approximation rather than a correct rendering.

Implications for Developers

  1. Treat AI as an unreliable intern – Prompt heavily, guide explicitly, and never trust output blindly.
  2. Implement multi‑layer validation – Pass results through multiple agents to surface bugs, performance issues, and security concerns.
  3. Design for failure – Assume the system will underperform or outgrow your current solution. Build experiences that absorb failures gracefully.
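A minimal sketch of the multi-layer validation idea, with each "agent" reduced to a plain check function (in practice these might be separate model calls, linters, or security scanners; the function names here are illustrative, not from any specific library):

```python
# Multi-layer validation sketch: run every check and collect all issues,
# rather than trusting the first pass of AI-generated output.
from typing import Callable

def check_not_empty(output: str) -> list[str]:
    # Reject blank or whitespace-only output.
    return [] if output.strip() else ["output is empty"]

def check_no_placeholder(output: str) -> list[str]:
    # Flag half-finished code the model left behind.
    return ["contains TODO placeholder"] if "TODO" in output else []

def validate(output: str, checks: list[Callable[[str], list[str]]]) -> list[str]:
    """Pass the output through every validation layer and aggregate findings."""
    issues: list[str] = []
    for check in checks:
        issues.extend(check(output))
    return issues

generated = "def add(a, b):\n    # TODO implement"
print(validate(generated, [check_not_empty, check_no_placeholder]))
# ['contains TODO placeholder']
```

The point of collecting every issue instead of failing fast is that a reviewer (human or machine) sees the full picture in one pass.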

The broader impact

  • Developer knowledge bases – Stack Overflow traffic has declined as ChatGPT offers faster, contextual answers.
  • Creative content – Music, images, and stock photography are increasingly AI‑generated, blurring the line between human and machine output.
  • Trust erosion – When photos, videos, reviews, or faces can be fabricated, downstream trust in digital media shifts dramatically.

Lessons and Recommendations

  • Convert AI outputs into constrained state machines to enforce safety and correctness.
  • Avoid treating “textbox‑and‑send” as a product strategy; plan for orchestration, monitoring, and rapid iteration.
  • Stay ahead of model churn – The model you used last month may be obsolete today; continuously update your tooling and processes.
  • Focus on robust orchestration – The layer that coordinates AI models must evolve as fast as the models themselves.
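The state-machine recommendation can be sketched in a few lines: map the model's free-form text onto a fixed set of legal transitions and reject everything else, instead of trusting the output directly. (The states and actions below are hypothetical, chosen only to illustrate the pattern.)

```python
# Constrained state machine sketch: AI output is only accepted if it names
# a transition that is legal from the current state.
ALLOWED = {
    "idle":    {"start"},
    "running": {"pause", "stop"},
    "paused":  {"resume", "stop"},
}
NEXT = {"start": "running", "pause": "paused", "resume": "running", "stop": "idle"}

def apply_action(state: str, model_output: str) -> str:
    """Accept the model's suggested action only if it is legal from `state`."""
    action = model_output.strip().lower()
    if action not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal action {action!r} from state {state!r}")
    return NEXT[action]

print(apply_action("idle", "start"))  # running
```

Anything the model says that is not an allowed transition raises an error for the orchestration layer to handle, which is exactly the "design for failure" posture the article argues for.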

Conclusion

Laughing at AI’s mistakes is entertaining, but it can also distract from the real work of building resilient systems. If you’re building with AI, design for failure, assume the technology will outgrow you mid‑flight, and plan accordingly. And maybe, just maybe, stop counting Rs.
