ChatGPT finally knows how many ‘R’s are in ‘strawberry,’ but confident mistakes remain
Source: 9to5Google
ChatGPT’s confident mistakes and the “strawberry” test

Confident mistakes – or lies, if you will – are a common problem with the large language models behind AI chatbots. One long-standing shortcoming of ChatGPT was that it would frequently miscount the number of times the letter “R” appears in the word “strawberry.” As OpenAI tried to take a victory lap over the fix, plenty of other confident mistakes were pointed out in the replies.
For as much as AI chatbots have improved, one of their biggest missteps remains how frequently these “tools” will confidently lie to you. If information is wrong, the chatbot won’t notice, and if you call it out, the AI might dig in its heels and keep getting it wrong while insisting it’s right. This is often highlighted as a danger of these tools, on top of being downright annoying given how many resources AI consumes.
One common example with OpenAI’s ChatGPT is the question of how many times the letter “R” appears in the word “strawberry.”
For quite some time, asking ChatGPT about this would result in the wrong answer, and it would often argue that the word “strawberry” does not use the letter “R” three times. Other AI models ran into the same problem.
Today, OpenAI took to X/Twitter to proudly tout that, “at long last,” ChatGPT can correctly answer this question. Another common stumbling prompt was “I want to wash my car today but the car wash is only 50 meters away. Should I walk or drive there?” – ChatGPT would often recommend walking, despite the obvious logical issue that the car itself needs to be at the car wash.
Sure enough, both of these prompts now produce correct answers if you try them in ChatGPT, but it’s suspected that they might be hard‑coded solutions. Many replies to OpenAI’s post show other cases where the chatbot fails on the same logic. For example, “How many r’s are in cranberry” repeatedly receives the answer “The word ‘cranberry’ has 1 ‘R.’,” when the word actually has three.
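None of this reflects how ChatGPT works internally – models see tokens rather than individual characters, which is widely thought to be why letter counting trips them up – but the counts themselves are trivial to verify with a few lines of ordinary code. A minimal Python sketch:

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

# The two words from the examples above:
print(count_letter("strawberry", "r"))  # 3
print(count_letter("cranberry", "r"))   # 3
```

Unlike a language model, `str.count` operates directly on characters, so there is nothing to hallucinate.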
Hard‑coded solutions in AI chatbots aren’t new, but it’s a bit funny – in a dystopian way – to see OpenAI touting this “fix” when the root of the problem remains.
More on AI
- OpenAI rolls out GPT‑5.5 with improved contextual understanding, Plus and up
- Google’s updated Pentagon deal uses Gemini for ‘any lawful government purpose’ with classified data
- ChatGPT update curbs ‘cringe,’ cuts down on answer refusals