How many Rs are in the word strawberry?
Ask ChatGPT to count the letter “R” in the word “strawberry.” It sounds trivial, but even this straightforward task can trip up an advanced model. Despite the hype, most AI systems don’t “see” or “understand” language the way humans do. Instead, AI is a machine trained to recognize patterns from massive datasets, and sometimes those patterns produce odd or misleading results. Because these systems generate answers by predicting what text is likely to come next rather than by examining each letter, they can miss tiny details, like the third “R” in “strawberry.”
Why do these kinds of tiny errors happen? Language models, which power many AI applications, are trained on billions of words to recognize patterns, predict responses, and answer questions. They process this data through a combination of statistics, pattern matching, and probability calculations. While AI is excellent at recognizing trends and recurring words or structures, it doesn’t truly understand what it “sees.” For example, a language model answering the “strawberry” question isn’t really counting letters at all; it’s predicting the most plausible-sounding answer from the patterns in its training data, and that guess often comes out as two Rs instead of three. Just as humans make quick-scan errors when skimming text, AI can overlook details, especially when a confident guess is easier than a careful check.
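The contrast is easy to see in a few lines of code. Here is a minimal Python sketch (purely illustrative, not a picture of how a language model works internally) that counts the letter the deterministic way: an ordinary program inspects every character and returns the same exact answer every time, which is precisely what a pattern-predicting model is not doing.

    # Illustrative sketch: a literal, character-by-character count, with no prediction involved.
    word = "strawberry"
    r_count = sum(1 for letter in word.lower() if letter == "r")
    print(r_count)  # prints 3

A language model, by contrast, never runs a loop like this; it simply produces whatever answer the patterns in its training data make most likely.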
Some people see these mistakes and assume they mean AI is fundamentally unreliable, or even dangerous. Others might feel uneasy about AI making such errors, worrying that this technology might make more significant mistakes with higher stakes. But the reality is simpler—and much less dramatic. AI systems are typically designed to operate within specific boundaries. A model built to answer casual questions is different from one used in critical areas, like driving or healthcare. Each type of AI goes through different levels of testing based on the stakes of its function. In language models and similar tools, small mistakes are part of the process, and that’s acceptable. The AI you might use to draft an essay doesn’t need to be flawless because you’re there to double-check it. But the AI assisting in hospitals or guiding vehicles undergoes far stricter checks, multiple levels of testing, and monitoring to make sure errors are minimized.
This perspective helps clarify whether we should fear AI. Many people—especially young people—worry that AI might “take over,” replace human jobs, or even become dangerous. But AI isn’t a sentient force making its own decisions; it’s much closer to an advanced calculator than to a “thinking” being. AI has impressive power to process vast data and find patterns humans might miss, but it doesn’t actually “know” or “understand” the world. Instead, it follows the patterns it’s learned, offering useful insights but no genuine understanding. It’s helpful to think of AI as a highly efficient assistant, not as an all-knowing superintelligence.
One of AI’s real strengths is in handling enormous datasets and repetitive tasks that humans would find slow or tedious. It’s powerful for analyzing trends, spotting disease patterns, or generating language. But what AI lacks is human-style reasoning, understanding, and intuition. It’s not about to develop self-awareness or make unpredictable decisions on its own. In the “strawberry” example, the AI’s mistake is a harmless reflection of its design limitations: it is bound by the patterns it has learned and cannot grasp deeper meaning, and that very limitation makes it safer and more predictable. As long as we know these limits, we can make use of AI without worrying about it acting autonomously.
When we understand the limits of AI, we’re in a better position to make the most of it. In fields ranging from healthcare to education and the arts, AI works best as a collaborative tool that amplifies human skills rather than replacing them. Just as calculators freed us from basic arithmetic so we could focus on complex math, AI handles vast information flows or repetitive tasks, freeing people to focus on work that requires creativity, empathy, or judgment.
Small mistakes like the “strawberry” blunder remind us that AI isn’t truly intelligent. It’s a sophisticated tool built to recognize patterns quickly but with no real grasp of meaning. So if you see an AI miscount letters or make a silly mistake, take it as a sign that we’re still in control. We’re learning how best to use AI, setting the boundaries that keep it safe and effective.
As technology progresses, AI may get better at avoiding simple errors, but that doesn’t mean it’s moving closer to consciousness or autonomy. AI isn’t coming for our creativity, our judgment, or our control. It’s simply here to help us manage complex data and take over the routine tasks of life, giving us more time and space for everything else. If AI misses a detail here or there, it’s a reminder that, while powerful, it’s still just a tool we use to navigate an increasingly complex world.