The AI industry has a big Chicken Little problem
Source: Mashable Tech
Opinion
By Timothy Beck Werth – Tech Editor
February 11, 2026

Timothy Beck Werth is the Tech Editor at Mashable, where he leads coverage and assignments for the Tech and Shopping verticals. He has more than 15 years of experience as a journalist and editor, with a focus on consumer technology, smart‑home gadgets, and men’s grooming and style products. Previously he was Managing Editor and then Site Director of SPY.com, a men’s product‑review and lifestyle site. As a writer for GQ he covered everything from bull‑riding competitions to the best Legos for adults, and he has contributed to The Daily Beast, Gear Patrol, and The Awl.

Please, sir, may I have some more AI?
Entrepreneur Matt Shumer’s essay, “Something Big Is Happening,” is going mega‑viral on X, where it has been viewed 42 million times and counting.
The piece warns that rapid advancements in the AI industry over the past few weeks threaten to change the world as we know it. Shumer likens the present moment to the weeks and months preceding the COVID‑19 pandemic, saying most people won’t hear the warning “until it’s too late.”
We’ve heard warnings like this before from AI doomers, but Shumer wants us to believe that this time the ground really is shifting beneath our feet.
“It’s time now,” he writes. “Not in an ‘eventually we should talk about this’ way, but in a ‘this is happening right now and I need you to understand it’ way.”
Unfortunately for Shumer, we’ve seen dire warnings like this play out before.
In the long run, some of these predictions may come true—a lot of people who are far smarter than me certainly believe they will—but I’m not changing my weekend plans to build a bunker.
The AI industry now has a massive Chicken Little problem, which makes it hard to take dire warnings like this too seriously. As I’ve written before, when an AI entrepreneur tells you that AI is a world‑changing technology on the order of COVID‑19 or the agricultural revolution, you have to treat the message for what it really is—a sales pitch.
Why People Are So Worried About AI Right Now
Shumer’s essay claims that the latest generative‑AI models from OpenAI and Anthropic are already capable of doing much of his job.
“Here’s the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We’re not making predictions. We’re telling you what already occurred in our own jobs, and warning you that you’re next.”
The post struck a nerve on X (formerly Twitter). Across the political spectrum, high‑profile accounts with millions of followers have been sharing it as an urgent warning.
Understanding the Big Concepts
Artificial General Intelligence (AGI)
AGI is a hypothetical AI system that possesses human‑like intelligence and can perform any intellectual task that a human can. See the Google Cloud definition of AGI.
The Singularity
The Singularity refers to a future point at which technology becomes self‑improving, leading to exponential growth in capability.
Progress Toward AGI and the Singularity
- OpenAI’s latest coding model – GPT‑5.3‑Codex – reportedly helped create itself. (Mashable article)
- Anthropic has made similar claims about its newest product launches.
- Generative AI is now so proficient at writing code that it has “decimated the job market for entry‑level coders.” (NY Times report, Aug 10 2025)
These developments give many observers reason to believe we are moving closer to both AGI and the Singularity.
Implications and Skepticism
- Rapid advancement: Generative AI is progressing quickly and will have major impacts on everyday life, the labor market, and the future.
- Cautionary perspective: It’s easy to dismiss alarmist warnings as “Chicken Little” predictions, just as one might be skeptical of a car salesman’s hype about a new convertible.
- Skeptics weigh in: As Shumer’s post went viral, AI skeptics joined the conversation, offering counter‑arguments and pointing out potential over‑statements.
Bottom Line
While the excitement (and anxiety) surrounding AI is understandable, it’s essential to separate substantiated progress from hyperbole. Keeping a critical eye on both the technology’s capabilities and the narratives built around it will help us navigate the coming changes more responsibly.
It’s Not Time to Panic Yet
There are many reasons to be skeptical of Shumer’s claims. In his essay he cites two specific examples of generative‑AI capabilities:
- Legal reasoning on par with top lawyers.
- Creating, testing, and debugging apps without human intervention.
Below we examine each claim.
The App Argument
“I’ll tell the AI: ‘I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.’
“It writes tens of thousands of lines of code, then opens the app itself, clicks through the buttons, tests the features, and uses the app the way a person would. If something doesn’t look or feel right, it goes back and changes it on its own, iterating like a developer until it is satisfied. Only once it decides the app meets its own standards does it say, ‘It’s ready for you to test.’ When I test it, it’s usually perfect.
“That was my Monday.”
Is this impressive? Absolutely.
But the tech world already jokes that “there’s an app for that.” Existing codebases give models a massive corpus to emulate. The real question is whether the ability to create new apps faster will irrevocably change the world, or simply accelerate an already saturated market.
The Legal Claim
Shumer describes AI as “like having a team of lawyers available instantly.” The problem is that lawyers are already being censured for using AI.
- [Some judges move beyond fines, keep lawyers’ AI errors in check – Reuters](https://www.reuters.com/legal/government/some-judges-move-beyond-fines-keep-lawyers-ai-errors-check-2025-09-16/)
- A lawyer tracking AI hallucinations in the legal profession has documented 912 cases so far: Damien Charlotin’s Hallucination Tracker.
Even the most advanced LLMs struggle with fact‑checking. According to OpenAI’s own documentation, its latest model, GPT‑5.2, has a 10.9% hallucination rate (and 5.8% even when given internet access). Would you trust a person who hallucinates six percent of the time?
Why a Rapid Leap May Not Happen
A rapid breakthrough is possible, but there are signs of diminishing returns:
- Ads in ChatGPT – OpenAI introduced advertising, a “last‑resort” tactic it previously dismissed. (Mashable)
- “ChatGPT Adult” mode – A new option for erotic role‑play, suggesting a shift toward monetization over safety. (Wall Street Journal)
These moves don’t inspire confidence that a company is on the brink of unleashing super‑intelligence.
This article reflects the opinion of the author.
Disclosure: Ziff Davis, Mashable’s parent company, filed a lawsuit against OpenAI in April 2025, alleging copyright infringement in training and operating its AI systems.