The AI Bubble: Why I’m Getting Out Before 2026

Published: December 21, 2025 at 07:14 PM EST
4 min read
Source: Dev.to


Introduction

AI is here, and as 2025 went by, everyone realized that AI is good at completing tasks such as reading emails, organizing them, writing letters, and generating images. I created the pictures for my blog using AI, and they are impressive. Thinking about it, using AI this way fits nicely into my pattern of working.

So, why am I running away from AI in 2026 if the hype is still there, the race to acquire the best AI is still ongoing, and models are continually improving? At some point it becomes counter‑intuitive, especially given the news that AGI could possibly arrive by 2030.

Everything started with The Illusion of Thinking, Apple's paper on reasoning models. I read it, and something resonated with me. Then I read The Illusion of the Illusion of Thinking, which takes the opposite stance, explaining why Apple's paper feels off and what went wrong in its methodology. Both papers are extraordinarily useful for understanding the key point about AI.

The real truth about the system

After reading the book The Art of Doing Science and Engineering: Learning to Learn and building an LLM prototype myself, I realized the pattern is still clear: we have super-models that basically run on the same mechanism, connecting dots from the prompt and generating the most credible idea, the most probable answer the user is looking for.

So that is why, if I ask any LLM:

What is the command to display where my folders are mounted?

I will get an answer like the one below:

[screenshot: the LLM's answer to the prompt]
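In case the screenshot does not load: the reply is typically a short terminal one-liner. The commands below are a sketch of what such an answer usually looks like on Linux; the exact command in my screenshot may differ:

```bash
# Typical LLM suggestions for "where are my folders mounted?" on Linux:

findmnt              # tree view of all mount points (util-linux)
df -h                # mounted filesystems with human-readable sizes
mount | grep home    # filter the raw mount list for a specific path
```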

Which is correct and saves me the time of Googling it. There is, however, a paradox: it creates the illusion that I know how to use the terminal (perhaps I do), but it may also create the illusion that I understand something I do not truly comprehend. I am not saying we should memorize every command.

Imagine an application built by someone who does not know how to code. Translating requirements into code is not just about writing lines; it's a synergy of modeling, understanding, and execution.

The path we are missing out on

At this point it is clear why I'm moving away from using LLMs as a crutch. The pitch is that I need to buy something to be super fast, to feel productive just by paying to be smart. We are buying the power of thinking, or at least that's how it seems.

It feels similar to how video-game culture changed over the last 20 years: you buy the best skin instead of demonstrating skill. We end up showing who is wealthier rather than who is more capable.


The feeling of being extraordinary without actually being extraordinary feels real, but we must remember that these products are maintained by companies that need to generate revenue. We should think about how to use new technology in a way that maximizes the benefit for ourselves.

What my 2026 is going to look like

I won't stop using AI tools altogether, but I won't spend excessive money on learning and practicing with them. After building my own LLM, I felt I understood much better how AI works, which helps me cut through the hype.

I will follow my own pattern of learning. I will still use AI to help code when I know the language, but I will aim to write the code myself. If I forget a command, I can ask an LLM or look it up on Google, then note it down in a notebook to reinforce learning.

When I need to write something valuable or convey an idea, I will draft it myself first and then use an LLM only to correct grammar and consistency, rather than relying on it to generate the whole piece. This approach keeps me engaged with the material while still benefiting from AI assistance.
