The strange comfort of vibe coding (and when it backfires)

Published: January 1, 2026 at 04:01 PM EST
9 min read
Source: Dev.to

The Birth of Vibe Coding

We probably all know this by now, but let’s set some ground rules anyway so we’re on the same page moving forward:

“Vibe coding” is a term coined by Andrej Karpathy, who describes a phenomenon where someone takes an LLM and goes through the entire process of creating a piece of software (a landing page, a web app, or a mobile app) while relying entirely on AI. You describe the expected outcome and, from that point on, you mostly just hit Accept as if your life depended on it.
If there’s a bug, you paste it back into the LLM to fix it. You keep iterating until you end up with something that looks satisfactory to your eyes… or do you really?

Starting with Fundamentals

If we take a magnifying glass and zoom in on what’s (roughly) happening under the hood, we quickly find that it’s all about probability and statistics. When you shoot a question at an LLM, what you get back is an educated guess – but still a guess.

  • The model has been pre‑trained on a (usually massive) corpus of data.
  • It tries to predict the response that’s most likely to appear next, token by token.

The fundamental flaw is that you might end up with five different responses to the same prompt. Even small details—like spelling mistakes—can influence the output.
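To make that variability concrete, here is a minimal sketch of temperature-based next-token sampling. The vocabulary, logits, and prompt are made up purely for illustration; a real model produces logits over tens of thousands of tokens, but the mechanism is the same: the response is drawn from a probability distribution, not looked up.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into probabilities.
    # Lower temperature sharpens the distribution toward the top token.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng, temperature=1.0):
    # Draw one token from the softmax distribution -- this random draw
    # is where the "five different responses" effect comes from.
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Toy "model": fixed logits for the token following "The bug is in the ..."
vocab = ["loop", "cache", "parser", "config"]
logits = [2.0, 1.6, 1.5, 0.5]

# Same prompt, different random state -> possibly different continuations.
answer_a = sample_next_token(vocab, logits, random.Random(1))
answer_b = sample_next_token(vocab, logits, random.Random(7))
```

At temperature close to zero the distribution collapses onto the most likely token and the output becomes effectively deterministic; at higher temperatures the lower-probability tokens get drawn more often, which is why two identical prompts can wander down different paths.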

Is it a bad thing? No, it’s not.
What is bad is pretending this semi‑randomness doesn’t exist, or acting as if it doesn’t matter.

So how do we deal with this uncertainty? The common answer is: “skill issue.”

Same Output, Different Perspectives

Given this inherent randomness, how do we ensure quality?

Prompting is a skill issue. If you’re getting chaos back, you probably need to craft better prompts. Garbage in = garbage out. Duh.

Let’s assume that’s true and move the discussion a step further.

  1. You’ve crafted the perfect prompt.
  2. You’ve consulted five different LLMs to review and refine it.
  3. It’s finally ready to be fed into Claude Code.

My question: How do you know the outcome is sane?

Do we treat seeing what we expected on the screen as sufficient validation of the result?

This is where my biggest problem with vibe coding shows up.

If you’re a Software Engineer (a pre‑AI‑era dinosaur), you’ve probably seen hundreds of thousands of lines of code in your life. You’ve crafted thousands of them yourself. You’ve tried out different technologies, explored PoCs, and prepared R&D for various features, all across the lifespan of your company’s top‑selling SaaS product.

You’ve seen a lot. That gives you the edge and the necessary perspective to:

  • calmly critique generated code,
  • challenge it,
  • “Sith‑sense” the gaps,
  • ask for alternatives.

If the model produces five different outputs for you — that’s great. It’s like raising a PR and having five engineers review it from different angles. You get to decide what works best.

You’re presented with options, and by synthesizing years of experience and hard‑earned lessons, you make a call – and then live with its consequences.

But what if all of that is missing?

Yay, It Works – Dopamine Kicks In

A major part of vibe coding’s appeal is how easy it has become to generate results that look and feel the way you imagined. The phrase “democratizing code” hits home for a lot of people, and the raw joy of seeing something from your imagination appear on your MacBook’s screen is no longer exclusive to programmers.

It sparks excitement. You want more of it. You’ve done it.

The problem is — it wasn’t really you.

That landing page loads fast, looks great, and the animations are stunning. The buttons do what they’re supposed to do. Ship it, right?

A month later you need to:

  • add a new feature,
  • fix a bug that only shows up under certain conditions, or
  • explain to a new developer why the authentication flow works the way it does.

Suddenly you’re staring at code that might as well be written in ancient Sumerian.

Sure, you can ask the LLM to fix it. But now you’re playing Jenga — except this time with production data and real customers.

If you don’t understand the underlying principles, how can you tell whether the code is sane beyond the fact that it “feels kinda right”?

You might argue: “I don’t need to know plumbing to hire a plumber.” Fair enough. But would you hire the first contractor that pops up on Google, or would you vet them? Talk to people who’ve used them? Check their previous work?

That brings us back to square one. You still need a human in the loop — someone who knows what they’re doing, or at least someone who’s already walked the path you’re about to take.

There’s also the accountability angle. If a contractor misplaces the pipes and your ceiling starts leaking on a cold December day, you have options. You have recourse. You have the right to expect a system that’s properly designed, stable, and auditable.

With vibe coding? You don’t.

  • You have no meaningful way to audit or reason about failure modes.
  • OpenAI’s and Anthropic’s policies explicitly state they’re not liable for generated code.
  • You’re on your own.

What happens when you leak customer data because of a vibe‑coded security hole? What happens when your product crashes mid‑day for reasons you can’t debug, causing losses for your clients?

There’s no contractor to call. There’s just you, a pile of generated code, and a problem you don’t have the foundation to solve.

That’s when you start noticing the cracks.

The Wibbly Jenga Tower

I’ve tried vibe coding a bunch of times on a variety of projects:

  • Some were frontend‑heavy.
  • Some explored a new technology.
  • Others were attempts to learn a new framework.

Think about playing Jenga with your friends. At the start, everything is stable – just a clean cuboid, standing still, perfectly balanced. As time goes on and the pieces are moved around, you become more cautious. You start playing catch‑up, trying to identify which blocks are safe to touch and which absolutely are not.

Eventually you reach a point where the tower becomes super wobbly. At that stage you want to touch it as little as possible – ideally you’d rather pass your turn. You’re no longer enjoying the game, because you know one wrong move will make everything collapse.

And when it does, the game is over. You start again from scratch – vibe‑coding the next iteration of your wibbly Jenga tower.

The vibe‑coding sweet spot

So, is vibe‑coding doomed? Should we abandon it entirely?

Not quite.

The problem isn’t vibe‑coding itself – it’s treating it as a replacement for understanding instead of a tool for acceleration. I think it excels in a few specific scenarios (and probably more):

1. You need an MVP to showcase an idea to clients, investors, or a community

As a way to validate assumptions or test product‑market fit, vibe‑coding can be incredibly effective. You have an idea and want to show it to people as quickly as possible. That’s completely reasonable, and in this context AI really can feel like a 10× multiplier.

One caveat: you shouldn’t take this straight into production. The reasons should be clear by now.

2. You need quick R&D to prove whether something is even feasible

This won’t always give you a definitive answer. AI can generate code that appears to work but falls apart under real‑world load or edge cases. Sometimes that failure reveals fundamental limitations in the idea itself.

Still, as a fast way to check “this looks roughly like what I want, so it probably can be built,” it can be extremely useful — especially when paired with an experienced engineer.

3. You want to see a new framework, language, or technology in action

I wouldn’t argue that learning through vibe‑coding is a great idea on its own, as it flattens one of the most important parts of the learning curve: active writing. As in life — and even more so in programming — reading and understanding is not the same as writing and understanding.

That being said, AI can help you surface relevant bits of documentation faster (through tools like Context7) or show “real‑world” examples of certain concepts in action. It can make the learning more engaging and fun, as long as you don’t hand over the steering wheel and let AI navigate every difficult turn and stormy patch for you.

Which brings us to the bigger question

What does the future hold for us, engineers?

Is vibe‑coding here to stay? Probably.

The real question isn’t whether the technology will stick around — it’s whether you will still be relevant despite that.

How I see it playing out

  • People who understand the fundamentals will use AI to move faster, build better, and explore further. They’ll critique generated code, spot the nonsense, and pack the good stuff into their mental models. They’ll keep building on the solid ground they’ve developed over the years.

  • People who treat AI as a black box that magically solves problems will build faster in the short term. They’ll ship MVPs, get dopamine hits, and maybe even land some clients. But the moment they need to scale beyond the toy example—or, worse, something breaks in a way the LLM can’t immediately fix (or falls into the endless loop: fix bug A → introduce bug B → fix bug B → re‑introduce bug A…)—they’re stuck.

They optimized for speed without building the foundation to sustain the momentum.

The real divide isn’t “AI enthusiasts vs. purists.” It’s people who can tell good code from bad code vs. people who can’t. Vibe‑coding doesn’t just fail to teach you that skill — it actively obscures whether you have it or not.

So where does that leave us?

Use vibe‑coding for what it’s good at: rapid prototyping, exploring new technologies, validating ideas, and iterating quickly on feedback. But treat it like training wheels, not as a replacement (or a permission) for learning how to ride.

Because the moment those wheels come off, you’ll find out very quickly whether you actually know what you’re doing.

Are we optimizing for short‑term dopamine hits or for long‑term mastery of the craft?
