I've Started Using Dumber Models on Purpose

Published: March 19, 2026 at 03:45 PM EDT
3 min read
Source: Dev.to

Why I Use Less Capable Models

Here’s something that felt wrong at first: I’ve started reaching for less capable models when I’m writing code.
Not because they’re cheaper, but because they make me think.

Opus 4.5 will take a half‑baked prompt and ship working code. You describe a vague idea, and 30 seconds later you’ve got something that compiles. Magic, right?
Except… did you actually think about what you were building?

The risk with ultra‑capable models isn’t wrong code—it’s skipping the part where you understand the problem. You get a solution before you’ve defined what you’re solving. I found myself in meetings defending decisions I hadn’t consciously made. “Why did you structure it this way?” – “Uh, because the model did, and it worked?” That’s a problem.

Sonnet makes you think first. When a model requires precision in your prompts, you’re forced to actually articulate what you want. That articulation is the architecture work. If you can’t explain it clearly enough for a mid‑tier model to execute, maybe you don’t understand it well enough yet. This isn’t about the model being bad; it’s about the model being appropriately demanding.

How I Use Different Models

Exploration and Research

  • Model: Opus 4.5
  • Purpose: Understanding complex codebases, exploring possibilities, asking “what if” questions, and letting the model synthesize ideas.

First‑Draft Implementation

  • Model: Sonnet
  • Purpose: Forces me to write actual specs. If the prompt has to be precise, the thinking has already happened.

Code Review and Debugging

  • Model: Back to powerful models (e.g., Opus 4.5)
  • Purpose: Catch things I miss, suggest better patterns, and explain why something is wrong—not just that it is.

Refactoring

  • Model: Sonnet again
  • Purpose: If I can’t describe the refactor clearly, I’m not ready to do it.

Pattern: Powerful models for exploration and review, constrained models for creation.

The Core Insight

If your prompt could substitute for a design doc, you’ve done the thinking. If your prompt is simply “make it work,” you haven’t. Ultra‑capable models let you skip writing that design doc, which is exactly why you shouldn’t always use them.

Prompt Discipline

I’ve started treating prompts like commit messages—they should explain the why, not just the what.

  • “Add user authentication” is insufficient.
  • You need to specify what kind of auth, where it lives, the session strategy, etc.

Answering those questions in the prompt forces you to answer them in your head first. The tool that makes you think less isn’t always the better tool.
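As a concrete (and entirely hypothetical) illustration of the difference, here is what moving from a vague prompt to a spec-level prompt might look like for the authentication example above; the specific choices (JWT, cookie settings, route names) are made up for the sake of the example, not a recommendation:

```text
Vague:
  "Add user authentication."

Spec-level:
  "Add session-based auth to the Express API.
   - Strategy: email + password, bcrypt-hashed, stored in the existing
     users table.
   - Sessions: signed, HTTP-only cookies; 24h expiry; no JWT.
   - Routes: POST /login, POST /logout, middleware that guards /api/*.
   - Out of scope: OAuth, password reset, rate limiting (separate ticket)."
```

Writing the second version forces exactly the decisions the post is talking about: you have to pick a session strategy, a storage location, and a scope boundary before the model writes a line of code.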

When I reach for the most powerful model, I’m implicitly saying “I don’t need to think about this.” Sometimes that’s true—rote work or a genuine need for extra capability. But for design decisions, architecture, or anything you’ll need to explain to another human, the friction is a feature. The struggle to articulate is the work.

Invitation

I’m curious—has anyone else deliberately used less capable tools for certain tasks, not for cost or speed, but because the constraint improves the output?
