Most People Use AI Like Google. That's Why It Sucks.

Published: April 28, 2026 at 10:12 AM EDT
3 min read
Source: Dev.to

The Initial Experience

For the first month I treated Copilot like a junior engineer with ambition and no guardrails. It would write code I never asked for. Mid‑function, Copilot would leap ahead—creating new methods, suggesting whole classes, trying to take initiative like a junior who wants to prove themselves.

The Breaking Point

During a refactor, Copilot produced code that ignored our style guide, created variables named R and T, duplicated logic I had already abstracted, and repeated the same pattern three times instead of recognizing the abstraction. The tests broke, and it didn't care, because I hadn't taught it to care. I spent two hours cleaning up a mess that should have taken twenty minutes to write correctly.

Shifting the Approach

I realized my loop was prompt, get overreaching output, prune. I was spending more time editing AI output than writing the code myself. The issue wasn't the AI; it was me: I expected answers instead of defining a system.

I stopped trying to re‑prompt my way out of bad output and changed the “brain” the AI pulled from. I created markdown files with our engineering standards: how we write requirements, the questions we ask before scoping, the difference between a user story and a task, how we think about trade‑offs, when we prefer duplication over abstraction, and what “clean” means in our codebase.
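For concreteness, here is a minimal sketch of what one of those files might look like. The filename and contents are hypothetical, drawn from the failures described above, not the author's actual standards:

```markdown
<!-- standards/code-style.md (hypothetical example) -->
# Code Style Standards

## Naming
- Variables are descriptive words, never single letters (`retryCount`, not `R` or `T`).

## Abstraction
- Prefer duplication until a pattern appears a third time with identical intent.
- Before writing new logic, search the codebase for an existing abstraction.

## Definition of "clean"
- Passes lint and the full test suite before it is proposed.
- No logic duplicated from an existing helper.
```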

Encoding Standards

When an agent generated code after that, it didn’t improvise; it executed patterns we had already defined. I wasn’t prompting a junior engineer anymore—I was orchestrating a senior engineer. Senior engineers write better code because they recognize patterns, know when to abstract, and understand cultural constraints that aren’t written down. You can’t prompt that into an AI; you have to encode it.

We built these standards into skills, rules, agents, and hooks. We have markdown files for our BA persona, solution architect, code‑review process, and QA. When an agent generates a spec or writes code, it references these files—executing defined patterns rather than improvising.
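The author doesn't share their layout, but one plausible way to organize such a system is a directory the agents are pointed at, with each persona and process as its own file:

```text
standards/
├── personas/
│   ├── business-analyst.md      # how we write requirements and user stories
│   ├── solution-architect.md    # trade-off questions asked before scoping
│   └── qa.md                    # test expectations and acceptance criteria
├── rules/
│   ├── code-style.md            # naming, abstraction, what "clean" means
│   └── code-review.md           # the checklist generated code must pass
└── hooks/
    └── pre-commit.md            # checks to run before accepting output
```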

Scaling Through Systems Thinking

The old constraint was headcount: “We don’t have enough people.” The new constraint is algorithm quality—how well we’ve encoded judgment and defined what good looks like. Principal engineers and architects are adopting AI faster and deeper than mid‑level developers. It’s not because they’re more technical; it’s because they already think in systems.

  • A mid‑level engineer treats AI like a pair programmer: prompt, review, accept, repeat.
  • An architect treats AI like infrastructure: define patterns, encode constraints, let the system execute.

Observations on Adoption

  • Principal engineers and architects have higher AI usage numbers.
  • They think in systems, allowing them to scale their thinking exponentially.
  • The cost isn’t the loss of craft; it’s learning a new skill—orchestration rather than coding or prompting.

The new competency is understanding how to chain agents, where to insert human judgment, and how to encode standards that are rigid enough to prevent violations yet flexible enough to surprise you. A minimal sketch of what that chaining can look like follows; every name in it is hypothetical, and `call_agent` stands in for whichever model API you use:
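```python
# Hypothetical orchestration sketch: chain agents against encoded standards,
# inserting human judgment between stages. Not the author's implementation.

from pathlib import Path


def call_agent(standards_file: str, task: str) -> str:
    """Placeholder: send a standards file plus the task to a model."""
    p = Path(standards_file)
    standards = p.read_text() if p.exists() else "(standards file missing)"
    # A real system would call an LLM API here with `standards` as context.
    return f"[output shaped by {standards_file} for: {task}]"


def human_gate(artifact: str, step: str) -> str:
    """Insert human judgment: show the artifact, require explicit approval."""
    print(f"\n--- {step} ---\n{artifact}\n")
    if input("Approve? [y/N] ").strip().lower() != "y":
        raise SystemExit(f"Rejected at step: {step}")
    return artifact


def pipeline(feature_request: str) -> str:
    # Each stage executes a defined pattern instead of improvising.
    spec = call_agent("standards/personas/business-analyst.md", feature_request)
    spec = human_gate(spec, "spec review")        # judgment before design

    design = call_agent("standards/personas/solution-architect.md", spec)
    design = human_gate(design, "design review")  # judgment before code

    code = call_agent("standards/rules/code-style.md", design)
    review = call_agent("standards/rules/code-review.md", code)
    return human_gate(review, "final review")     # judgment before merge


if __name__ == "__main__":
    pipeline("Add CSV export to the reporting dashboard")
```

Define the pipeline once and the gates travel with it: every feature request flows through the same encoded judgment, which is the "define the system once, let it execute forever" leverage described below.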

Rethinking AI Interaction

When engineers hit a wall with AI, do they re‑prompt or re‑architect? Do they try to steer the AI with better words, or change what the AI is pulling from? The first approach scales linearly (one prompt, one output, one edit). The second scales exponentially (define the system once, let it execute forever).

Most people use AI like Google because that’s what the interface suggests: type, accept, move on. It feels like progress, but progress isn’t the speed of retrieval—it’s leverage. Google gives you answers; a senior engineer gives you systems. Which one are you running?
