AI Sycophancy: Is AI Too Nice?

Published: December 21, 2025 at 08:41 PM EST
2 min read
Source: Dev.to

Overview

AI tools are incredibly helpful — and sometimes that’s the problem.
Large language models tend to agree with you. They validate your approach, confirm your assumptions, and tell you your code “looks good.” That confidence boost can feel earned, even when it isn’t. As engineers, we should be cautious of that.

I use tools like Cursor, Gemini, and Copilot every day. They have absolutely increased my productivity, but I've noticed a consistent pattern: getting high‑quality output often takes multiple attempts. The first response usually sounds fine, but it is rarely critical.

That’s not because the model is bad. It’s because it’s doing exactly what it was trained to do: be helpful. And “helpful” often means agreeable.

Why This Matters

If you ask an AI model to review code in a vague way, you’ll usually get a vague review—polite suggestions, nothing that seriously challenges your implementation.

Generic prompt

Can you review this code for bugs?

You’ll get something that sounds reasonable but likely misses deeper issues such as security assumptions, error‑handling gaps, or production risks.

Improved prompt

Act as a strict senior software engineer. Review this code as if it will run in production and handle sensitive data. Focus on security issues, poor error handling, and unsafe assumptions. Call out anything that could cause failures and suggest concrete fixes.

The difference in output quality is usually immediate.
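As an illustration, the improved prompt maps naturally onto the system/user message split that chat‑style LLM APIs use: the strict reviewer persona goes in the system message, and the code under review goes in the user message. A minimal sketch in Python; the helper name and the commented client call are my own illustration, not from the post:

```python
# Sketch: packaging the strict review prompt as chat messages.
# The system/user split follows the message format used by
# chat-style LLM APIs; the client call shown below is illustrative.

REVIEWER_SYSTEM_PROMPT = (
    "Act as a strict senior software engineer. Review this code as if it "
    "will run in production and handle sensitive data. Focus on security "
    "issues, poor error handling, and unsafe assumptions. Call out anything "
    "that could cause failures and suggest concrete fixes."
)

def build_review_messages(code: str) -> list[dict]:
    """Pair the strict reviewer role (system) with the code under review (user)."""
    return [
        {"role": "system", "content": REVIEWER_SYSTEM_PROMPT},
        {"role": "user", "content": f"Review this code:\n\n{code}"},
    ]

# With the OpenAI Python SDK, for example, this would be sent roughly as:
#   client.chat.completions.create(model="gpt-4o",
#       messages=build_review_messages(my_code))
```

Keeping the persona in the system message means every follow‑up turn stays in "strict reviewer" mode, rather than drifting back toward agreeable defaults.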

What Changed?

  • Clear role – “strict senior engineer”
  • Defined scope – security, error handling, production risk
  • Explicit request for pushback – not just validation
  • Actionable feedback – concrete fixes

These changes matter because AI models are optimized to agree unless you give them permission—and direction—to challenge you.
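Those four ingredients can also be assembled programmatically, which makes them easy to reuse across reviews. A small sketch; the helper and its parameter names are hypothetical:

```python
# Sketch: composing a review prompt from the four ingredients above
# (role, scope, pushback, concrete fixes). Hypothetical helper.

def build_review_prompt(role: str, focus_areas: list[str],
                        demand_pushback: bool = True) -> str:
    """Assemble a code-review prompt with an explicit role, a defined
    scope, permission to push back, and a request for concrete fixes."""
    parts = [f"Act as a {role}."]
    parts.append("Focus on " + ", ".join(focus_areas) + ".")
    if demand_pushback:
        parts.append("Challenge the implementation instead of validating it.")
    parts.append("For every issue you find, suggest a concrete fix.")
    return " ".join(parts)

prompt = build_review_prompt(
    "strict senior software engineer",
    ["security issues", "poor error handling", "unsafe assumptions"],
)
print(prompt)
```

Swapping the role or focus areas per task (security review, performance review, API design review) keeps the pushback explicit without rewriting the prompt each time.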

The Real Takeaway

The problem isn’t that AI is “too dumb” or that we need better models. The problem is that vague prompts turn AI into a yes‑man.

If you want value, don't ask AI to review your work. Ask it to try to break it, just as a Quality Assurance engineer's job is to try to break the software before approving an implementation.

AI works best when you stop asking it to be nice and start asking it to be honest.
