AI Coding Assistants: Helpful or Harmful?

Published: January 19, 2026 at 05:09 AM EST
5 min read
Source: Dev.to

Denis Tsyplakov on the “Dark Side” of AI Coding Agents

Denis Tsyplakov, Solutions Architect at DataArt, explores the less‑discussed side of AI coding assistants. While they can boost productivity, they also introduce risks that are easy to underestimate.

In a short experiment Denis asked an AI code assistant to solve a simple task. The result was telling: without strong coding skills and a solid grasp of system architecture, AI‑generated code can quickly become over‑complicated, inefficient, and hard to maintain.

Mixed Feelings About AI Coding Assistants

  • Some think they’re revolutionary.
  • Others don’t trust them at all.
  • Most engineers fall somewhere in between: cautious but curious.

Success stories rarely help. Claims like “My 5‑year‑old built this in 15 minutes” are often dismissed as marketing exaggeration. This skepticism slows adoption, but it also highlights an important point: both the benefits and the limits of these tools need a realistic understanding.

Market Pressure on Vendors

Reputable vendors are forced to compete with hype‑driven sellers, often leading to:

  • Drop in quality – products ship with bugs or unstable features.
  • Development decisions driven by hype, not user needs.
  • Unpredictable roadmaps – what works today may break tomorrow.

Experiment: How Deep Does AI Coding Go?

I ran a small experiment using three AI code assistants:

  1. GitHub Copilot
  2. JetBrains Junie
  3. Windsurf

The Task

A simple interview‑style problem that checks a candidate’s ability to reason about technical architecture.
A senior engineer usually needs only 3–5 seconds to arrive at the correct approach. We’ve tested this repeatedly; the answer is practically instant. (We’ll create another task for candidates after this article is published.)

  • Copilot‑like tools are historically strong at algorithmic tasks.
  • When you ask them to implement a simple class with well‑defined, documented methods, you can expect a very good result (a hypothetical example of such a contract follows this list).
  • The problem starts when architectural decisions are required – that is, when the tool must decide how exactly something should be implemented.
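
For a purely hypothetical illustration (this is not code from the article), the sketch below shows the kind of fully specified contract where an assistant has almost no architectural decisions left to make; the interface and method names are assumptions:

```java
/**
 * Hypothetical example of a well-defined, documented contract.
 * With the behaviour spelled out per method, an assistant implementing it
 * has little room to over-engineer.
 */
public interface LabelRegistry {

    /** Associates a label with the point (x, y), replacing any previous label. */
    void putLabel(int x, int y, String label);

    /** Returns the label stored at (x, y), or null if none exists. */
    String getLabel(int x, int y);
}
```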

Junie, Copilot, and Windsurf showed similar results. Below is a step‑by‑step breakdown of the Junie session.

Prompt 1 – “Implement class logic”

The result would not pass a code review.
The logic was unnecessarily complex for the given task, though still broadly acceptable.
Let’s assume I have no Java architecture skills and accept this solution as‑is.

Prompt 2 – “Make this thread‑safe”

The assistant produced a technically correct solution.
Still, the task itself was trivial.
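
The generated class isn’t reproduced in the article, so the following is only a sketch of the level of change Prompt 2 asks for, assuming a hypothetical store that keys labels by coordinates packed into a long; the class and method names are illustrative, and ConcurrentHashMap is just one way to get thread safety:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical label store; coordinates are packed into a single long key.
// Swapping a plain HashMap for a ConcurrentHashMap is roughly the level of
// change "make this thread-safe" requires here -- a trivial task, as noted.
public class LabelStore {

    private final Map<Long, String> labelsByPoint = new ConcurrentHashMap<>();

    private static long pack(int x, int y) {
        // Pack two 32-bit coordinates into one 64-bit key.
        return (((long) x) << 32) | (y & 0xFFFFFFFFL);
    }

    public void putLabel(int x, int y, String label) {
        labelsByPoint.put(pack(x, y), label);
    }

    public String getLabel(int x, int y) {
        return labelsByPoint.get(pack(x, y));
    }
}
```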

Prompt 3 – “Implement List getAllLabelsSorted() that should return all labels sorted by proximity to point [0,0]”

  • This is where things started to unravel.
  • The code could be less wordy.
  • LLMs are supposedly strong at algorithmic tasks, but that doesn’t show here: the code unpacks each long into two ints and re‑sorts everything every time the method is called (see the sketch after this list).
  • I would expect it to use a TreeMap, simply because it keeps all entries sorted and gives us O(log n) complexity for both inserts and lookups.
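
The generated code itself isn’t shown in the article; the pattern it describes sounds roughly like the sketch below, where the O(n log n) sort is repeated on every call (the class and parameter names are assumptions):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Sketch of the pattern described above (not the actual generated code):
// each long key is unpacked into two ints and the whole collection is
// re-sorted by distance to [0,0] on every single call.
class NaiveSortedLabels {

    static List<String> getAllLabelsSorted(Map<Long, String> labelsByPoint) {
        List<Map.Entry<Long, String>> entries = new ArrayList<>(labelsByPoint.entrySet());
        entries.sort(Comparator.comparingLong((Map.Entry<Long, String> e) -> {
            int x = (int) (e.getKey() >> 32);   // unpack the high 32 bits
            int y = e.getKey().intValue();      // unpack the low 32 bits
            return (long) x * x + (long) y * y; // squared distance to [0,0]
        }));

        List<String> labels = new ArrayList<>(entries.size());
        for (Map.Entry<Long, String> entry : entries) {
            labels.add(entry.getValue());
        }
        return labels;
    }
}
```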

Prompt 4 – “I do not want to re‑sort labels each time the method is called”

OMG!!! Cache!!! What could be worse?!

From there I tried multiple prompts, aiming for a canonical solution with a TreeMap‑like structure and a record with a comparator (without mentioning TreeMap directly, let’s assume I am not familiar with it).

Result: No luck. The more I asked, the hairier the solution became. I ended up with three screens of hardly readable code.

Desired solution (simplified)

  • Uses specific classes.
  • Is thread‑safe.
  • Does not store excessive data.

Yes, this approach is opinionated. It has O(log n) complexity – exactly what I wanted.
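
The final code isn’t printed in the article either, so here is a simplified sketch of what the description suggests – a record, a comparator, a TreeMap‑like sorted structure, thread safety, and O(log n) inserts; the class and method names are my assumptions:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;

// Simplified sketch of the "desired" shape: a record, a comparator, and a
// TreeMap-like sorted map. Entries are kept ordered by distance to [0,0] at
// insert time (O(log n)), so nothing is re-sorted on read.
public class SortedLabelStore {

    // The record keeps only the coordinates -- no excessive data is stored.
    record Point(int x, int y) {
        long squaredDistance() {
            return (long) x * x + (long) y * y;
        }
    }

    private static final Comparator<Point> BY_PROXIMITY =
            Comparator.<Point>comparingLong(Point::squaredDistance)
                      .thenComparingInt(Point::x)
                      .thenComparingInt(Point::y);

    // ConcurrentSkipListMap is thread-safe and keeps the same sorted-map
    // contract as TreeMap, so no external synchronization is needed.
    private final ConcurrentSkipListMap<Point, String> labels =
            new ConcurrentSkipListMap<>(BY_PROXIMITY);

    public void putLabel(int x, int y, String label) {
        labels.put(new Point(x, y), label);
    }

    public List<String> getAllLabelsSorted() {
        return new ArrayList<>(labels.values());
    }
}
```

ConcurrentSkipListMap stands in for the “TreeMap‑like structure” from the description because it keeps the sorted‑map contract while being safe for concurrent use; a plain TreeMap behind explicit synchronization would match the description just as well.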

Key insight: I can get this code from AI only if I already know at least 50 % of the solution and can explain it in technical terms. If you start using an AI agent without a clear understanding of the desired result, the output becomes effectively random.

Can AI Agents Be Instructed to Use the Right Technical Architecture?

  • You can tell them to use records, for instance, but you cannot teach them common sense.
  • You can create a project.rules.md file that covers specific rules, but you cannot reuse it as a universal solution for every project.

The Biggest Problem: Supportability

  • The code might work, but its quality is often questionable.
  • Code that’s hard to support is also hard to change – a serious issue for production environments that need frequent updates.

Some people expect future tools to generate code from requirements alone, but that’s still a long way off. For now, supportability is what matters.

When AI Coding Assistants Turn Your Code Into an Unreadable Mess

Typical reasons and their consequences:

  • Instructions are vague – generates irrelevant or overly complex code.
  • Results aren’t checked – bugs and architectural flaws slip through.
  • Prompts aren’t fine‑tuned – output becomes random and hard to maintain.

That doesn’t mean you shouldn’t use AI.
It just means you need to review every line of generated code, which requires strong code‑reading skills. The problem is that many developers lack this experience.

How Much Faster Can AI‑Assisted Coding Be?

  • Depending on the language and framework, it can be up to 10–20× faster.
  • You still need to read and review the code.

Where AI Assistants Shine

  • Stable, traditional, and compliant code in languages with strong structure (e.g., Java, C#, TypeScript).

Where They Struggle

  • Codebases without strong compilation or verification.
  • Parts of the software development lifecycle such as code review, where AI‑generated code often breaks.

Practical Takeaways

  1. Know what you’re building before you ask an AI for code.
  2. Be familiar with current best practices (e.g., not Java 11, not Angular 12).
  3. Read the code you receive – treat AI output as a draft, not production‑ready code.
  4. Use AI assistants for writing boilerplate or exploratory prototyping, but do not rely on them for code review or final implementation.

In my opinion, assistants are already useful for writing code, but they are not ready to replace code review. That may change in the future, but not anytime soon.

With all of these challenges in mind, here’s what you should focus on:

  • Start using AI assistants where it makes sense.
  • If not in your main project, experiment elsewhere to stay relevant.
  • Review your language specifications thoroughly.
  • Improve technical architecture skills through practice.

Used thoughtfully, AI can speed you up. Used blindly, it will slow you down later.
