How to Actually Delegate to AI Coding Assistants (Most People Don't)

Published: February 23, 2026, 4:03 AM GMT+9
3 min read
Source: Dev.to

Six months ago I was using AI coding tools wrong. I treated them like a search engine — ask a question, get an answer, move on. The output was fine but the productivity gain was minimal, maybe 10% faster. Definitely not the revolution everyone was talking about.

The shift happened when I stopped asking AI to answer questions and started asking it to own tasks.

The difference between asking and delegating

Asking:
"How do I write a rate limiter in Express?"

Delegating:

I need a rate limiter for our auth endpoints. We're using Redis, the window is 15 minutes, limit is 5 attempts per IP. We have existing middleware in /middleware/auth.js — match that pattern. Return a 429 with a JSON body that includes retry_after. Write the implementation and the tests.

The first prompt gets you a generic code snippet. The second prompt gets you something close to mergeable code.
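Here's a sketch of the kind of output the delegating prompt can produce. This is not the article's actual code: the fixed-window approach, the in-memory `Map` (standing in for Redis so the example runs on its own), and the constant names are all my assumptions. A real version would use Redis `INCR`/`EXPIRE` so the counts survive restarts and are shared across instances.

```javascript
// Fixed-window rate limiter middleware sketch for Express.
// An in-memory Map stands in for Redis here; swap it for
// INCR/EXPIRE calls against a Redis client in production.
const WINDOW_MS = 15 * 60 * 1000; // 15-minute window
const MAX_ATTEMPTS = 5;           // 5 attempts per IP

const store = new Map(); // key: IP, value: { count, windowStart }

function rateLimit(req, res, next) {
  const now = Date.now();
  const entry = store.get(req.ip);

  // No entry yet, or the previous window expired: start a fresh window.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    store.set(req.ip, { count: 1, windowStart: now });
    return next();
  }

  entry.count += 1;
  if (entry.count > MAX_ATTEMPTS) {
    // Seconds until the current window resets, per the spec's retry_after.
    const retryAfter = Math.ceil((entry.windowStart + WINDOW_MS - now) / 1000);
    return res.status(429).json({ error: 'Too many attempts', retry_after: retryAfter });
  }
  return next();
}

module.exports = { rateLimit };
```

Notice how every concrete detail in the code (the window, the limit, the 429 body shape) traces back to a sentence in the prompt. That traceability is the difference the spec makes.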

Key ingredients in a real delegation

  • Specific inputs and outputs – not “write a function” but “write a function that takes X, returns Y, handles Z edge case.”
  • Existing patterns to match – point it at real files in your codebase.
  • What “done” looks like – if you want tests, say so; if you want error handling, describe the format.

What you still can’t delegate

  • Architecture decisions – AI can suggest an architecture, but it won’t push back on contradictory requirements. It lacks knowledge of your team’s deployment constraints, incident history, or legacy workarounds that only exist in your head.
  • Anything where the requirement is fuzzy – If you can’t write a clear spec, AI can’t execute against it. Vague prompts reveal vague thinking.
  • Code reviews on AI‑generated code – Asking AI to review its own output is circular. It will spot style issues, not logic errors. The logic review has to come from you.

The workflow I actually use

For any non‑trivial feature:

  1. Write a 5‑sentence plain‑English spec. What does this do? What are the inputs and outputs? What are the failure cases?
  2. Feed that spec to the AI with context (relevant files, patterns to match).
  3. Read the output like a code review, not like a gift.
  4. Ask the AI to fix specific issues — don’t ask for a full rewrite, just for corrections.
  5. Write the integration test yourself (it’s fast and forces you to think about the contract).

I’ve stopped using AI altogether for anything touching billing, auth flows, and migrations. The cost of a mistake is too high, and the review burden is the same as writing it myself.

The uncomfortable math

If AI handles 60% of your typing but requires 80% of your usual review time, the net gain is smaller than the hype suggests. The real leverage is in tasks where:

  • The spec is clear
  • The stakes are low
  • The pattern already exists somewhere in the codebase

That’s still a lot of work, but it’s a specific list, not a general “AI handles everything.”
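The back-of-envelope math above can be made concrete. The 50/50 typing-vs-review split below is an illustrative assumption, not a measurement; plug in your own numbers.

```javascript
// Rough model of net speedup when AI takes over part of the typing.
// Baseline task time is normalized to 1.0.
function netSpeedup(typingShare, reviewShare, aiTypes, reviewKept) {
  const typing = typingShare * (1 - aiTypes); // typing you still do yourself
  const review = reviewShare * reviewKept;    // review time you still spend
  return 1 / (typing + review);
}

// Assume a task is 50% typing, 50% review. AI types 60% of the code,
// but you keep 80% of your usual review time.
const speedup = netSpeedup(0.5, 0.5, 0.6, 0.8);
console.log(speedup.toFixed(2) + 'x'); // 1.67x — real, but not 10x
```

Under those assumptions you get roughly a 1.7x speedup, which is worth having but nowhere near what the headline "AI writes 60% of the code" implies, because review dominates the remaining time.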

The developers who get the most from these tools are the ones who were already good at writing specs before AI existed. The skill transfers directly.

Building with AI tools every day. Most of what I share here is from trying things that didn’t work and figuring out why. If that’s useful, stick around.
