How to Get Better Results from AI Tools (Without Wasting Time and Tokens)
Source: Dev.to
1. Be specific upfront
Vague prompts = vague answers.
Bad example
Write a function to handle errors.
Good example
# Write a Python FastAPI middleware that catches async errors
# and returns a structured JSON response with status code and message.
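The good prompt pins down framework, error type, and response shape, so the model has little room to guess. A framework-agnostic sketch of the pattern it describes (the real FastAPI version would hang this on `@app.middleware("http")`; the decorator and handler names here are illustrative, not from any library):

```python
import asyncio


def catch_async_errors(handler):
    """Wrap an async handler so unhandled exceptions become a
    structured JSON-style response instead of propagating."""
    async def wrapper(*args, **kwargs):
        try:
            return await handler(*args, **kwargs)
        except Exception as exc:
            # Structured error payload: status code plus message
            return {"status": 500, "message": str(exc)}
    return wrapper


@catch_async_errors
async def broken_handler():
    raise ValueError("boom")


result = asyncio.run(broken_handler())
# result == {"status": 500, "message": "boom"}
```

Because the prompt specified the return contract up front, you can verify the output against it instead of reverse-engineering what the model intended.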
2. Use constraints
Tell the AI what not to do.
No comments. No print statements. Use async/await with httpx, not requests.
Constraints cut bloat before it’s even generated.
3. Give an example
Point the model to your existing code and say “match this style.”
Whether you’re using Claude Code, Cursor, GitHub Copilot, or a browser‑based AI, letting it read a snippet of your codebase aligns the output with your naming conventions, patterns, and architecture—no lengthy explanation needed.
4. Assign a role
You are a senior backend engineer reviewing this API design for scalability issues.
Assigning a role steers the reasoning frame and yields a sharper, more focused review.
5. Break complex tasks apart
Don’t ask the AI to “build a full auth system” in one prompt.
Instead, split it into steps such as:
- Models
- Routes
- Decorators / dependencies
- Pytest tests
Each step builds on the last, making errors easier to catch.
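For instance, asking only for the first step (models) might come back as something small and checkable like this sketch, assuming plain dataclasses and stdlib password hashing (the `User` class and its methods are hypothetical, not part of any framework):

```python
import hashlib
import hmac
import os
from dataclasses import dataclass, field


@dataclass
class User:
    """Minimal user model: stores a salted PBKDF2 hash, never the password."""
    username: str
    password_hash: bytes = b""
    salt: bytes = field(default_factory=lambda: os.urandom(16))

    def set_password(self, password: str) -> None:
        self.password_hash = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), self.salt, 100_000
        )

    def check_password(self, password: str) -> bool:
        candidate = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), self.salt, 100_000
        )
        # Constant-time comparison to avoid timing leaks
        return hmac.compare_digest(candidate, self.password_hash)


user = User("alice")
user.set_password("s3cret")
```

A model this small is easy to review before you move on to routes that depend on it; a bug here is caught in isolation rather than buried inside a full auth system.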
6. Refine, don’t regenerate
If something’s off, don’t restart. Say:
This Python function is returning None instead of the parsed JSON.
Debug just this function; don’t touch the rest.
Targeted edits save tokens and preserve what’s already working.
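The "returning None" symptom in the prompt above is typically a one-line fix: a parse result that was computed but never returned. A hypothetical before/after, with the function name invented for illustration:

```python
import json


# Before: json.loads(raw) was called but its result discarded,
# so the function fell through and implicitly returned None.
#
# def parse_payload(raw: str):
#     json.loads(raw)

def parse_payload(raw: str) -> dict:
    """Parse a JSON string and return the resulting dict."""
    return json.loads(raw)  # the missing `return` was the whole bug
```

Scoping the prompt to this one function means the model edits one line instead of regenerating, and possibly breaking, everything around it.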
7. Control output length
Give me 3 approaches to this caching problem, one paragraph each.
Longer output ≠ better output; it just takes more time to read and review.
8. Know when AI can mislead you
Designing system architecture, making security‑critical decisions, or estimating performance at scale are areas where AI can sound confident yet be completely wrong. Always validate its output with your own judgment and domain knowledge.
Core principle
AI won’t fix a bad brief. The quality of your output is directly proportional to the clarity of your input.