The Prompt Engineering Guide Nobody Needed Until AI Wrote 75% of Our Code

Published: February 19, 2026, 5:13 PM EST
6 min read
Source: Dev.to

AI‑Assisted Coding: A Prompting Framework that Works

Six months ago our team adopted AI coding tools. Productivity jumped 30 % the first week, but code quality dropped, bug reports rose, and PR review time doubled. The problem wasn’t the AI – it was how we were prompting it.

Below is the prompting framework we refined over six months of trial‑and‑error. It works with Cursor, GitHub Copilot, Claude Code, ChatGPT, and similar models.

1. Why Most Prompts Fail

Most developers talk to the model like they’re talking to a junior teammate:

“Build a login page.”

The model dutifully returns a page without error handling, accessibility, loading states, rate‑limiting, or modern styling – often using a deprecated library.
If the prompt is vague, the output is generic.

2. The Four‑Element Prompt Template

Every prompt should contain all of the following:

  1. Context – what already exists? What is the tech stack? Which architectural patterns does the codebase use?
  2. Goal – the specific thing you want to accomplish.
  3. Non‑functional requirements – error handling, testing, performance, accessibility, security, etc.
  4. Negative constraints – what the AI must avoid (anti‑patterns, disallowed libraries, etc.).

3. Example Prompt (Positive + Negative)

Vague Prompt

Add search to the products page

Full Prompt (using the template)

Situation:
- Next.js 14 app with App Router
- PostgreSQL via Prisma
- Tailwind CSS
- Existing products page at app/products/page.tsx (server component) that fetches all products

Goal:
- Implement search so users can type a query and see filtered results.
- Search must cover product name **and** description.

Requirements:
- Debounced input (300 ms) to avoid excessive queries.
- URL‑based search params (shareable/bookmarkable).
- Loading state while searching.
- “No results” state with a clear message.
- Minimum 3 characters before triggering a search.
- Graceful handling of Prisma errors (show error state, don’t crash).

Constraints:
- No client‑side filtering (dataset > 50 k products).
- No new dependency for search – use Prisma full‑text search.
- Do **not** use `useEffect` for data fetching (use server components + `searchParams`).
- Follow the existing code style (see other pages for reference).

Result: The output is production‑ready rather than a quick demo.
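Two of the requirements above (minimum query length, full‑text search over name and description) can be expressed as a small pure helper. This is a sketch only; `buildProductSearchWhere` and its types are invented for illustration and are not part of the article's codebase:

```typescript
// Shape of the `where` object you would pass to prisma.product.findMany.
type ProductWhere = {
  OR: Array<
    | { name: { search: string } }
    | { description: { search: string } }
  >;
} | null;

const MIN_QUERY_LENGTH = 3; // mirrors the "minimum 3 characters" requirement

export function buildProductSearchWhere(raw: string): ProductWhere {
  const query = raw.trim();
  if (query.length < MIN_QUERY_LENGTH) return null; // caller skips the query entirely

  // Prisma's PostgreSQL full-text search joins multiple terms with `&` (logical AND)
  const search = query.split(/\s+/).join(" & ");
  return {
    OR: [
      { name: { search } },
      { description: { search } },
    ],
  };
}
```

Keeping this logic in a pure function makes the "minimum 3 characters" rule trivially testable, independent of the database.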

4. Negative Prompting – Tell the AI What Not to Do

“Implement the user registration endpoint.”
Do NOT:

  • Use `any` types
  • Catch errors silently (every catch must log and re‑throw or return an error response)
  • Skip input validation
  • Return the password hash in the response
  • Use synchronous bcrypt (use the async version)
  • Create the database table (it already exists, see schema.prisma)

Because models are trained on millions of code snippets—many of which contain bad practices—the negative list steers them away from common pitfalls.
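To make two of those "do not" items concrete, here is a dependency‑free sketch of up‑front validation and of stripping the password hash before the response leaves the endpoint. The names (`validateRegistration`, `sanitizeUser`, the record shapes) are assumptions for this example, not the article's actual code:

```typescript
interface RegistrationInput {
  email: string;
  password: string;
}

interface UserRecord {
  id: string;
  email: string;
  passwordHash: string;
  createdAt: string;
}

export function validateRegistration(body: unknown): RegistrationInput {
  // Fail loudly on bad input instead of letting it reach the database
  if (typeof body !== "object" || body === null) throw new Error("body must be an object");
  const { email, password } = body as Record<string, unknown>;
  if (typeof email !== "string" || !email.includes("@")) throw new Error("invalid email");
  if (typeof password !== "string" || password.length < 8) throw new Error("password too short");
  return { email, password };
}

export function sanitizeUser(user: UserRecord): Omit<UserRecord, "passwordHash"> {
  // Strip the hash before the record ever leaves the endpoint
  const { passwordHash, ...safe } = user;
  return safe;
}
```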

5. Show the Model Your Code Style

// ── example endpoint ──
// (paste an existing endpoint)

Following this exact pattern (error handling, response format, validation approach, naming convention), implement a new endpoint for <your feature>.

Providing a concrete example is more reliable than describing the style in words; the model reverse‑engineers the patterns directly.

6. Build the Solution Incrementally

  1. “Create the database query for searching products by name, with pagination. Only the query, not the API endpoint.”
  2. “Now wrap this in an API endpoint following our existing pattern. Add input validation with Zod.”
  3. “Add error handling. What happens if Prisma throws? What if the search param is empty?”
  4. “Write tests. Cover: successful search, empty results, invalid input, database error.”

Each step builds on the previous one, letting you verify correctness early. If Step 3 fails, you only redo Step 3—not the whole implementation.
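The pagination piece of Step 1 is a good example of a unit small enough to verify on its own. A sketch, with invented names and defaults, that converts a 1‑based page number into the `skip`/`take` pair Prisma's `findMany` accepts:

```typescript
const DEFAULT_PAGE_SIZE = 20;  // assumed default, not from the article
const MAX_PAGE_SIZE = 100;     // cap so a query string can't request the whole table

export function toSkipTake(page: number, pageSize: number = DEFAULT_PAGE_SIZE) {
  // Clamp rather than throw, so a malformed query string degrades gracefully
  const safePage = Number.isInteger(page) && page > 0 ? page : 1;
  const take = Math.min(Math.max(1, Math.floor(pageSize)), MAX_PAGE_SIZE);
  return { skip: (safePage - 1) * take, take };
}
```

Because the helper is pure, the Step 4 tests for it are one‑liners, which is exactly what makes the incremental approach cheap to verify.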

7. Self‑Review Prompt

After the code is generated, ask the model to audit its own work:

Review the code you just generated. Check for:
1. Security vulnerabilities (injection, auth bypass, data exposure)
2. Error‑handling gaps (uncaught exceptions)
3. Edge cases (empty input, concurrent access, very large input)
4. Performance issues (N+1 queries, missing indexes, unnecessary computation)

List every issue you find with the specific line number.

This catches 60‑70 % of the issues AI introduces—a cheap first pass before a human review.

8. Explanation Prompt (When You’re Unsure)

Explain every line of this code. For each line:
1. What it does
2. Why it’s necessary
3. What would happen if it were removed

If any line is unnecessary, say so.

If the model can’t justify a line, that line is likely superfluous or risky.

9. Formal Rules (What We Enforced After 6 Months)

  1. Every line of AI‑generated code receives the same PR review as human‑written code.
    The AI is the author; the developer is the responsible party.

  2. If you can’t explain every line, don’t commit it. Use the Explanation Prompt.

  3. Architectural decisions come from the team, not the prompt. AI writes the code; humans decide what to write and why.

  4. Treat AI‑generated code as having different failure modes: syntactically perfect but semantically wrong. Tests are the safety net.

  5. Document complex generated code with the original prompt:

    // Generated with prompt:
    // "Implement rate limiting middleware using sliding‑window algorithm.
    //  Max 100 requests per minute per API key. Use Redis for storage.
    //  Return 429 with Retry‑After header."
    // Reviewed by: @jane on 2026‑02‑15

    This helps future developers understand intent and regenerate if needed.
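For context, the documented prompt above might produce logic along these lines. This sketch swaps Redis for an in‑memory `Map` and takes an injectable timestamp so the sliding window is testable; all names are assumptions, not the real middleware:

```typescript
const WINDOW_MS = 60_000;  // one minute, per the prompt
const MAX_REQUESTS = 100;  // per API key per window, per the prompt

const hits = new Map<string, number[]>(); // apiKey -> request timestamps

export function allowRequest(apiKey: string, now: number = Date.now()): boolean {
  // Keep only timestamps inside the current window, then check the count
  const recent = (hits.get(apiKey) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    hits.set(apiKey, recent);
    return false; // caller should respond 429 with a Retry-After header
  }
  recent.push(now);
  hits.set(apiKey, recent);
  return true;
}
```

A production version would keep the timestamps in Redis (e.g. a sorted set per key) so the window survives restarts and is shared across instances.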

10. Measured Impact

| Metric | Before framework | After framework |
|---|---|---|
| Code written | +30 % | +25 % (slightly less because prompts take time) |
| Bug rate | +20 % | −10 % (10 % fewer bugs than the pre‑AI baseline) |
| Net productivity gain | ~15 % | ~35 % |
| Prompt cost | n/a | ~60 seconds per prompt (worth it for the bug reduction) |

TL;DR

  • Prompt with context, goal, non‑functional requirements, and an explicit “do not” list.
  • Show the model an example of your code style.
  • Iterate in small steps and ask the model to review its own output.
  • Require a line‑by‑line explanation when you don’t fully understand the code.
  • Treat AI‑generated code as any other contribution: full PR review, tests, and documentation.

Following this framework turned a modest 15 % productivity boost into a ~35 % gain while dramatically improving code quality. 🚀

Discussion Prompt

The math is clear.

How does your team use AI coding tools?
Have you developed team standards? I’d love to hear about approaches that work — and ones that failed.
Comments open.

AI Coding Prompt Templates Pack

If you want a ready‑made collection of prompt templates and cursor‑rules files, I’ve put together an AI Coding Prompt Templates Pack that includes:

  • 50+ prompt templates
  • 8 cursor‑rules files
  • Workflow templates for Claude Code and Copilot

Feel free to check it out and let me know what you think!

