Stop Explaining Bugs to AI - Show It the Bug

Published: March 4, 2026 at 01:11 AM EST
2 min read
Source: Dev.to

Why explaining bugs to AI can be painful

AI excels at code‑level reasoning, but giving it enough context about a bug often requires long, ambiguous explanations. The AI may guess incorrectly, especially for UI‑related issues that involve timing, state, and sequence.

Show the bug instead

When the AI can’t reliably drive your UI, let it see the problem:

  1. Add targeted logs around the action path.
  2. Reproduce the bug manually.
  3. Write the logs to a file.
  4. Let the AI read the file and debug from concrete facts.
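The logging step above can be sketched as a tiny helper. This is a minimal example, not part of the original post; the file name, event names, and entry shape are assumptions you would adapt to your app:

```typescript
import { appendFileSync } from "node:fs";

// Hypothetical log file path -- adjust for your project.
const LOG_FILE = "debug-trace.log";

// Append one timestamped, structured JSON line per event so the AI can
// later reconstruct ordering from the file alone.
function traceLog(event: string, detail: Record<string, unknown> = {}): string {
  const line = JSON.stringify({ ts: new Date().toISOString(), event, ...detail });
  appendFileSync(LOG_FILE, line + "\n");
  return line;
}

// Sprinkle calls around the suspect action path (names are illustrative):
traceLog("save.clicked", { docId: "42" });
traceLog("save.request.sent", { docId: "42" });
traceLog("save.request.done", { docId: "42", status: 200 });
```

One JSON object per line keeps the file trivially parseable, so the AI can read it directly instead of guessing at free-form log formats.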

This approach has helped me fix many bugs quickly.

Prompt example

There is a bug with xxx feature where yyy happens.
1. Add logs in that area and log to a local file.
2. Ask me to reproduce the bug a few times.
3. Read the logs and find the bug.
4. Fix the bug, then I'll test again.

Typical UI issues

UI bugs are often timing + state + sequence problems, such as:

  • “Clicked save twice in 400 ms”
  • “Request B completed before Request A”
  • “State changed after unmount”
  • “Token refreshed mid‑flight”

These scenarios are hard to describe in plain English but become obvious in logs.
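To make the "Request B completed before Request A" case concrete, here is a sketch (my own illustration, with an assumed trace-entry shape) of how such a race jumps out of a structured trace:

```typescript
// Assumed shape of a trace entry after parsing the log file.
interface Entry {
  ts: number;
  event: "sent" | "done";
  id: string;
}

// Flag requests whose responses arrive in a different order than they
// were sent -- the classic "B completed before A" race.
function outOfOrderResponses(trace: Entry[]): string[] {
  const sentOrder = trace.filter(e => e.event === "sent").map(e => e.id);
  const doneOrder = trace.filter(e => e.event === "done").map(e => e.id);
  return doneOrder.filter((id, i) => id !== sentOrder[i]);
}

// Example trace: A was sent first, but B finished first.
const trace: Entry[] = [
  { ts: 1, event: "sent", id: "A" },
  { ts: 2, event: "sent", id: "B" },
  { ts: 3, event: "done", id: "B" },
  { ts: 4, event: "done", id: "A" },
];

// outOfOrderResponses(trace) → ["B", "A"]: both completions are out of
// send order, which no plain-English bug report would make this obvious.
```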

Benefits of providing a real event trail

When you give the AI an actual log trace, it can:

  • Reconstruct the execution path
  • Identify ordering or race conditions
  • Map symptoms to likely code paths
  • Suggest fixes grounded in real evidence
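Reconstructing the execution path is mostly a matter of sorting the raw dump into a timeline. A minimal sketch, assuming the JSON-lines format from earlier (the field names are my own, not from the post):

```typescript
// Turn a raw JSON-lines log dump into an ordered event trail.
function buildTimeline(raw: string): string[] {
  return raw
    .split("\n")
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line) as { ts: string; event: string })
    .sort((a, b) => a.ts.localeCompare(b.ts)) // ISO timestamps sort lexically
    .map(e => `${e.ts} ${e.event}`);
}

// Out-of-order lines in the file still yield a correct timeline.
const raw = [
  '{"ts":"2026-03-04T01:00:02Z","event":"state.update"}',
  '{"ts":"2026-03-04T01:00:01Z","event":"component.unmount"}',
].join("\n");

// buildTimeline(raw) puts the unmount before the update, exposing a
// "state changed after unmount" bug at a glance.
```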

Tooling support

MCP‑based tooling can give agents browser‑automation abilities, so they can drive the UI themselves. Even with such tools, though, feeding the AI a log trace is often the fastest way to tackle tricky bugs.

If you’re using Aspire, it includes a built‑in MCP server that provides agents with rich app context:

https://aspire.dev/get-started/configure-mcp/

aspire mcp init

This command configures MCP integration for supported assistants.
