Enabling AI Agents to Use a Real Debugger Instead of Logging
Source: Dev.to
JDB Debugger Skill for AI Agents
Every Java developer has been there. Something breaks, and the first instinct is to litter the code with
System.out.println(">>> HERE 1");
Then `HERE 2`. Then `">>> HERE 3 — value is: " + x`. Rebuild. Rerun. Stare at the console. Repeat.
We’ve been doing this for decades. And now, so have our AI agents.
When you ask an AI coding assistant to debug a Java application, it almost always reaches for the same playbook:
- Add logging statements
- Recompile
- Rerun
- Read the output
- Reason about what happened
It’s the println debugging loop, automated — but it’s still println debugging.
What if the agent could just… use a real debugger?
Every JDK installation since the beginning of time includes jdb — the Java Debugger. It’s a command‑line tool that lets you:
- Set breakpoints
- Step through code
- Inspect variables
- Catch exceptions
- Examine threads
jdb speaks the same JDWP protocol that IntelliJ and Eclipse use under the hood, and because it’s purely text‑based it’s a perfect tool for AI agents that operate through terminal commands.
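Concretely, any JVM can be made debuggable by loading the JDWP agent at startup. A minimal sketch (`app.jar` and the port are placeholders):

```shell
# Start the target JVM with the JDWP agent listening on port 5005
# (app.jar and the port number are placeholders)
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -jar app.jar

# In a second terminal, attach jdb to the same port
jdb -attach localhost:5005
```

With `suspend=y` instead, the JVM waits for a debugger to attach before running `main`, which is useful for catching bugs in startup code.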
The problem: no agent knows how to use it. Until now.
Anthropic’s Agent Skills framework
Anthropic’s Agent Skills framework lets you package instructions, scripts, and reference material into a structured directory that AI agents can load dynamically. The format is simple:
- `SKILL.md` – a YAML front‑matter block followed by Markdown instructions
- Optional helper scripts
- Optional reference docs
Think of a skill as a runbook that the agent reads just‑in‑time when it recognizes a relevant task. The key insight is progressive disclosure — the agent only loads the skill’s description at startup (~100 tokens) and pulls in the full instructions only when it decides the skill is needed.
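For instance, a minimal `SKILL.md` header for a debugger skill might look like this (a sketch; the description wording is illustrative):

```markdown
---
name: jdb-debugger
description: Debug Java applications in real time with jdb, the JDK's
  command-line debugger. Use when the user needs breakpoints, stepping,
  variable inspection, or thread dumps.
---

# JDB Debugger

Full instructions, workflows, and script usage go here; the agent only
reads this body after matching on the description above.
```

Only the `name` and `description` travel in the agent's startup context; the Markdown body is loaded on demand.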
Building a JDB skill
I decided to build one that teaches agents how to operate jdb. The entire skill was built in a single conversation session with GitHub Copilot CLI. The process was surprisingly natural — I described what I wanted, and we iterated through research, design, implementation, and testing together.
The conversation started with a simple prompt:
“Java (the JDK) has a Debugger CLI. Let’s build a skill so that AI agents can debug applications in real time.”
Copilot:
- Researched the Agent Skills specification
- Studied Anthropic’s public skills repository for patterns
- Read Oracle’s jdb documentation
…and then produced the complete skill — all within the same session.
Skill structure
jdb-debugger-skill/
├── SKILL.md # Core instructions for the agent
├── scripts/
│ ├── jdb-launch.sh # Launch a JVM under JDB
│ ├── jdb-attach.sh # Attach to a running JVM
│ ├── jdb-diagnostics.sh # Automated thread dumps
│ └── jdb-breakpoints.sh # Bulk‑load breakpoints from a file
└── references/
├── jdb-commands.md # Complete command reference
└── jdwp-options.md # JDWP agent configuration
SKILL.md – decision tree (borrowed from Anthropic’s web‑app‑testing skill)
User wants to debug Java app →
├─ App is already running with JDWP agent?
│ ├─ Yes → Attach: scripts/jdb-attach.sh --port
│ └─ No → Can you restart with JDWP?
│ ├─ Yes → Launch with: scripts/jdb-launch.sh
│ └─ No → Suggest adding JDWP agent to JVM flags
│
├─ What does the user need?
│ ├─ Set breakpoints & step through code → Interactive JDB session
│ ├─ Collect thread dumps / diagnostics → scripts/jdb-diagnostics.sh
│ └─ Catch a specific exception → Use `catch` command in JDB
After the tree, the skill provides concrete debugging workflow patterns — e.g., how to investigate a NullPointerException, how to watch a method’s behavior, how to diagnose a deadlock — written as step‑by‑step JDB command sequences the agent can follow.
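The NullPointerException pattern, for example, boils down to a short command sequence like this (the variable name is illustrative; `catch`, `run`, `where`, `locals`, and `print` are standard jdb commands):

```text
> catch java.lang.NullPointerException
> run
  ...exception breakpoint hit...
> where          # call stack at the exact throw site
> locals         # local variables in the current frame
> print message  # examine the suspect reference directly
```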
Demonstration: a sample Swing application
We built a small Swing app with four intentional bugs:
| Bug | Description |
|---|---|
| NullPointerException | processMessage() returns null for empty input |
| Off‑by‑one error | The warning counter always shows one less than the actual value |
| NullPointerException after clear | warningHistory is set to null instead of calling .clear() |
| StringIndexOutOfBoundsException | text.substring(0, 3) on input shorter than 3 characters |
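The off‑by‑one bug is easy to reconstruct in isolation. The sketch below uses hypothetical names (the real app renders the string into a Swing label), but shows the same defect and its fix:

```java
// Illustrative reconstruction of the off-by-one counter bug (names are hypothetical)
public class WarningCounterDemo {

    // Buggy version: the displayed value lags the real count by one
    static String buggyLabel(int warningCount) {
        return "Warnings shown: " + (warningCount - 1);
    }

    // Fixed version: display the count as-is
    static String fixedLabel(int warningCount) {
        return "Warnings shown: " + warningCount;
    }

    public static void main(String[] args) {
        System.out.println(buggyLabel(1)); // prints "Warnings shown: 0" after one warning
        System.out.println(fixedLabel(1)); // prints "Warnings shown: 1"
    }
}
```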
Debugging session (performed by the agent)
The agent launched the app under jdb, set exception catches and method breakpoints, then ran the application:
> catch java.lang.NullPointerException
> catch java.lang.StringIndexOutOfBoundsException
> stop in com.example.WarningApp.showWarning
> run
When I clicked “Show Warning” in the Swing UI, jdb immediately hit the breakpoint. The agent stepped through the code, inspecting variables at each step:
Breakpoint hit: "thread=AWT-EventQueue-0", com.example.WarningApp.showWarning(), line=80
80 String text = inputField.getText();
AWT-EventQueue-0[1] next
Step completed: line=83
83 String processed = processMessage(text);
AWT-EventQueue-0[1] print text
text = "bruno"
It then stepped into processMessage, verified the return value, and stepped back out:
AWT-EventQueue-0[1] step
Step completed: com.example.WarningApp.processMessage(), line=105
105 String trimmed = message.trim();
AWT-EventQueue-0[1] step up
Step completed: com.example.WarningApp.showWarning(), line=83
AWT-EventQueue-0[1] print processed
processed = "⚠ BRUNO ⚠"
Catching the off‑by‑one bug
The agent continued to the counter update and inspected the live state:
AWT-EventQueue-0[1] print warningCount
warningCount = 0
AWT-EventQueue-0[1] next
Step completed: line=93
93 counterLabel.setText("Warnings shown: " + (warningCount - 1));
AWT-EventQueue-0[1] print warningCount
warningCount = 1
Observation:
`warningCount` is `1`, but line 93 displays `warningCount - 1`, which evaluates to `0`. The agent identified the bug by observing the program’s state at the exact line where the defect occurs — no additional logging required.
Takeaways
- jdb is a powerful, text‑based debugger that fits naturally into an AI‑agent workflow.
- By packaging usage instructions, helper scripts, and reference docs into an Agent Skill, we give agents the ability to debug Java applications in real time.
- The skill’s progressive disclosure keeps token usage low while still providing rich, on‑demand guidance.
- The demo shows that an agent can locate and explain bugs (including off‑by‑one errors) without resorting to noisy `println` statements.
Want to try it yourself?
1. Clone the repository: `git clone https://github.com/your‑org/jdb-debugger-skill.git`
2. Follow the instructions in `SKILL.md` to load the skill into your Anthropic‑compatible agent.
3. Use the provided scripts (`jdb-launch.sh`, `jdb-attach.sh`, …) to start debugging your own Java programs.
Happy debugging! 🚀
No Logging, No Guessing, No Recompilation
One Interesting Moment in the Session
The first time we tried `locals`, JDB responded:
Local variable information not available. Compile with -g to generate variable information
The agent immediately recognized the issue, quit JDB, recompiled with `javac -g` (which includes debug symbols), and relaunched.
This is exactly the kind of practical knowledge that a skill should encode — and that we later made sure to document in SKILL.md.
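The recompile step is worth spelling out; a sketch with placeholder paths:

```shell
# -g embeds the LocalVariableTable that jdb's `locals` command needs
# (source and output paths are placeholders)
javac -g -d out src/com/example/WarningApp.java
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:5005 -cp out com.example.WarningApp
```

Build tools mostly handle this for you: both Maven's compiler plugin and Gradle's standard compile tasks include debug information by default, so the problem tends to bite hand‑rolled `javac` invocations.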
`println` Debugging
The standard AI debugging loop today looks like this:
- Read the code
- Add `System.out.println` or logging statements
- Recompile
- Run the program
- Read the output
- Reason about what happened
- Modify the code
- Repeat
What JDB Brings to the Table
With JDB, the agent can:
- Set breakpoints at suspicious locations
- Run the program
- Inspect the actual runtime state — variable values, call stacks, thread states
- Step through execution line‑by‑line
- Catch exceptions at the exact throw site
This is a fundamentally different approach. The agent observes the program’s behavior as it runs, rather than inferring it from log output after the fact.
Why It Works So Well
- JDB is text‑based – it reads commands from stdin and writes output to stdout, which matches how AI agents interact with tools.
- Agent Skills are just Markdown – no SDK, no API integration, no plugin framework. Write instructions in a `.md` file and the agent follows them.
- Helper scripts are black boxes – the agent runs something like `scripts/jdb-attach.sh --port 5005` without needing to understand the script internals.
The skill follows the same “black‑box scripts” pattern used by Anthropic’s own web‑app‑testing skill, which invokes Playwright scripts without reading their source.
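The internals of those scripts never enter the agent's context. For illustration only, an attach wrapper like `jdb-attach.sh` could be as small as this (a hypothetical sketch, not the repository's actual script):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of an attach wrapper; the real script may differ.
# Usage: jdb-attach.sh --port 5005
set -euo pipefail

PORT=5005
if [[ "${1:-}" == "--port" ]]; then
  PORT="${2:?missing port number}"
fi

# Hand control to jdb over the JDWP socket
exec jdb -attach "localhost:${PORT}"
```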
Dynamic vs. Static Information
Most AI coding tools today work with static information — source code, type signatures, documentation.
JDB gives agents access to dynamic information — what actually happens at runtime. This is especially valuable for:
- Concurrency bugs – thread dumps and deadlock detection via JDB’s thread commands.
- State‑dependent bugs – inspecting object fields and local variables at specific execution points.
- Exception investigation – catching exceptions at the throw site rather than reading stack traces later.
- Integration issues – attaching to running services to observe behavior with real data.
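For the concurrency case in particular, jdb's built‑in thread commands cover a basic deadlock diagnosis; a typical sequence (the thread id is illustrative):

```text
> suspend        # pause every thread
> threads        # list threads with their ids and states
> thread 0x2b    # select a thread that looks stuck
> where          # its call stack
> where all      # stacks for all threads, to spot a lock cycle
> resume         # let the program continue
```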
Open‑Source Skill
- Repository:
- Includes a sample Swing app with four intentional bugs, allowing you to reproduce the exact debugging session.
- Full conversation transcript is available as a GitHub Gist.
Getting Started
/skill add jdb-debugger
Then just ask:
“Debug my Java application — there’s a `NullPointerException` I can’t figure out.”
Future Extensions
- Conditional breakpoints and watchpoints for more surgical debugging.
- Integration with build tools – auto‑detecting Maven/Gradle projects and compiling with `-g` before launching JDB.
- Remote debugging recipes – patterns for Kubernetes pods, Docker containers, and cloud‑hosted JVMs.
- Composability with other skills – combining JDB debugging with code‑analysis or test‑generation skills.
Takeaway
Every command‑line tool that developers use daily is a potential agent skill. Debuggers, profilers, database CLIs, network tools — they’re all text‑based interfaces waiting to be taught to AI agents.
The JDK gave us the debugger thirty years ago. We just needed to write the instructions.