I Used 22 Prompts to Plan an Entire MuleSoft-to-.NET Migration. Here's the Playbook.
Source: Dev.to
Last week I sat down to migrate a MuleSoft integration project to .NET 10 Minimal APIs.
Instead of spending days writing migration specs, architecture docs, and agent‑team definitions manually, I paired with Claude to do it in a single conversation.
22 prompts. That’s all it took to go from “where do I start?” to a fully structured, audited, ready‑to‑execute migration toolkit – complete with scanner agents, phased prompts, integration patterns, and project scaffolding.
Below is the repeatable playbook any dev team can follow.
Phase 1 – Explore & Ground the AI in Your Codebase
Most developers start an AI conversation with a wall of text explaining everything upfront. Don’t.
Start broad. Let the AI propose an approach first, then steer it.

> What is the best approach to convert MuleSoft to C# Minimal API?

Claude returned a solid general strategy (Strangler Fig pattern, connector‑mapping tables, a phased approach). A good foundation, but generic – it didn’t know my stack.
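The Strangler Fig idea is worth making concrete: endpoints that have been migrated are served natively by the new .NET app, and everything else is proxied through to the old Mule runtime until it can be retired. A minimal sketch, assuming a hypothetical legacy host and a GET‑only fallback:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Named client pointing at the legacy Mule runtime (hypothetical host)
builder.Services.AddHttpClient("mule", c =>
    c.BaseAddress = new Uri("https://legacy-mule.internal"));

var app = builder.Build();

// Already-migrated endpoint, served natively by .NET
app.MapGet("/api/orders/{id}", (string id) => Results.Ok(new { id, source = "dotnet" }));

// Everything else falls through to the old MuleSoft runtime (GET-only sketch;
// a real proxy would also forward method, headers, and body)
app.MapFallback(async (HttpContext ctx, IHttpClientFactory factory) =>
{
    var client = factory.CreateClient("mule");
    var upstream = await client.GetAsync(ctx.Request.Path.Value + ctx.Request.QueryString.Value);
    ctx.Response.StatusCode = (int)upstream.StatusCode;
    await upstream.Content.CopyToAsync(ctx.Response.Body);
});

app.Run();
```

As routes migrate you add more `MapGet`/`MapPost` handlers, and the fallback shrinks until the Mule runtime serves nothing.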
Then point it at the real code:

> Check my project on GitHub for the target architecture

After a mis‑step (wrong repo) I corrected it:

> That's not the right repo – try this URL instead
Key insight: The AI produces dramatically better output when it’s grounded in your real project, not a hypothetical one. Once it read my solution structure, EF Core setup, Polly config, and Azure AD auth, everything it generated afterward was contextually accurate.
Takeaway: Don’t explain your architecture in prose. Point the AI at your code and let it read.
Phase 2 – Request Deliverables That Match Your Tooling
Now that the AI understands the codebase, ask for output in the exact format you’ll use.
> Can you create detailed agent‑team definitions I can use with Claude Code CLI?
Claude generated a full set of agent‑team roles, prompts, and coordination rules – tailored for Claude Code’s agent teams feature. While reviewing, I spotted a gap:
> We're missing a step – we need to scan and inventory the source project before migrating anything
Because I hadn’t mentioned a Phase 0 scanner, the AI hadn’t thought of it. After I flagged the gap, it built a comprehensive 5‑agent scanner team that:
- Parses MuleSoft XML flows
- Catalogs DataWeave transforms
- Maps connectors to NuGet packages
- Generates a phased migration plan
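To give a flavor of what the XML‑parsing and connector‑mapping agents do, here is a standalone C# sketch (top‑level program with implicit usings). The folder layout, element names, and NuGet mapping table are illustrative assumptions, not the toolkit's actual tables:

```csharp
using System.Xml.Linq;

// Sketch of the Phase 0 inventory idea: walk every Mule flow XML under the
// source tree and map connector elements to candidate NuGet packages.
var root = Environment.GetEnvironmentVariable("MULE_SOURCE_PATH") ?? ".";

// Illustrative connector-to-package mapping (not a complete catalog)
var nugetMap = new Dictionary<string, string>
{
    ["http:request"] = "typed HttpClient via Microsoft.Extensions.Http",
    ["db:select"]    = "Microsoft.Data.SqlClient / EF Core",
    ["sftp:write"]   = "SSH.NET",
};

foreach (var file in Directory.EnumerateFiles(
             Path.Combine(root, "src", "main", "mule"), "*.xml"))
{
    foreach (var element in XDocument.Load(file).Descendants())
    {
        // Recover the "prefix:localName" form used in Mule configs
        var prefix = element.GetPrefixOfNamespace(element.Name.Namespace);
        if (prefix is null) continue;

        var key = $"{prefix}:{element.Name.LocalName}";
        if (nugetMap.TryGetValue(key, out var package))
            Console.WriteLine($"{Path.GetFileName(file)}: {key} -> {package}");
    }
}
```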
Then I fed it domain knowledge it couldn’t discover on its own:
> I already know the integrations we use – Azure AD auth, Key Vault for secrets, Graph API for user management, Box for documents, and SQL Server stored procedures
The scanner was pre‑seeded with ready‑to‑use C# implementation patterns for each integration – complete with typed clients, DI registration, and config examples. No hallucinated APIs. No outdated SDK calls.
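The pre‑seeded patterns follow the standard .NET typed‑client shape: an options class bound from configuration plus an `AddHttpClient<T>` registration. All names below are illustrative, not taken from the real toolkit:

```csharp
using Microsoft.Extensions.Options;

// Illustrative options class, bound from a "Graph" configuration section
public sealed class GraphOptions
{
    public string BaseUrl { get; set; } = "https://graph.microsoft.com/v1.0";
}

// Typed client: DI injects a pre-configured HttpClient
public sealed class GraphUserClient(HttpClient http)
{
    public Task<string> GetUserJsonAsync(string id) =>
        http.GetStringAsync($"users/{id}");
}

// Registration in Program.cs:
// builder.Services.Configure<GraphOptions>(builder.Configuration.GetSection("Graph"));
// builder.Services.AddHttpClient<GraphUserClient>((sp, client) =>
// {
//     var opts = sp.GetRequiredService<IOptions<GraphOptions>>().Value;
//     client.BaseAddress = new Uri(opts.BaseUrl.TrimEnd('/') + "/");
// });
```

Consumers then take `GraphUserClient` as a constructor parameter and never touch raw base URLs or auth plumbing.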
Takeaway: The AI can’t read your production systems. Feed it what you know, and it’ll build on that knowledge rather than guessing.
Phase 3 – Challenge Architecture Decisions
Most people accept the AI’s first answer. Don’t. Debate it.
I initially suggested:
> I think we should copy the source code into a docs folder inside the project
Then I immediately challenged my own idea:
> Actually no – the source should stay external and never be modified. Ask the user for the path at runtime instead
That 30‑second back‑and‑forth produced a fundamentally better design:
- The MuleSoft project stays read‑only in its original location.
- The scanner asks for the path (`MULE_SOURCE_PATH`).
- No file duplication, no accidental modifications, clean separation.
The AI adapted instantly: it rewrote the Phase 0 init prompt, updated all agent‑team definitions to reference `MULE_SOURCE_PATH`, and added validation for the directory structure.
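The directory validation boils down to a small pre‑flight check. The expected Mule 4 layout used here (`src/main/mule`, `pom.xml`) is the typical Maven structure, assumed for illustration rather than taken from the toolkit:

```csharp
// Pre-flight check sketch: resolve MULE_SOURCE_PATH, confirm it points at a
// plausible MuleSoft project, and refuse to continue otherwise. The source
// tree is only ever read, never written.
var root = Environment.GetEnvironmentVariable("MULE_SOURCE_PATH")
    ?? throw new InvalidOperationException(
        "Set MULE_SOURCE_PATH to the MuleSoft project root.");

// Typical Mule 4 Maven layout (assumption)
string[] expected = ["src/main/mule", "pom.xml"];

foreach (var item in expected)
{
    var path = Path.Combine(root, item);
    if (!Directory.Exists(path) && !File.Exists(path))
        throw new InvalidOperationException(
            $"'{path}' not found – is MULE_SOURCE_PATH correct?");
}

Console.WriteLine($"Scanning (read-only): {root}");
```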
Takeaway: The best architecture emerges from debate, not from a single prompt. Push back on the AI – and on yourself. The AI is fast enough to restructure everything in seconds.
Phase 4 – Evolve the Design as Context Changes
Real projects don’t stand still while you plan. Mid‑session, the project structure changed.
I realized:
> This is a reusable template, not the actual project – we need a scaffolding step that renames everything
Claude immediately generated init scripts for both Bash and PowerShell. Then it turned out another session was already handling this differently, using the `dotnet new` template engine with `sourceName`:
> Another session is handling the template config already, so we don't need the init script anymore – drop it
Most developers would forget to tell the AI about this, ending up with duplicate work, conflicting approaches, and docs that reference deleted files. One prompt – “drop it” – made Claude:
- Remove the scripts
- Update all cross‑references
- Simplify the workflow
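For context, the mechanism that made the init scripts redundant is the template engine's `sourceName` setting: every occurrence of that string in file contents and file names is replaced with the name the user passes to `dotnet new`. A minimal `template.json` with placeholder identity values might look like:

```json
{
  "$schema": "http://json.schemastore.org/template",
  "author": "Your Team",
  "classifications": [ "Web", "Minimal API" ],
  "identity": "MyCompany.MuleMigration.Template",
  "name": "MuleSoft migration starter",
  "shortName": "mulemigrate",
  "sourceName": "MyCompany.MuleMigration",
  "preferNameDirectory": true
}
```

With this in `.template.config/template.json`, `dotnet new mulemigrate -n Acme.Orders` renames everything in one step – no custom scripts needed.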
Takeaway: When something else handles a concern, tell the AI to remove the work – not just add more. AI‑generated docs with stale references are worse than no docs.
Phase 5 – Quality Gate Before Shipping
AI can make consistency errors. Across six interconnected files you’ll find naming leaks, broken cross‑references, and stale paths. Always audit before shipping.
First issue:

> The generated config still has hardcoded project name references – it needs to use generic placeholders

Then the repo structure changed:

> These files belong in a separate toolkit repo – here's the folder structure, reorganize everything to fit

And the final, most important prompt:

> Do a deep review of every file – check cross‑references, path consistency, typos, and logical errors
Claude ran a full‑project review, corrected the placeholders, updated paths, fixed naming mismatches, and produced a clean, ready‑to‑commit toolkit.
Takeaway: Run a systematic, AI‑assisted audit before you consider the output “done.”
TL;DR Playbook
| Phase | Action | Why |
|---|---|---|
| 1 – Explore | Point the AI at the real repo (URL) | Grounding yields context‑accurate output |
| 2 – Deliverables | Ask for concrete artifacts (agent‑team definitions, scripts) | You get usable code, not prose |
| 3 – Challenge | Debate every design suggestion | Produces stronger architecture |
| 4 – Evolve | Update the AI when external tools change the plan | Avoids duplicate or stale work |
| 5 – Quality Gate | Run a deep, cross‑file audit | Catches consistency errors before shipping |
Follow these steps, and you can turn a multi‑week migration into a single‑session AI‑driven sprint. Happy migrating!
That deep‑review prompt kicked off a point‑by‑point audit across all the files. It found and fixed:
- Inconsistent placeholder names (`{Name}` vs `{ProjectName}`)
- Wrong MuleSoft directory paths in quick‑reference prompts
- A typo in a JSON config (`EnterprisId`)
- Stale “copied to” language that should have said “accessible at”
- Old file names in cross‑references
Without that final audit, I would have shipped docs that pointed to non‑existent files and used incorrect MuleSoft paths. The audit took 2 minutes. It would have cost hours of debugging later.
Takeaway: Never ship AI‑generated output without a final audit pass. Ask the AI to check its own work – it’s surprisingly good at catching its own mistakes when you explicitly ask.
The Playbook
Here’s the pattern, distilled:
- Start broad – let the AI propose; don’t over‑specify upfront.
- Ground it in real code – point at your actual repo, not a hypothetical one.
- Request usable artifacts – ask for the format your tooling actually consumes.
- Feed domain knowledge – tell it what you know about your integrations, constraints, and systems.
- Identify gaps – review output and flag what’s missing.
- Debate architecture – push back on assumptions, including your own.
- Evolve the plan – when context changes, update the AI and remove stale work.
- Audit everything – demand a thorough cross‑file review before shipping.
22 prompts. One conversation. A complete migration toolkit with scanner agents, phased migration prompts, integration patterns, setup guides, and project scaffolding – all verified, cross‑referenced, and ready to use.
The AI didn’t replace my judgment. It amplified it. Every architectural decision was mine. The AI just made it possible to execute on those decisions in hours instead of days.