Start Hacking Now: What a €XXM API Migration Taught Me About AI in Production

Published: December 12, 2025 at 07:53 AM EST
4 min read
Source: Dev.to

Overview

Stressed manager, clever consultant, successful presentation.

This week at API Days Paris, I watched something rare: a client and consultant presenting together about an AI project that actually shipped to production.

Cyrille Martraire (CTO at Arolla) and Thomas Nansot (Director of Engineering at a major European mobility platform) walked through their journey using AI to validate a critical API migration—one handling hundreds of millions of tickets annually. What made it different? They shared the dead ends, the failed approaches, and how their final solution was nothing like the original plan.

The Problem

Thomas’s company needed to migrate from a legacy API to a new architecture. The stakes were high—any regression could affect millions of transactions. Traditional testing would be prohibitively expensive and slow, especially since every contract change meant redoing the work.

They needed a way to guarantee non‑regression between APIs with completely different structures.

The 6 Key Learnings

1. 🔨 Hack First, Polish Never

When Thomas reached out, Cyrille didn’t ask for requirements docs. He immediately built a hacky prototype with fake systems to prove the concept could work.

Lesson: In AI projects, velocity of learning beats polish. You can’t plan your way through uncertainty—you prototype your way through it.

```python
# Quick prototype approach (sketch): fake both systems, let the AI judge
legacy_response = fake_legacy_api()   # stub returning canned legacy JSON
new_response = fake_new_api()         # stub returning canned new-schema JSON
ai_compares(legacy_response, new_response)
# Does it work? Kind of? Good enough to continue!
```

2. ⚡ AI Generates Code > AI Runs Tests

Their first production attempt was elegant: an AI agent does everything end‑to‑end. It was also broken:

  • Slow: 2+ minutes per test
  • Expensive: ~$1 per test run
  • Unreliable: Random failures

Breakthrough: Use AI to generate the test code, not run the tests.

```python
# ❌ Approach 1: Live AI (expensive, slow)
for case in test_cases:
    result = ai.compare(legacy_api(case), new_api(case))  # $$$ per call

# ✅ Approach 2: Generated code (cheap, fast)
test_code = ai.generate_comparison_code()  # one LLM call, done once
for case in test_cases:
    result = test_code.run(case)  # $0, deterministic

Cost comparison

  • Live AI: $1 × 1000 tests = $1000
  • Generated: $2 to generate + $0 × 1000 = $2

Pattern: AI works “offline” to create tools; those tools do the actual work.
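A minimal sketch of this generate-once, run-many pattern. The "generated" comparison function is hard-coded here as a stand-in for an LLM's actual output, and the field mapping (`pax_email` → `passenger_email`) is invented for illustration:

```python
# The string below stands in for code an LLM would emit once, offline.
GENERATED_COMPARISON = '''
def compare(legacy, new):
    # Map legacy field names to the new schema, then compare values.
    field_map = {"pax_email": "passenger_email", "amount_cents": "price_cents"}
    mismatches = []
    for old_key, new_key in field_map.items():
        if legacy.get(old_key) != new.get(new_key):
            mismatches.append((old_key, legacy.get(old_key), new.get(new_key)))
    return mismatches
'''

namespace = {}
exec(GENERATED_COMPARISON, namespace)  # the "AI" step happens exactly once
compare = namespace["compare"]

# From here on, every run is deterministic and costs nothing.
legacy = {"pax_email": "a@b.com", "amount_cents": 1200}
new = {"passenger_email": "a@b.com", "price_cents": 1200}
mismatches = compare(legacy, new)
```

In practice the generated code would be reviewed and committed like any other test code; the point is that the LLM call sits outside the test loop.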

3. 🗄️ MCP: Query JSON Like a Database

The API schemas were massive. Cramming everything into the LLM context caused attention issues—even when it technically fit, quality degraded.

Solution: Model Context Protocol (MCP)

Instead of sending the whole JSON:

```python
prompt = f"Analyze this entire JSON: {schema_10mb}"  # whole 10 MB blob in one prompt
```

use a query interface:

```python
mcp = JSONMCPServer(huge_schema)
email = mcp.query_path("passenger.email")
keys = mcp.list_keys("journey.segments")
```

They specifically recommended the “JSON‑to‑MCP” tool.
Why it matters: MCP is like moving from “here’s a phone book” to “here’s a search interface,” enabling scalable LLM interactions.
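To make the idea concrete, here is a minimal stand-in for such a query interface. `JSONMCPServer`, `query_path`, and `list_keys` mirror the names used above, but this is a plain Python sketch, not the real JSON-to-MCP tool or an actual MCP server:

```python
class JSONMCPServer:
    """Toy query interface over a large JSON document."""

    def __init__(self, document: dict):
        self.document = document

    def query_path(self, path: str):
        """Return the value at a dotted path, e.g. 'passenger.email'."""
        node = self.document
        for part in path.split("."):
            node = node[part]
        return node

    def list_keys(self, path: str):
        """Return the keys (or list indices) available under a dotted path."""
        node = self.query_path(path)
        if isinstance(node, list):
            return list(range(len(node)))
        return list(node.keys())


# The LLM asks narrow questions instead of reading the whole schema.
schema = {"passenger": {"email": "a@b.com"},
          "journey": {"segments": {"s1": {}, "s2": {}}}}
mcp = JSONMCPServer(schema)
email = mcp.query_path("passenger.email")
segment_keys = mcp.list_keys("journey.segments")
```

Each query returns a tiny slice of the document, so the context window holds only what the current question needs.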

4. 🎲 Accept Being Surprised

“As a manager, I expected to have a clear vision of what would work. I had to admit the solution was completely different from what I imagined—and it was better.” – Thomas

What they tried

  • Full AI approach → Too slow & expensive
  • Slice & compare → Too complex
  • Generated code + MCP → Success!

The winning solution wasn’t in the original plan. Short feedback cycles and willingness to pivot were essential.

Mindset: If you bring too much certainty to AI projects, you limit yourself. Let the technology surprise you.

5. 💾 Offline AI > Online AI (Sometimes)

Key insight: “AI is sometimes better offline.”

| Pattern | Use Case | Cost | Speed |
|---|---|---|---|
| Live AI | Dynamic decisions, personalization | High per use | Variable |
| Generated AI | Repetitive tasks, validation | One-time | Fast |

Examples of offline AI

  • ✅ AI generates test suites → run 1000×
  • ✅ AI writes Terraform modules → apply repeatedly
  • ✅ AI creates validation rules → check all data
  • ✅ AI generates docs templates → reuse forever
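As a sketch of the validation-rules case: the rules below are hard-coded stand-ins for what an LLM would generate once, after which they can be applied to every record at zero marginal cost:

```python
# Stand-in for LLM-generated validation rules (generated once, offline).
GENERATED_RULES = [
    ("email must contain '@'", lambda r: "@" in r.get("email", "")),
    ("price must be non-negative", lambda r: r.get("price_cents", 0) >= 0),
]

def validate(record: dict) -> list:
    """Return the names of all rules the record violates."""
    return [name for name, check in GENERATED_RULES if not check(record)]

records = [
    {"email": "a@b.com", "price_cents": 1200},
    {"email": "bad-email", "price_cents": -5},
]
failures = [validate(r) for r in records]
# First record passes; second violates both rules.
```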

6. 🎓 Knowledge Transfer > Expert Does Everything

After proving the technical concept, Cyrille’s role shifted from “maker” to “coach.”

Evolution

  1. External expert builds solution
  2. Proves it works, gets buy‑in
  3. Expert teaches internal team hands‑on
  4. Team runs it independently
  5. Learnings apply to other projects

Impact: Even AI‑skeptical engineers got excited about these techniques for their own work. Real value = solving the problem plus building internal capability.

Practical Takeaways

✅ Do

  • Start with hacky prototypes
  • Prefer generated artifacts over live decisions
  • Use MCP‑style patterns for large data structures
  • Plan for short feedback cycles
  • Build internal capability, not just solutions

❌ Don’t

  • Wait for perfect requirements
  • Assume “full AI” is always the answer
  • Fight context‑window limits—work around them
  • Plan everything upfront
  • Keep expertise external

The Messy Middle Is the Point

What I appreciated most was their honesty. Too many AI talks show polished end results and skip the dead ends. The dead ends are the story—where the learning happens. They didn’t have a perfect plan; they had a hypothesis, willingness to iterate, and courage to be surprised. That’s probably the most valuable lesson of all.

Demo Code

I created a working demo of these patterns: [GitHub link]

The repository includes:

  • MCP server for JSON querying
  • AI code‑generation examples
  • Fake APIs for quick prototyping
  • Generated vs. live AI comparison

Discussion Questions

  • Have you tried the “AI generates code” pattern in your projects? How did it compare to live AI?
  • What’s your biggest challenge with LLM context windows?
  • How do you balance exploration vs. planning in AI projects?

Speakers

  • Cyrille Martraire – CTO at Arolla, author of Living Documentation
  • Thomas Nansot – Director of Engineering managing ~150 engineers on a mobility distribution platform serving millions across Europe
