How the 5-Day Intensive Felt
Introduction
Before this course, “AI agents” for me were basically just LLMs with a couple of tools glued on. Over the 5‑Day AI Agents Intensive with Google and Kaggle, that changed a lot—agents started to feel more like teammates that can follow goals, call the right tools, and leave a trail of reasoning you can actually inspect. The focus on the AI Canvas, routing, and traces made me think less about single prompts and more about how the whole system behaves over time.
The core idea that stuck with me is that agents are not just “chat completions,” but systems that can plan, act, remember, and be measured like any other piece of software. That mindset shift ended up shaping the way I built my capstone project, Orca.
What the 5 Days Covered
Day 1 – Agent Architectures & “From Prompt to Action”
Explained how a user request turns into plans, tool calls, and loops instead of a single response.
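If I compress that idea into code, it looks roughly like this. The planner and tools below are placeholders I made up, not course code; it just shows the plan → act → observe loop instead of a one-shot completion:

```python
# Hypothetical sketch of the Day 1 idea: a request becomes a loop of
# plan -> act -> observe, not a single chat completion.

def run_agent(request: str, llm, tools: dict, max_steps: int = 5) -> str:
    """llm(prompt) returns either {"tool": name, "args": {...}} or {"answer": str}."""
    history = [f"User request: {request}"]
    for _ in range(max_steps):
        step = llm("\n".join(history))                      # plan the next action
        if "answer" in step:                                # the model decided it is done
            return step["answer"]
        result = tools[step["tool"]](**step["args"])        # act: call the chosen tool
        history.append(f"Observation from {step['tool']}: {result}")  # observe, then loop
    return "Stopped after max_steps without a final answer."
```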
Day 2 – Agent Tools & Best Practices
Covered the Agent Development Kit (ADK) and Model Context Protocol (MCP) for safely connecting agents to real APIs and services.
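The pattern from the labs, as I remember it, is that a plain Python function with a clear docstring becomes a tool the agent can call. Roughly like this (the tool, the instruction, and the model id are placeholders, so treat it as a sketch rather than exact lab code):

```python
from google.adk.agents import Agent  # from the google-adk package used in the labs

def get_quote(ticker: str) -> dict:
    """Return the latest price for a ticker symbol (stubbed here for illustration)."""
    return {"ticker": ticker, "price": 123.45}

# The agent gets a narrow instruction and a small, predictable set of tools.
market_agent = Agent(
    name="market_agent",
    model="gemini-2.0-flash",   # placeholder model id
    instruction="Answer questions about prices using the get_quote tool.",
    tools=[get_quote],
)
```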
Day 3 – Agent Sessions & Memory
Discussed managing short‑term context and longer‑term knowledge so agents can handle multi‑turn tasks and remember what matters.
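The split I took away: short-term context lives with the current conversation, while longer-term facts get written somewhere the agent can read back later. A toy version of that idea (not the ADK session API, just the shape of it):

```python
# Toy illustration of short-term session context vs. longer-term memory.
from dataclasses import dataclass, field

@dataclass
class Memory:
    facts: dict[str, str] = field(default_factory=dict)   # longer-term knowledge, e.g. a risk profile

@dataclass
class Session:
    memory: Memory                                         # shared across conversations
    turns: list[str] = field(default_factory=list)         # short-term, multi-turn context

    def context_for_model(self, last_n: int = 10) -> str:
        facts = "\n".join(f"{k}: {v}" for k, v in self.memory.facts.items())
        recent = "\n".join(self.turns[-last_n:])
        return f"Known facts:\n{facts}\n\nRecent turns:\n{recent}"
```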
Day 4 – Agent Observability & Evaluation
Focused on logging, tracing, metrics, and evaluation runs in the ADK UI and via the CLI.
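Even outside the ADK UI, the core idea is simple: every tool call should leave a structured record you can replay later. A hand-rolled sketch of that (not the ADK's tracing API):

```python
import json
import time
from functools import wraps

def traced(fn):
    """Wrap a tool so every call leaves a structured, replayable trace record."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        record = {
            "tool": fn.__name__,
            "args": kwargs or list(args),
            "result": result,                       # assumes tools return JSON-serializable values
            "latency_s": round(time.time() - start, 3),
        }
        print(json.dumps(record))                   # in a real system this would go to a log store
        return result
    return wrapper
```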
What I Built: Orca
Orca is my capstone project: a small team of focused agents for market analysis. It uses custom tools to grab real market data, compute indicators, and run forecasts before the agents interpret anything. The labs on tool calling and step‑by‑step traces were especially helpful: when an agent picked the wrong tool or misinterpreted a tool's output, the trace made the error obvious.
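To make the "tools compute, agents interpret" split concrete, here is the rough shape of such a chain, with stubbed data and made-up names rather than Orca's actual tools:

```python
def fetch_prices(ticker: str) -> list[float]:
    """Stubbed market data; a real tool would call an API here."""
    return [101.0, 102.5, 101.8, 103.2, 104.0, 103.6, 105.1]

def moving_average(prices: list[float], window: int = 3) -> list[float]:
    """Simple moving average, one of the indicators an agent might request."""
    return [sum(prices[i - window:i]) / window for i in range(window, len(prices) + 1)]

def naive_forecast(prices: list[float]) -> float:
    """Trivial forecast: extend the last observed change one step forward."""
    return prices[-1] + (prices[-1] - prices[-2])

prices = fetch_prices("ACME")
print(moving_average(prices))   # indicator values the agent can reason about
print(naive_forecast(prices))   # a number, not advice; interpretation stays with the agent
```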
How the Course Shaped Orca
- Day 2 gave me a solid template for building data and indicator tools—small, focused, and predictable, so agents can call them safely.
- Day 3 inspired a memory layer that can retain a user’s risk profile, watchlist, or previous decisions while respecting financial data privacy.
- Day 4 pushed me to treat traces and evaluation runs as first‑class features, turning Orca from a black box into a “glass box” for financial decisions.
What Stood Out in the Labs
Giving each agent a clear, narrow role made the system much cleaner. Instead of one overloaded “smart” agent doing everything, splitting the work across a few focused agents made debugging and explanation far simpler, which is crucial for transparency and trust in finance.
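A stripped-down version of that split, with each role as its own small component (hypothetical names, plain Python instead of a real multi-agent framework):

```python
# Each "agent" here is just a function with one narrow job; an orchestrator chains them.
def data_agent(ticker: str) -> dict:
    """Only fetches and packages raw numbers; never interprets them."""
    return {"ticker": ticker, "prices": [101.0, 102.5, 104.0]}

def analysis_agent(data: dict) -> dict:
    """Only turns numbers into named observations."""
    trend = "up" if data["prices"][-1] > data["prices"][0] else "down"
    return {"ticker": data["ticker"], "trend": trend}

def explainer_agent(analysis: dict) -> str:
    """Only writes the user-facing explanation, citing the observation it was given."""
    return f"{analysis['ticker']} has trended {analysis['trend']} over the sampled window."

def orchestrate(ticker: str) -> str:
    return explainer_agent(analysis_agent(data_agent(ticker)))

print(orchestrate("ACME"))
```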
How My View of Agents Changed
For Orca, the goal shifted from a black‑box model to a glass‑box experience. Traces, intermediate reasoning, and small debates between agents are now part of the product experience, especially when users are making real‑money decisions.
Where I Want to Take This Next
The biggest shift for me is the questions I now ask: it’s no longer just “How do I prompt this model?” but “How do I design an agentic system that people can rely on, debug, and improve over time?” Orca is my first serious attempt at answering that question, and the intensive made it feel possible.
Try Orca Yourself
- Live app:
- GitHub repository:
- Demo video (2 minutes):