Turning Exam Stress Into an AI Project: My AI Agents Intensive Experience
Source: Dev.to
If there’s one moment every student knows too well, it’s the night before an exam—when everything feels like it’s collapsing at once. I’ve had that moment more times than I’d like to admit: sitting at a desk cluttered with notes, jumping between panic and confusion, and quietly wishing for someone who could just help me sort out my thoughts.
Not another chatbot. Most LLMs never hit that mark for me. They responded, sure, but they didn’t feel the urgency. Sometimes they rambled, sometimes they hallucinated, sometimes they just missed the emotional tone entirely.
So when I joined the Google & Kaggle AI Agents Intensive, I hoped for at least one new idea… and ended up gaining much more.
Challenges Before the Intensive
- Boilerplate exhaustion – Half my time was spent plugging pieces together instead of building anything meaningful. Tools here, routes there, configs everywhere—it drained the excitement fast.
- Memory and session confusion – I never fully trusted whether the agent remembered what it was supposed to. Sometimes it held onto context, sometimes it forgot entirely. I wanted to understand why, not just memorize rules.
- Deployment fear – One tiny mistake—an environment mismatch, a missing key—and everything blew up. It felt like walking into an exam without preparing. Terrifying.
- Billing worries as a student – Many frameworks rely on API keys tied to credit cards. As someone without income—and with a family situation that doesn’t make these conversations easy—I constantly worried about:
  - random charges
  - rate limits
  - automatic renewals
These concerns made experimentation stressful.
ADK Experience
Day 1–2: Lightening the Load
When I saw “ADK” on the course schedule, my first reaction was “Oh no… another thing to struggle with.” Yet the first two days gave me a completely different impression.
- Repetitive wiring disappeared.
- Routing and memory were handled neatly.
- The structure made sense right away.
I could finally focus on the logic, not the plumbing. It felt like someone built tools for the way my brain already wanted to work.
Day 3: The Toughest Part
Nothing behaved the way I expected:
- The agent sometimes forgot context midway.
- Tool calls changed state in ways I didn’t anticipate.
- Session state and memory state felt like identical twins I kept mixing up.
I replayed labs repeatedly, dug through discussions, and used `run_debug` so many times it became second nature. Then everything clicked:
- Memory and sessions finally made sense.
- The router felt intuitive.
- Tools behaved consistently.
- `google_search` stopped acting unpredictably.
- The `ParallelAgent` no longer scared me.
- Debugging transformed into actual reasoning.
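The distinction between session state and memory that finally clicked for me can be sketched in plain Python. This is a conceptual sketch, not ADK's actual API; the class names, fields, and the `end_of_session` promotion step are all my own illustration of the idea that session state dies with the conversation while memory outlives it:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Short-lived: exists only for one conversation."""
    history: list[str] = field(default_factory=list)
    state: dict = field(default_factory=dict)   # e.g. the current exam subject

@dataclass
class MemoryStore:
    """Long-lived: survives across many sessions."""
    facts: dict = field(default_factory=dict)   # e.g. known weak topics

def end_of_session(session: Session, memory: MemoryStore) -> None:
    # Only facts we explicitly promote outlive the session.
    if "weak_topic" in session.state:
        memory.facts["weak_topic"] = session.state["weak_topic"]

# First session: the student struggles with one topic.
memory = MemoryStore()
s1 = Session()
s1.state["weak_topic"] = "integration by parts"
end_of_session(s1, memory)

# A second session starts completely fresh...
s2 = Session()
assert "weak_topic" not in s2.state                            # session state is gone
assert memory.facts["weak_topic"] == "integration by parts"    # memory persists
```

Once I saw the two as separate lifetimes rather than "identical twins," the agent's forgetting stopped feeling random.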
It was the first time I genuinely felt capable of building a proper multi‑agent system instead of forcing one together.
The Last‑Minute Learning Copilot
Architecture Overview
The project grew directly from my lived experience as a student. It’s designed to be more than an answer bot—it notices confusion, senses urgency, calms stress, explains concepts clearly, builds realistic study plans, and uses tools intelligently, combining everything into one grounded response.
| Agent | Role |
|---|---|
| RouterAgent | Decides which part of the system should take over: • Concept questions → ExplanationAgent • Planning → StudyPlannerAgent • Emotional or conceptual struggle → StressAgent + WeakTopicAgent • Fallback for unclear intents |
| StudyPlannerAgent | Checks the time and builds a realistic schedule—crucial during last‑minute prep. |
| StressAgent + WeakTopicAgent | Run together to detect emotional stress, misunderstanding, and weak areas that need revision. |
| ExplanationAgent | Provides structured explanations, grounded facts, and calls google_search only when necessary. |
| OrchestratorAgent | Acts as the heart of the whole setup, stitching together responses from the specialist agents. |
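The routing column of the table can be mimicked with a toy keyword dispatcher. This is a stand-in for the LLM-driven `RouterAgent`, not how ADK actually routes; the keyword heuristics, function name, and agent-name strings are illustrative only:

```python
def route(message: str) -> list[str]:
    """Pick which specialist agents should handle a message (toy heuristic)."""
    text = message.lower()
    # Emotional or conceptual struggle: run both specialists together.
    if any(w in text for w in ("panic", "stressed", "scared", "confused")):
        return ["StressAgent", "WeakTopicAgent"]
    # Planning requests go to the scheduler.
    if any(w in text for w in ("plan", "schedule", "tomorrow", "hours left")):
        return ["StudyPlannerAgent"]
    # Concept questions go to the explainer.
    if any(w in text for w in ("what is", "explain", "why does", "how does")):
        return ["ExplanationAgent"]
    # Fallback for unclear intents.
    return ["OrchestratorAgent"]

assert route("I'm panicking and confused about recursion") == ["StressAgent", "WeakTopicAgent"]
assert route("Build me a plan, my exam is tomorrow") == ["StudyPlannerAgent"]
assert route("Explain Bayes' theorem") == ["ExplanationAgent"]
```

In the real system an LLM makes this decision from intent, not keywords, but the shape of the dispatch is the same: one router, several specialists, one fallback.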
Video Creation
I wanted students to feel the story behind the system, not just read about it. I recorded my own voice‑over, which helped showcase the human side of the project alongside the technical details.
First Clean Run
- The router picked the right paths.
- The planner generated a realistic study plan.
- The stress agent responded calmly.
- The explanation agent fetched accurate info.
- The orchestrator wrapped everything into one coherent answer.
For the first time, it didn’t feel like pieces glued together—it felt like a cohesive team of specialists.
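That "team of specialists" flow can be sketched with asyncio: the stress and weak-topic analyses run concurrently, and an orchestrator stitches the pieces into one reply. This mirrors the idea behind ADK's `ParallelAgent` rather than using its API; the agent functions here are stubs with hard-coded replies standing in for model calls:

```python
import asyncio

async def stress_agent(msg: str) -> str:
    await asyncio.sleep(0)  # stand-in for a real model call
    return "Take a breath; one topic at a time."

async def weak_topic_agent(msg: str) -> str:
    await asyncio.sleep(0)
    return "Weak spot detected: recursion base cases."

async def orchestrate(msg: str) -> str:
    # Run both specialists concurrently, like a ParallelAgent,
    # then stitch their outputs into one grounded response.
    calm, weak = await asyncio.gather(stress_agent(msg), weak_topic_agent(msg))
    return f"{calm} {weak}"

reply = asyncio.run(orchestrate("I'm lost on recursion and panicking"))
assert "Take a breath" in reply and "recursion" in reply
```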
Key Takeaways
- ADK removes a lot of friction I used to struggle with.
- Memory and session design are critical to get right.
- Multi‑agent setups solve problems that a single model can’t.
- `run_debug` is basically a flashlight in a dark tunnel.
- Adding creative storytelling truly elevates a technical project.
- Real problems make the best project ideas.
Personal Growth
- I became more confident as an AI engineer.
- I gained a clearer design mindset.
- I learned to mix creativity with technical work.
- I feel more aligned with the direction I want to take in AI.
The Last‑Minute Learning Copilot isn’t just a project—it’s something I genuinely wish existed when I needed it most.
Project Code
The full code, from early routing tests to the final multi‑agent flow, is documented in my Kaggle notebook:
👀 Look Inside the Machine Room
(Replace # with the actual URL to the notebook.)
Huge thanks to Google and Kaggle for creating a course that blends theory, practice, intuition, and creativity so well. This experience didn’t just teach me how to build agents; it taught me to think like one.
Tags: aiagents google kaggle ai reflection learning multiagent adk