My Journey Through the AI Agents Intensive: Building an AI Personal Safety & Emergency Assistant
Source: Dev.to
What I Learned From the 5‑Day Intensive
Day 1 — The Fundamentals
- Reasoning loops, agent instructions, routing, and agent orchestration.
Day 2 — Tools
- How agents use tools to extend their abilities beyond text.
- Applied later to build simulated SMS, email, and call alerts.
Day 3 — Memory
- Agents that remember past interactions behave more intelligently.
- Implemented risk trend detection where repeated danger messages trigger escalation.
Day 4 — Evaluation & Observability
- Techniques for testing, tracking, and debugging agent behavior—essential for safety systems.
Day 5 — Agent‑to‑Agent Communication
- Enabled agents to collaborate and form a full workflow, allowing the design of a multi‑agent pipeline for emergency response.
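The Day 1 fundamentals can be pictured as a minimal observe–decide–act loop. This is an illustrative sketch, not course code; the `decide` stub stands in for a real model call:

```python
def decide(observation: str) -> str:
    """Stub policy standing in for an LLM call: route input to an action name."""
    return "escalate" if "danger" in observation.lower() else "reassure"

def run_agent(messages: list[str]) -> list[str]:
    """One pass of the reasoning loop over incoming messages."""
    actions = []
    for observation in messages:      # observe
        action = decide(observation)  # decide (would be a model call)
        actions.append(action)        # act (record the chosen action)
    return actions

print(run_agent(["I reached home", "I am in danger"]))
```

In a real agent, the decide step would call the model with the agent's instructions, and the act step would invoke tools.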
My Capstone Project: AI Personal Safety & Emergency Assistant
Problem
Millions of people face emergencies in which they cannot call or message family or the police. Seconds matter, and an AI system can react faster than a human can.
Why Agents?
Agents fit this problem because they can:
- Detect danger from text inputs
- Decide the correct action
- Trigger emergency‑like responses
- Guide users step‑by‑step
- Escalate automatically when needed
Traditional chatbots cannot do this. Agents can.
Architecture I Built
The system includes three cooperating agents and supporting modules.
1. Risk Detector Agent
Classifies messages as:
- SAFE
- EMERGENCY
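A minimal version of this classifier can be sketched with keyword matching standing in for the model call (the keyword list is my illustrative assumption):

```python
# Illustrative keyword stand-in for the model-backed Risk Detector Agent.
EMERGENCY_KEYWORDS = ("help", "bleeding", "attack", "danger", "following me")

def detect_risk(message: str) -> str:
    """Classify a message as SAFE or EMERGENCY."""
    text = message.lower()
    if any(keyword in text for keyword in EMERGENCY_KEYWORDS):
        return "EMERGENCY"
    return "SAFE"

print(detect_risk("I am bleeding, please help!"))  # EMERGENCY
print(detect_risk("I reached home safely."))       # SAFE
```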
2. Action Planner Agent
Decides what to do:
- reassure
- ask more details
- escalate to emergency mode
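The planner's decision logic can be sketched as a simple policy over the current risk level and how much context the user has provided (the threshold is an illustrative assumption):

```python
def plan_action(risk: str, detail_count: int) -> str:
    """Map a risk level and available detail to one of the three actions."""
    if risk == "EMERGENCY":
        return "escalate"           # emergencies always escalate
    if detail_count < 2:
        return "ask_more_details"   # ambiguous or low-context messages
    return "reassure"               # safe and well understood
```

In the real system this decision is made by the agent from its instructions; the explicit branches here just make the policy visible.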
3. Responder Agent
Provides urgent step‑by‑step instructions in critical situations.
Memory Module
Tracks:
- previous messages
- previous risk levels
- escalation patterns
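The risk trend detection from Day 3 can be sketched as a small memory class: repeated danger classifications within a recent window force escalation. The window size and threshold are illustrative assumptions:

```python
from collections import deque

class RiskMemory:
    """Track recent risk levels and flag escalating-danger patterns."""

    def __init__(self, window: int = 5, threshold: int = 2):
        self.risk_levels = deque(maxlen=window)  # recent classifications
        self.threshold = threshold               # repeats that force escalation

    def record(self, risk: str) -> None:
        self.risk_levels.append(risk)

    def should_escalate(self) -> bool:
        """Escalate when repeated danger signals appear in the window."""
        return list(self.risk_levels).count("EMERGENCY") >= self.threshold

memory = RiskMemory()
for risk in ["SAFE", "EMERGENCY", "EMERGENCY"]:
    memory.record(risk)
print(memory.should_escalate())  # True
```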
Tool Simulation
Implemented safe simulated tools:
```python
def send_sms_alert(phone_number: str, message: str) -> None:
    """Simulate sending an SMS alert."""
    print(f"[SMS -> {phone_number}] {message}")

def send_email_alert(email: str, subject: str, body: str) -> None:
    """Simulate sending an email alert."""
    print(f"[EMAIL -> {email}] {subject}: {body}")

def send_call_alert(phone_number: str) -> None:
    """Simulate initiating a voice call alert."""
    print(f"[CALL -> {phone_number}] Initiating emergency voice call")
```
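Wired together, the three agents and the simulated tools form a short pipeline. A condensed sketch, with inline stubs standing in for the real agent calls and tools above (the phone number is a placeholder):

```python
def send_sms_alert(phone_number: str, message: str) -> None:
    """Stand-in for the simulated SMS tool."""
    print(f"[SMS -> {phone_number}] {message}")

def handle_message(message: str) -> str:
    """Detect risk, plan an action, and trigger alerts on escalation."""
    risk = "EMERGENCY" if "help" in message.lower() else "SAFE"  # Risk Detector
    action = "escalate" if risk == "EMERGENCY" else "reassure"  # Action Planner
    if action == "escalate":                                    # Responder + tools
        send_sms_alert("+10000000000", f"Emergency detected: {message}")
    return action

print(handle_message("I am bleeding, please help!"))  # escalate
```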
Gemini Integration (Mock Model)
Demonstrates danger classification and emergency message generation. All of this was done inside a Kaggle Notebook.
What I Tested
Evaluated the agents across several scenarios:
- Clear emergency – “I am bleeding, please help!”
- Safe message – “I reached home safely.”
- Ambiguous risk
- Escalating danger – “Someone is following me” → “I am in danger” → “He is attacking me”
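Scenarios like these can be checked with a small evaluation harness. Here a keyword classifier stands in for the real agents, and the expected labels mirror the scenarios above:

```python
def detect_risk(message: str) -> str:
    """Keyword stand-in for the Risk Detector used in these checks."""
    keywords = ("help", "bleeding", "danger", "attacking", "following")
    return "EMERGENCY" if any(k in message.lower() for k in keywords) else "SAFE"

SCENARIOS = [
    ("I am bleeding, please help!", "EMERGENCY"),  # clear emergency
    ("I reached home safely.", "SAFE"),            # safe message
    ("Someone is following me", "EMERGENCY"),      # start of escalation
]

for message, expected in SCENARIOS:
    result = detect_risk(message)
    status = "PASS" if result == expected else "FAIL"
    print(f"{status}: {message!r} -> {result}")
```

In the actual notebook, evaluation meant replaying these conversations through the full pipeline and checking that escalation and alerts fired as intended.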
The system behaved consistently, escalated correctly, and triggered alerts responsibly.
What This Project Taught Me
- Multi‑agent systems are extremely powerful – Combining simple agents created a system far more intelligent than a single model.
- Memory changes everything – When agents remember context, their decisions become smarter.
- Tools transform agents into action‑takers – Even simulated tools felt like building the foundation of a real safety product.
- Clear instructions matter more than code – Well‑written agent instructions are as important as model power.
If I Had More Time
- Voice‑based danger detection
- GPS‑based location alerts
- Mobile app interface
- Real API integrations (Twilio, WhatsApp)
- Deployment on Cloud Run / Agent Engine
Final Thoughts
The Google × Kaggle AI Agents Intensive didn’t just teach me agents—it taught me how to build impactful, real‑world AI systems. My capstone project, AI Personal Safety & Emergency Assistant, is only the beginning, and this course has opened the door to a new world of possibilities.
Thank you, Google, Kaggle, and the entire AI Agents community.