Teaching AI to Teach: My 5-Day Journey Building an AI Literacy Agent

Published: December 14, 2025, 05:52 AM EST
4 min read
Source: Dev.to

Introduction

From “people don’t know what to ask AI” to a multi‑agent learning system—my journey through the Google AI Agents Intensive. This reflection was submitted for the Google AI Agents Writing Challenge: Learning Reflections.

As a developer, researcher, and AI‑literacy educator, I often see the same pattern: users have access to powerful models (ChatGPT, Claude, Gemini, etc.) but struggle because they don’t know which questions to ask. Vague prompts yield generic answers, leading to the conclusion that “AI isn’t that useful.” Users also share sensitive data without understanding privacy risks and accept AI outputs uncritically, never moving beyond the basics.

When Google and Kaggle announced the 5‑day AI Agents Intensive, I saw an opportunity to build not another chatbot, but a learning scaffold: the AI Literacy Guardian, a multi‑agent system that teaches people how to think critically with AI.

Day 1 – A Shift in Mental Model

The course introduced multi‑agent systems, and the concept clicked. I realized AI‑literacy education isn’t a single problem with a single solution; it’s an orchestration of multiple specialised functions:

  • Explaining concepts clearly
  • Identifying risks proactively
  • Creating practice exercises
  • Tracking learning progress

A single LLM can’t excel at all of these simultaneously, but specialised agents can. This insight shaped the capstone project.
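To make the orchestration idea concrete, here is a minimal, hypothetical sketch of routing a query to one of the four specialised functions. The role names and keyword lists are illustrative assumptions, not the project's actual routing logic, which would use an LLM-based manager:

```python
# Hypothetical keyword router: one specialised role per function above.
# The roles and trigger phrases are illustrative, not the real system.
ROLES = {
    "explainer": ["what is", "explain", "how does"],
    "ethics": ["upload", "private", "privacy", "risk"],
    "examples": ["show me", "example", "practice"],
    "tracker": ["progress", "how am i doing"],
}

def route(query: str) -> str:
    """Pick the specialist role whose trigger phrases match the query."""
    q = query.lower()
    for role, keywords in ROLES.items():
        if any(k in q for k in keywords):
            return role
    return "explainer"  # sensible default: teach first

print(route("Can I upload student essays to ChatGPT?"))  # -> ethics
```

A real manager would replace the keyword match with an LLM intent classifier, but the shape stays the same: one entry point, many specialists.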

Day 2 – Tool Integration

Agents are more than LLMs with fancy prompts; they become capable systems when equipped with specialised tools. I built:

  • RiskScanner – performs systematic safety evaluations on every interaction.
  • PromptGenerator – creates structured educational examples.
  • LearningSummarizer – synthesises conversation patterns across sessions.

These tools transformed the agents from “smart responders” into reliable, repeatable components.
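What makes a tool "reliable and repeatable" is that it is deterministic code, not a prompt. A minimal sketch of a RiskScanner-style check, with invented pattern categories (the real tool's rules are not shown in the post):

```python
import re

# Hypothetical RiskScanner-style tool: a deterministic check that runs
# on every interaction before the LLM responds. Patterns are invented
# for illustration only.
RISK_PATTERNS = {
    "pii": re.compile(r"\b(ssn|social security|passport number)\b", re.I),
    "student_data": re.compile(r"\b(student essays?|grades?|transcripts?)\b", re.I),
}

def scan(text: str) -> list[str]:
    """Return the risk categories matched in the text (possibly empty)."""
    return [name for name, pat in RISK_PATTERNS.items() if pat.search(text)]

print(scan("Can I upload student essays to ChatGPT?"))  # -> ['student_data']
```

Because the scan is plain code, it behaves identically on every run, which is exactly the repeatability that a prompt alone cannot guarantee.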

Day 3 – The Manager Agent

Initially I thought orchestration was just routing queries to the right agent. The labs showed that a manager agent must also:

  • Maintain state and conversation history (last 10 turns)
  • Track user profiles and adapt based on prior interactions

Implementing these features turned the system from a stateless Q&A bot into a learning companion. The manager agent acts as the nervous system of the multi‑agent architecture.
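The state-keeping described above can be sketched as a small session object: a bounded history of the last 10 turns plus a user profile. This is an assumed shape, not the project's actual data model:

```python
from collections import deque
from dataclasses import dataclass, field

# Hypothetical session state for the manager agent: a conversation
# history capped at the last 10 turns, plus a simple user profile.
@dataclass
class SessionState:
    history: deque = field(default_factory=lambda: deque(maxlen=10))
    profile: dict = field(default_factory=dict)

    def record(self, user_msg: str, agent_reply: str) -> None:
        """Append one turn; the deque silently drops the oldest past 10."""
        self.history.append((user_msg, agent_reply))
        self.profile["turns"] = self.profile.get("turns", 0) + 1

state = SessionState()
for i in range(12):
    state.record(f"question {i}", f"answer {i}")
print(len(state.history), state.profile["turns"])  # -> 10 12
```

The `deque(maxlen=10)` makes the "last 10 turns" window self-maintaining, while the profile accumulates across the whole session.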

Day 4 – Evaluation as a Core Component

Inspired by the “LLM‑as‑a‑Judge” concept and a course on Applied Generative AI (Johns Hopkins University), I added an automated quality evaluator that scores each response on:

  1. Clarity
  2. Helpfulness
  3. Safety
  4. Accuracy
  5. Engagement

Systematic measurement enables continuous improvement.
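A minimal sketch of how the five judge scores might be aggregated into one quality number. In the real system each score would come from an LLM-as-a-Judge call; here they are supplied directly, and the 0-10 scale is an assumption:

```python
# Hypothetical aggregator for the five evaluation dimensions.
DIMENSIONS = ("clarity", "helpfulness", "safety", "accuracy", "engagement")

def overall(scores: dict[str, float]) -> float:
    """Average the five dimension scores (each assumed to be in 0-10)."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

print(overall({"clarity": 8, "helpfulness": 9, "safety": 10,
               "accuracy": 7, "engagement": 6}))  # -> 8.0
```

Failing loudly on a missing dimension keeps the evaluator honest: a response is never scored on a partial rubric.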

Day 5 – Embedding Ethics

Responsible AI development was a recurring theme. I made EthicsGuardianAgent a first‑class citizen, not an afterthought. It proactively scans every query for privacy, ethical, and security risks. For example, when a user asks, “Can I upload student essays to ChatGPT?” the agent immediately flags HIGH RISK, explains FERPA violations, and suggests safer local alternatives.
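The flag-explain-redirect pattern from the FERPA example can be sketched as a tiny rule table. The triggers, verdicts, and suggested alternatives below are illustrative; the real agent reasons with an LLM rather than string matching:

```python
# Hypothetical escalation rules: trigger -> (risk level, reason, safer path).
RULES = {
    "student essays": ("HIGH", "Uploading student work may violate FERPA.",
                       "Use a locally hosted model instead."),
    "patient records": ("HIGH", "Sharing patient data may violate HIPAA.",
                        "De-identify the records first."),
}

def assess(query: str) -> tuple[str, str, str]:
    """Return (level, reason, alternative) for the first matching rule."""
    q = query.lower()
    for trigger, verdict in RULES.items():
        if trigger in q:
            return verdict
    return ("LOW", "No known risk pattern matched.", "Proceed normally.")

level, reason, alternative = assess("Can I upload student essays to ChatGPT?")
print(level)  # -> HIGH
```

The key design point is that the guardian runs on every query, before any answer is generated, rather than waiting to be asked about risk.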

System Architecture

  • Manager Agent (AILiteracyGuardian) – orchestrates four specialist agents.
  • ExplainerAgent – teaches concepts with adaptive analogies.
  • EthicsGuardianAgent – identifies risks proactively.
  • ExampleBuilderAgent – creates side‑by‑side good/bad prompt demonstrations.
  • SkillTrackerAgent – tracks progress across sessions.

Four custom tools empower the agents:

  1. ConceptStructurer – organizes educational content.
  2. PromptGenerator – produces structured examples.
  3. RiskScanner – performs safety checks.
  4. LearningSummarizer – synthesises learning patterns.

Differentiator

The system is proactive guidance, not reactive Q&A. It teaches users what questions matter, identifies risks before mistakes happen, and builds skills through structured practice.

Vision & Value

I prototyped an AI Literacy Passport that could evolve into a curriculum with progressive missions (e.g., “The Truth Test,” “The Weak Prompt Challenge”), badge rewards, level progression, and competency gates. The goal is to shift AI literacy from “learn about AI” to “become competent with AI.”
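A passport like this might be modelled as completed missions earning badges, with a competency gate on level progression. The gate threshold below is invented for illustration; only the mission names come from the text:

```python
from dataclasses import dataclass, field

LEVEL_GATE = 2  # badges required per level before levelling up (assumed)

# Hypothetical AI Literacy Passport record: missions earn badges,
# and a competency gate controls level progression.
@dataclass
class Passport:
    badges: list = field(default_factory=list)
    level: int = 1

    def complete_mission(self, mission: str) -> None:
        """Award a badge; level up once enough badges are earned."""
        self.badges.append(mission)
        if len(self.badges) >= self.level * LEVEL_GATE:
            self.level += 1

p = Passport()
p.complete_mission("The Truth Test")
p.complete_mission("The Weak Prompt Challenge")
print(p.level)  # -> 2
```

A competency gate in production would test actual skill (e.g., passing a graded exercise) rather than counting badges, but the record-keeping looks similar.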

Roadmap

  • Immediate – Full AI Literacy Passport with interactive missions and competency validation.
  • Soon After – Multi‑language support (Spanish, French, Mandarin, Hindi) with culturally adapted examples.
  • Long‑term – Domain‑specific versions (Healthcare for HIPAA, Education for FERPA/COPPA, Legal for professional ethics) and teacher dashboards.

Lessons Learned

  1. Start with the problem, not the technology – Build agents only when orchestration of specialised capabilities is required.
  2. Embrace the labs – Hands‑on exercises turn abstract concepts into concrete understanding; debugging routing logic taught me more than any documentation.
  3. Think beyond “pass the course” – A clear vision (the AI Literacy Passport) kept me motivated through late‑night troubleshooting and video production challenges.

Before vs. After the Intensive

  • Before: saw agents as “fancy LLM wrappers with routing.” After: understand agents as specialised, orchestrated systems.
  • Before: thought multi‑agent systems were overkill. After: recognise when problems need agent architectures (multiple specialised functions).
  • Before: focused on single‑model solutions (e.g., RAG). After: appreciate the transformative power of tool integration and state management.
  • Before: assumed most problems could be solved with a single prompt. After: realise some problems fundamentally require orchestrated agents.

The course didn’t just teach me how to build agents—it fundamentally changed how I think about AI system design.
