Autonomous Agents Visiting Data

This is a submission for the Google AI Agents Writing Challenge: Learning Reflections.
Originally published on the FAIRLYZ Knowledge Base.
Google AI Agents Intensive Course
The Google AI Agents Intensive Course debuted in March 2025 (the first 5DGAI) and returned in November 2025 (the second 5DGAI), offering developers a front-row seat to the rapid evolution of agentic systems.
- The first 5DGAI focused on foundational skills: writing prompts, training agents, customizing them with Retrieval-Augmented Generation (RAG), and deploying them via MLOps. Participants learned to fine-tune models and integrate external knowledge sources to improve agent performance (a minimal RAG sketch follows this list).
- The second 5DGAI (November 2025) went considerably further. Developers were trained to build autonomous agents capable of managing other agents and tools, and to deploy them via Agent Ops (ADK, Vertex AI, Kubernetes), reflecting the growing complexity of real-world AI deployments.
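To make the RAG piece concrete, here is a minimal sketch of the retrieve-then-augment flow. The embed and generate functions are stand-ins for whatever embedding model and LLM endpoint you actually use; only the flow itself is the point, not any specific API.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: swap in a real embedding model. Faked here for illustration."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)

def generate(prompt: str) -> str:
    """Placeholder for an LLM call."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

# A tiny "knowledge base" the agent can retrieve from.
DOCS = [
    "The second 5DGAI ran in November 2025 and covered multi-agent systems.",
    "ADK and Vertex AI were used for Agent Ops deployment exercises.",
    "RAG augments prompts with retrieved external knowledge.",
]
DOC_VECS = np.stack([embed(d) for d in DOCS])

def rag_answer(question: str, k: int = 2) -> str:
    """Retrieve the k most similar docs, then ask the model with them as context."""
    q = embed(question)
    sims = DOC_VECS @ q / (np.linalg.norm(DOC_VECS, axis=1) * np.linalg.norm(q))
    top = [DOCS[i] for i in np.argsort(sims)[::-1][:k]]
    prompt = "Context:\n" + "\n".join(top) + f"\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(rag_answer("What is RAG?"))
```

In practice the fake embed would be replaced by a real embedding model and the list of docs by a vector store; the sketch only fixes the shape of the flow.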
The November 2025 Introduction to Agents white paper (link) introduced a more formal framework for understanding these systems, especially in the section titled Taxonomy of Agentic Systems (pages 14–18).
Taxonomy of Agentic Systems
The taxonomy outlined in the November 2025 white paper breaks down agentic systems into five key levels:
Level 0: Core Reasoning System
A standalone language model that relies solely on its pre‑trained knowledge.
(e.g., the original ChatGPT, built on GPT-3.5, 2022–2023)
Level 1: Connected Problem‑Solver
Gains tool access to fetch real‑time data and interact with external systems (e.g., APIs, RAG).
(e.g., GPT-4-based ChatGPT, 2023, with plugins, browsing, and Code Interpreter)
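In this taxonomy, a Level 1 system is essentially a model wrapped in a tool-use loop. Below is a minimal sketch under the assumption of a hypothetical call_model function that either answers directly or requests a tool; the message format and tool registry are illustrative, not any specific vendor API.

```python
from datetime import datetime, timezone

# One registered tool the "connected problem-solver" can reach out to.
def get_utc_time() -> str:
    return datetime.now(timezone.utc).isoformat()

TOOLS = {"get_utc_time": get_utc_time}

def call_model(messages: list[dict]) -> dict:
    """Placeholder for the LLM. A real Level 1 model decides here whether to
    answer directly or request a tool call; we hard-code one tool request."""
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "get_utc_time", "args": {}}
    return {"type": "final", "text": f"The current UTC time is {messages[-1]['content']}."}

def run_level1_agent(user_query: str) -> str:
    messages = [{"role": "user", "content": user_query}]
    while True:
        step = call_model(messages)
        if step["type"] == "final":
            return step["text"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[step["name"]](**step["args"])
        messages.append({"role": "tool", "content": result})

print(run_level1_agent("What time is it in UTC?"))
```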
Level 2: Strategic Problem‑Solver
Introduces context engineering: multi-step planning, curated information, and complex missions.
(e.g., Gemini 1.5 Pro, GPT‑4 Turbo, 2024, with memory and tool chaining)
Level 3: Collaborative Multi‑Agent System
Agents delegate work to specialized sub-agents, enabling scalable, parallel workflows.
(e.g., Google DeepMind multi‑agent demos, OpenAI Dev Day agent frameworks, late 2024–2025)
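To show what delegation to specialized sub-agents can look like structurally, here is a minimal coordinator sketch. The agent roles and the keyword-based routing are invented for illustration; a real Level 3 system would let a model decide the routing and give each specialist its own model and tools.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubAgent:
    name: str
    skill: str                      # what this specialist handles
    handle: Callable[[str], str]    # the specialist's (stubbed) reasoning

# Two illustrative specialists; in a real system each wraps its own model + tools.
research_agent = SubAgent("research", "gathering facts",
                          lambda task: f"[research notes for: {task}]")
coding_agent = SubAgent("coding", "writing code",
                        lambda task: f"[code draft for: {task}]")

class Coordinator:
    """Level 3: a parent agent that splits a mission and delegates the pieces."""
    def __init__(self, sub_agents: list[SubAgent]):
        self.sub_agents = {a.name: a for a in sub_agents}

    def route(self, subtask: str) -> SubAgent:
        # Toy routing rule; a real coordinator would let a model choose.
        return self.sub_agents["coding" if "implement" in subtask else "research"]

    def run(self, mission: str) -> list[str]:
        subtasks = [f"research background for: {mission}",
                    f"implement a prototype for: {mission}"]
        return [self.route(t).handle(t) for t in subtasks]

print(Coordinator([research_agent, coding_agent]).run("an expense-report agent"))
```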
Level 4: Self‑Evolving System
Agents autonomously create new agents/tools to fill capability gaps.
(No verified examples in 2025)
Agent Use in Data Visitation and Security Concerns
As agents gain access to sensitive data and tools, security becomes central. During the Day 2 live‑stream, Alex Wissner‑Gross highlighted the risks and proposed a vision:
“I foresee an internet of agents, with one singleton agent per corporation who shares secrets with sub‑agents but does not expose them to the outside.”
The white paper warns that tool access and autonomy introduce a delicate balance between utility and risk, especially when agents operate across organizational boundaries.
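One way to read the singleton-agent idea is as a trust boundary: the corporate parent agent holds the secrets and hands sub-agents only scoped capabilities, never the credentials themselves. The sketch below is my interpretation of that pattern, with hypothetical names throughout, not an implementation from the course or the white paper.

```python
SECRETS = {"db_password": "s3cr3t", "api_key": "sk-corp-123"}

class CorporateSingletonAgent:
    """Holds corporate secrets; sub-agents and external callers never see them."""
    def __init__(self, secrets: dict[str, str]):
        self._secrets = secrets

    def delegate(self, sub_agent, task: str) -> str:
        # Pass only a scoped capability, never the raw secret.
        return sub_agent(task, query_db=self._make_db_handle())

    def _make_db_handle(self):
        # The credential stays inside this closure; callers see only results.
        password = self._secrets["db_password"]
        def query_db(sql: str) -> str:
            assert password  # would authenticate the connection in a real system
            return f"[rows for '{sql}' fetched with internal credentials]"
        return query_db

def reporting_sub_agent(task: str, query_db) -> str:
    return f"Report on '{task}': " + query_db("SELECT * FROM sales")

parent = CorporateSingletonAgent(SECRETS)
print(parent.delegate(reporting_sub_agent, "Q4 revenue"))
```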
Recommended Mitigation Strategies
- Role-based access control for agents (see the sketch after this list)
- Audit trails for tool invocation and data access
- Memory partitioning to prevent leakage across tasks
- Prompt injection defenses via adversarial training and specialized security‑analyst agents
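Two of these mitigations, role-based access control and audit trails for tool invocation, compose naturally as a wrapper around every tool call. A minimal sketch, with made-up roles, tools, and policy:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent_audit")

# Which agent roles may invoke which tools (illustrative policy).
TOOL_PERMISSIONS = {
    "read_crm": {"sales_agent", "analyst_agent"},
    "send_email": {"sales_agent"},
}

def invoke_tool(agent_role: str, tool_name: str, tool_fn, **kwargs):
    """Enforce RBAC, then record every attempt (allowed or denied) to the audit trail."""
    allowed = agent_role in TOOL_PERMISSIONS.get(tool_name, set())
    audit_log.info("%s | role=%s tool=%s args=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(),
                   agent_role, tool_name, kwargs, allowed)
    if not allowed:
        raise PermissionError(f"{agent_role} may not call {tool_name}")
    return tool_fn(**kwargs)

# Example tool (stub).
def read_crm(customer_id: str) -> str:
    return f"[CRM record for {customer_id}]"

print(invoke_tool("analyst_agent", "read_crm", read_crm, customer_id="C-42"))
# invoke_tool("analyst_agent", "send_email", lambda **kw: None)  # would raise PermissionError
```

Denied attempts are logged as well, which is what makes the trail useful for spotting misuse such as prompt-injection-driven tool calls.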
Final Thoughts
The Google AI Agents Intensive Course reflects real technical progress while surfacing the ethical and operational challenges of deploying autonomous systems. As we move toward an internet of agents, frameworks like the Taxonomy of Agentic Systems, together with the security models experts such as Wissner-Gross are proposing, will be critical.
Tags: Autonomous Agents, Internet‑of‑agents (IoA), Multi‑Agent System, Secrets, Self‑Evolving System