Building agents with the ADK and the new Interactions API

Source: Google Developers Blog

DEC 11, 2025

Introduction

The landscape of AI development is shifting from stateless request‑response cycles to stateful, multi‑turn agentic workflows. With the beta launch of the Interactions API, Google provides a unified interface designed specifically for this new era—offering a single gateway to both raw models and the fully managed Gemini Deep Research Agent.

For developers already working with the Agent Development Kit (ADK) and the Agent2Agent (A2A) protocol, the key question is:

How does this new API fit into my existing ecosystem?

The answer is two‑fold. The Interactions API can serve both as an alternative to the existing generateContent inference endpoint and as a powerful primitive you can use within an existing agent framework.

In this post we’ll explore two primary integration patterns:

  1. Powering your ADK agents – Use the Interactions API as the inference engine for your custom agents.
  2. The Transparent Bridge – Collaborate with built‑in agents (e.g., Gemini Deep Research Agent) as standard remote A2A agents via the Interactions API.

Pattern 1: Writing Agents with ADK and the Interactions API

When you build an agent using the ADK (Agent Development Kit), you need an LLM (e.g., Gemini) to generate thoughts, plans, tool calls, and responses. Previously, this was handled by the generateContent endpoint.

The new Interactions API offers a native interface for complex state management. By upgrading your inference calls to this endpoint, ADK agents gain access to capabilities designed specifically for agentic loops.

Why switch?

| Benefit | Description |
| --- | --- |
| Unified Model & Agent Access | The same endpoint works for a standard model (model="gemini-3-pro-preview") or a built‑in Gemini agent (agent="deep-research-pro-preview-12-2025"). |
| Simplified State Management | Optionally offload conversation‑history handling to the server with previous_interaction_id, reducing boilerplate in your ADK agent. |
| Background Execution | Long‑running tasks (e.g., the Deep Research agent) can run in the background. Set background=True to receive an interaction ID immediately, then poll for the final result. |
| Native Thought Handling | The API models “thoughts” separately from final responses, letting your ADK agent process reasoning chains more effectively. |
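
To make these benefits concrete, here is a minimal sketch of the request shapes involved. Only the model, agent, previous_interaction_id, and background fields come from this post; the other field names and values are illustrative assumptions, not the documented Interactions API schema.

# Illustrative payloads only: field names other than model, agent,
# previous_interaction_id, and background are assumptions for this sketch.

# Turn 1: call a standard model through the unified endpoint.
first_turn = {
    "model": "gemini-3-pro-preview",
    "input": "Summarize the A2A protocol in two sentences.",
}

# Turn 2: instead of resending the full history, point at the server-side
# state created by turn 1 via previous_interaction_id.
second_turn = {
    "model": "gemini-3-pro-preview",
    "previous_interaction_id": "interactions/abc123",  # ID returned by turn 1
    "input": "Now compare it with function calling.",
}

# Same endpoint, but a fully managed built-in agent running in the background.
agent_turn = {
    "agent": "deep-research-pro-preview-12-2025",
    "background": True,  # get an interaction ID immediately, poll later
    "input": "Research the history of Google TPUs",
}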

How it looks

Instead of managing a raw list of messages and sending them to generateContent, your ADK agent can keep a lightweight pointer to the server‑side state.

from google.adk.agents.llm_agent import Agent
from google.adk.models.google_llm import Gemini
from google.adk.tools.google_search_tool import GoogleSearchTool


def get_current_weather(city: str) -> str:
    """Example function tool; placeholder implementation for this snippet."""
    return f"It is currently sunny in {city}."


root_agent = Agent(
    model=Gemini(
        model="gemini-2.5-flash",
        # Enable the Interactions API as the inference endpoint
        use_interactions_api=True,
    ),
    name="interactions_test_agent",
    tools=[
        # Google Search converted to a function tool so it can be
        # combined with other tools
        GoogleSearchTool(bypass_multi_tools_limit=True),
        get_current_weather,
    ],
)

For step‑by‑step instructions, see the full ADK sample with the Interactions API.

This pattern lets you keep the control flow and routing logic inside ADK while delegating the heavy lifting of context management and inference state to the Interactions API. Think of it as an inner loop (handled by the API) and an outer loop (your agent code); the sketch below illustrates the outer loop. The new API gives you finer control over both.
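
As a rough illustration of that outer loop, the sketch below starts a long‑running interaction in the background and polls it until it finishes. The start_interaction and get_interaction helpers, the payload layout, and the status strings other than IN_PROGRESS are stand‑ins invented for this example; only background=True, the agent name, and the IN_PROGRESS status come from this post.

import time

# Stand-in helpers: these are NOT real SDK functions, just placeholders for
# whatever client you use to reach the Interactions API.
def start_interaction(payload: dict) -> str:
    """Pretend to start a background interaction and return its ID."""
    return "interactions/12345"

def get_interaction(interaction_id: str) -> dict:
    """Pretend to fetch the current state of an interaction."""
    return {"id": interaction_id, "status": "COMPLETED", "output": "(final report)"}

# Outer loop (your agent code): kick off the task, then poll while the
# server-side inner loop does the heavy lifting.
interaction_id = start_interaction({
    "agent": "deep-research-pro-preview-12-2025",
    "background": True,
    "input": "Research the history of Google TPUs",
})

status = "IN_PROGRESS"
while status == "IN_PROGRESS":
    state = get_interaction(interaction_id)
    status = state["status"]
    if status == "IN_PROGRESS":
        time.sleep(5)  # polling interval is up to your outer loop

print(state["output"])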

Pattern 2: Using Interactions API Agents as Remote A2A Agents

This is where the interoperability of the Agent‑to‑Agent (A2A) protocol shines.

If you already have an ecosystem of A2A clients or agents, you might want them to consult the new Gemini Deep Research Agent. Historically, integrating a third‑party API required writing a custom wrapper or adapter.

With the new InteractionsApiTransport, we have mapped the A2A protocol surface directly onto the Interactions API surface. It “speaks” A2A, so you can treat an Interactions API endpoint as just another remote A2A agent. Your existing clients don’t need to know they are talking to a Google‑hosted agent; they just see an AgentCard and send messages as usual.

How the Bridge Works

The InteractionsApiTransport layer translates A2A concepts to Interactions API concepts:

| A2A | Interactions API |
| --- | --- |
| SendMessage | create |
| Task | Interaction ID |
| TaskStatus | Interaction Status (e.g., IN_PROGRESS → TASK_STATE_WORKING) |

Note: A2A push notifications, A2A extensions, and Interactions API callbacks are not yet supported in this mapping.
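
A rough sketch of the kind of status translation this mapping implies is shown below. Only the IN_PROGRESS to TASK_STATE_WORKING pair comes from the table above; the remaining entries are plausible assumptions added for illustration, not the transport's actual lookup table.

# Hypothetical status mapping; only IN_PROGRESS -> TASK_STATE_WORKING is
# documented above, the rest are illustrative guesses.
INTERACTION_STATUS_TO_A2A = {
    "IN_PROGRESS": "TASK_STATE_WORKING",
    "COMPLETED": "TASK_STATE_COMPLETED",
    "FAILED": "TASK_STATE_FAILED",
}

def to_a2a_task_state(interaction_status: str) -> str:
    """Translate an Interactions API status into A2A task-state vocabulary."""
    return INTERACTION_STATUS_TO_A2A.get(interaction_status, "TASK_STATE_UNKNOWN")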

Code Example: Transparent Integration

from interactions_api_transport import InteractionsApiTransport
from a2a.client import ClientFactory, ClientConfig

# 1️⃣ Configure the factory to support Interactions API
client_config = ClientConfig()
client_factory = ClientFactory(client_config)

# Set up the transport (handles API keys and auth transparently)
InteractionsApiTransport.setup(client_factory)

# 2️⃣ Create an AgentCard for the Deep Research agent
# This helper builds the card with the necessary “smuggled” config
card = InteractionsApiTransport.make_card(
    url="https://generativelanguage.googleapis.com",
    agent="deep-research-pro-preview-12-2025"
)

# 2a️⃣ Or interact directly with a Gemini model
card = InteractionsApiTransport.make_card(
    url="https://generativelanguage.googleapis.com",
    model="gemini-3-pro-preview",
    request_opts={
        "generation_config": {"thinking_summaries": "auto"}
    },
)

# 3️⃣ Create a regular A2A client
client = client_factory.create(card)

# 4️⃣ Use it exactly like any other A2A agent
# (new_text_message is a small helper, not shown here, that wraps plain text
# in a standard A2A user message)
async for event in client.send_message(
    new_text_message("Research the history of Google TPUs")
):
    # The transport converts Interactions API “Thoughts” and “Content”
    # into standard A2A Task events.
    print(event)
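
One practical note: client.send_message is consumed with async for, so the snippet above needs to run inside an event loop. A minimal wrapper might look like this, assuming the steps above live in an async function:

import asyncio

async def main():
    # Steps 1-4 from the snippet above go here.
    ...

asyncio.run(main())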

Why This Matters

  • Zero SDK churn: Your A2A client code stays unchanged.
  • Streaming support: The transport maps streaming events, giving you real‑time updates from the agent.
  • Configuration smuggling: A2A extensions let you pass specific settings (e.g., thinking_summaries) inside the AgentCard without breaking the standard protocol.
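
To illustrate the zero-SDK-churn point in code, here is a small sketch that reuses the client_factory and InteractionsApiTransport from the example above. The build_client helper is just local structuring for this sketch, not part of any SDK; switching between the Deep Research agent and a plain Gemini model is only a different make_card call, while the message-sending loop stays identical.

def build_client(client_factory, **card_kwargs):
    """Local convenience wrapper for this sketch; not part of any SDK."""
    card = InteractionsApiTransport.make_card(
        url="https://generativelanguage.googleapis.com", **card_kwargs
    )
    return client_factory.create(card)

research_client = build_client(client_factory, agent="deep-research-pro-preview-12-2025")
model_client = build_client(client_factory, model="gemini-3-pro-preview")
# Both clients are driven with exactly the same send_message loop shown above.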

In short, the Interactions API stays transparent to your developer experience: you get immediate access to powerful tools like Deep Research without refactoring your multi‑agent system. And the best part? It just works.

Conclusion

The Gemini Interactions API marks a major step forward in modeling AI communication. Whether you’re:

  • Building custom agents from scratch using any framework (e.g., the ADK)
  • Connecting existing agents together via A2A

you now have a powerful new set of capabilities to explore today.

Treat the API as both a superior inference engine and a compliant remote agent to rapidly expand the capabilities of your agentic mesh with minimal friction.

Stay tuned—more ADK and A2A resources will be released over the next few weeks to help developers adopt this API.

Get started today
