Building agents with the ADK and the new Interactions API
Source: Google Developers Blog
The landscape of AI development is shifting from stateless request‑response cycles to stateful, multi‑turn agentic workflows. With the beta launch of the Interactions API, Google provides a unified interface designed for this new era—offering a single gateway to both raw models and the fully managed Gemini Deep Research Agent.
For developers already working with the Agent Development Kit (ADK) and the Agent2Agent (A2A) protocol, the key question is how this new API fits into existing ecosystems. The Interactions API can serve both as an alternative to the generateContent inference endpoint and as a powerful primitive usable within an existing agent framework.
Pattern 1: Writing Agents with ADK and Interactions API
When you build an agent using the ADK, you need an LLM like Gemini to generate thoughts, plans, tool calls, and responses. Previously this was handled by the generateContent endpoint. The new Interactions API offers a native interface for complex state management, allowing ADK agents to offload conversation history and reasoning loops to the server.
Why switch?
- Unified Model & Agent Access – The same endpoint works for a standard model (model="gemini-3-pro-preview") or a built‑in Gemini agent (agent="deep-research-pro-preview-12-2025").
- Simplified State Management – Optionally offload conversation history using previous_interaction_id, reducing boilerplate in your ADK agent (see the sketch after this list).
- Background Execution – Set background=True to receive an interaction ID immediately and let the server run long‑running tasks asynchronously.
- Native Thought Handling – The API separates “thoughts” from final responses, enabling more effective processing of reasoning chains.
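To make these options concrete, here is a minimal sketch of calling the Interactions API directly, outside the ADK. It is illustrative only: the client surface (client.interactions.create, the input field, and the id/status attributes) is an assumption based on the parameter names above and may differ from the released beta SDK.
from google import genai
client = genai.Client()
# First turn: a standard model call through the unified endpoint.
# NOTE: method and field names below are assumptions, not the confirmed SDK surface.
first = client.interactions.create(
    model="gemini-3-pro-preview",
    input="Summarize the history of Google TPUs.",
)
# Follow-up turn: point at server-side state instead of resending the history.
follow_up = client.interactions.create(
    model="gemini-3-pro-preview",
    input="Now focus on the most recent generation.",
    previous_interaction_id=first.id,
)
# Long-running agent call: get an interaction ID back immediately and poll later.
research = client.interactions.create(
    agent="deep-research-pro-preview-12-2025",
    input="Write a report on TPU interconnect topologies.",
    background=True,
)
print(research.id, research.status)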
How it looks
Instead of managing a raw list of messages and sending them to generateContent, your ADK agent can maintain a lightweight pointer to server‑side state.
from google.adk.agents.llm_agent import Agent
from google.adk.models.google_llm import Gemini
from google.adk.tools.google_search_tool import GoogleSearchTool

# A plain Python function can be registered as a function tool.
# Stub implementation for illustration only.
def get_current_weather(city: str) -> dict:
    """Returns a stub weather report for the given city."""
    return {"city": city, "forecast": "sunny"}

root_agent = Agent(
    model=Gemini(
        model="gemini-2.5-flash",
        # Enable the Interactions API as the inference backend
        use_interactions_api=True,
    ),
    name="interactions_test_agent",
    tools=[
        # Google Search converted to a function tool
        GoogleSearchTool(bypass_multi_tools_limit=True),
        get_current_weather,
    ],
)
For step‑by‑step instructions see the full ADK sample with the Interactions API.
This pattern lets you keep control flow and routing logic within the ADK while delegating the heavy lifting of context management and inference state to the Interactions API. You can think of it as an inner loop (inside the API) and an outer loop (in your agent code), with control over both.
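For completeness, here is a hedged sketch of that outer loop: running root_agent with the ADK's Runner and an in-memory session service. The exact Runner and session-service signatures vary between ADK releases, so treat the names below as illustrative rather than definitive.
import asyncio

from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types

async def main() -> None:
    session_service = InMemorySessionService()
    runner = Runner(
        agent=root_agent,
        app_name="interactions_demo",
        session_service=session_service,
    )
    session = await session_service.create_session(
        app_name="interactions_demo", user_id="user_1"
    )

    # Outer loop: stream events from the agent while the Interactions API
    # keeps the inner reasoning loop and conversation state server-side.
    message = types.Content(
        role="user", parts=[types.Part(text="What's the weather in Tokyo?")]
    )
    async for event in runner.run_async(
        user_id="user_1", session_id=session.id, new_message=message
    ):
        if event.is_final_response() and event.content:
            print(event.content.parts[0].text)

asyncio.run(main())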
Pattern 2: Using Interactions API Agents as Remote A2A Agents
The interoperability of the Agent2Agent (A2A) protocol shines when you want existing A2A clients or agents to consult the new Gemini Deep Research Agent. Historically, integrating a third‑party API required a custom wrapper. With the new InteractionsApiTransport, the A2A surface maps directly onto the Interactions API, allowing you to treat an Interactions endpoint as just another remote A2A agent.
How the Bridge Works
The InteractionsApiTransport layer translates A2A calls to Interactions API calls:
- A2A SendMessage → Interactions create
- A2A Task → Interaction ID
- A2A TaskStatus → Interaction Status (e.g., IN_PROGRESS maps to TASK_STATE_WORKING)
Note: A2A push notifications, extensions, and Interactions API callbacks are not yet supported.
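As a rough illustration, that translation can be pictured as a simple lookup table. Only the IN_PROGRESS → TASK_STATE_WORKING pair comes from the mapping above; the other entries are assumptions about likely terminal states, not the transport's actual internals.
# Hypothetical status translation table; entries marked "assumption" are not
# taken from the Interactions API documentation.
INTERACTION_TO_A2A_STATE = {
    "IN_PROGRESS": "TASK_STATE_WORKING",
    "COMPLETED": "TASK_STATE_COMPLETED",  # assumption
    "FAILED": "TASK_STATE_FAILED",        # assumption
}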
Code Example: The Transparent Integration
Configure your A2A client factory with the new transport and create an AgentCard that points to the desired model or agent.
import asyncio

from interactions_api_transport import InteractionsApiTransport
from a2a.client import ClientFactory, ClientConfig

# 1. Configure the factory to support the Interactions API
client_config = ClientConfig()
client_factory = ClientFactory(client_config)

# Set up the transport (handles API keys and auth transparently)
InteractionsApiTransport.setup(client_factory)

# 2. Create an AgentCard for the Deep Research agent
card = InteractionsApiTransport.make_card(
    url="https://generativelanguage.googleapis.com",
    agent="deep-research-pro-preview-12-2025",
)

# 2a. Or interact directly with a Gemini model
card = InteractionsApiTransport.make_card(
    url="https://generativelanguage.googleapis.com",
    model="gemini-3-pro-preview",
    request_opts={
        "generation_config": {"thinking_summaries": "auto"}
    },
)

# 3. Create a regular A2A client
client = client_factory.create(card)

# 4. Use it exactly like any other A2A agent
async def run() -> None:
    # new_text_message: helper that wraps the prompt string in an A2A user
    # Message (not shown here).
    async for event in client.send_message(
        new_text_message("Research the history of Google TPUs")
    ):
        # The transport converts Interactions API 'Thoughts' and 'Content'
        # into standard A2A Task events.
        print(event)

asyncio.run(run())
Why this matters
- No new SDKs to learn – Your A2A client code stays unchanged.
- Streaming support – The transport maps streaming events, giving real‑time updates from the agent.
- Configuration smuggling – A2A extensions let you pass specific settings (e.g., thinking_summaries) inside the AgentCard without breaking the protocol.
Conclusion
The Gemini Interactions API represents a major step forward in how we model AI communication. Whether you are building custom agents from scratch using frameworks like the ADK or connecting existing agents together via A2A, these new capabilities are ready to explore today.