Google Just Made Every Android App an AI Agent Tool — Here's What's Missing
Source: Dev.to
Google AppFunctions – A New AI‑Driven App Integration Layer
Google just announced AppFunctions — a framework that lets Android apps expose their capabilities directly to AI agents. Instead of opening Uber and tapping through screens, you tell Gemini “get me a ride to the airport” and it calls the function directly.
Google’s own blog post says it directly: “AppFunctions mirrors how backend capabilities are declared via MCP cloud servers.”
This isn’t a coincidence. It’s the same pattern — tools exposed to AI agents via structured function calls — applied to mobile, and it inherits the same security gap.
Two things are happening

1. Structured function exposure
   - App developers annotate their code with `@AppFunction`, declaring what their app can do (search photos, book rides, create reminders).
   - AI agents discover these functions and call them directly.
   - The app never opens, and the user never sees a UI.
2. UI automation
   - For apps that haven’t adopted AppFunctions, Google is building a framework where Gemini can operate the app’s UI autonomously — tapping buttons, filling forms, navigating screens.
   - No developer integration needed; the AI just drives the app like a human would.
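AppFunctions itself is an Android/Kotlin API, but the structured-exposure pattern is language-neutral: annotate a function, register it in a catalog, let an agent discover and invoke it by name. A minimal Python analogy of that shape (the decorator and registry here are illustrative, not Google's API):

```python
# Illustrative sketch of the AppFunction pattern: a decorator registers a
# function in a catalog an agent can discover and call directly, no UI involved.
REGISTRY = {}

def app_function(fn):
    """Register fn so an agent can look it up by name and call it."""
    REGISTRY[fn.__name__] = fn
    return fn

@app_function
def search_photos(query: str) -> list[str]:
    # A real app would query its photo index here.
    return [f"photo matching {query!r}"]

# The agent discovers available functions...
print(sorted(REGISTRY))
# ...and calls one directly with typed parameters.
print(REGISTRY["search_photos"]("sunset"))
```

Note what is absent from this sketch, just as in the real framework: nothing between the lookup and the call checks whether the agent should be allowed to make it.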
Both are live on Galaxy S26 and Pixel 10 devices today.
The MCP Analogy
If you work with MCP (Model Context Protocol), this will feel familiar:
| MCP | AppFunctions |
|---|---|
| Server exposes tools via `tools/list` | App exposes functions via `@AppFunction` (e.g., `search_photos`, `book_ride`) |
| Agent calls `tools/call` with arguments | Agent calls the function with parameters |
| Runs on desktop/server | Runs on-device |
| Claude, Cursor, Windsurf → MCP | Gemini → AppFunctions |
Google explicitly acknowledges this — they call AppFunctions the “on‑device solution” that mirrors MCP cloud servers. Same architecture, different runtime.
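To make the parallel concrete: the MCP side of the table is a JSON-RPC 2.0 `tools/call` request, per the protocol spec; the AppFunctions side below is an illustrative stand-in (Google's actual wire format is not shown in this post), but the shape is the same — a function name plus typed arguments:

```python
import json

# MCP: the agent invokes a tool via a JSON-RPC 2.0 "tools/call" request.
mcp_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "book_ride", "arguments": {"destination": "airport"}},
}

# AppFunctions (illustrative stand-in): the same intent as an on-device call.
app_function_call = {
    "function": "book_ride",
    "parameters": {"destination": "airport"},
}

print(json.dumps(mcp_call["params"], sort_keys=True))
```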
Stated Safety Measures (and What’s Missing)
Google says they’re “designing these features with privacy and security at their core.” Their safety description includes:
- Users can monitor task progress via notifications.
- Users can switch to manual control.
- Gemini alerts users “before completing sensitive tasks, such as making a purchase.”
What’s not there?
- No policy engine.
- No per‑function access control.
- No rate limiting.
- No argument validation.
Security model: trust the agent, notify the user.
If you’ve worked with AI agents in production, you know why this is concerning. Agents can hallucinate, misinterpret instructions, chain actions in unexpected ways, and are vulnerable to prompt‑injection attacks.
Potential Attack Surface
Current exposed functions (Calendar, Notes, Tasks, Samsung Gallery) are relatively benign, but Google plans to expand to:
- Food delivery
- Grocery ordering
- Rideshare
…and open it to all developers in Android 17.
Imagine the same model applied to:
| App Type | Example Function |
|---|---|
| Banking | transfer $500 to this account |
| Email | send this email to my entire contact list |
| Enterprise | export all customer records |
| Payments | send money to … |
Each is just a function call, and the only barrier between the agent and execution could be a notification or a simple “Are you sure?” prompt.
Real‑world precedent: Claude Code deleted 2.5 years of production data last week—not maliciously, but because it misunderstood an instruction and had unrestricted access to destructive tools.
The Needed Fix: Enforcement Layer
The solution isn’t to trust agents less or strip capabilities. Structured, discoverable, typed function calls are a step forward compared to screen‑scraping. What’s missing is an enforcement layer between the agent and the function.
How we solved it in MCP – Intercept
Intercept is a transparent proxy that evaluates every tool call against a YAML policy before forwarding it:
```yaml
tools:
  send_money:
    rules:
      - name: "cap transfers"
        conditions:
          - path: "args.amount"
            op: "lte"
            value: 100
        on_deny: "Transfer exceeds $100 limit"
  delete_account:
    rules:
      - name: "block deletion"
        action: "deny"
  search_photos:
    rules:
      - name: "rate limit"
        rate_limit: 20/minute
```
- The agent never sees these rules.
- It can’t negotiate around them.
- Enforcement happens at the transport layer → deterministic, not probabilistic.
AppFunctions needs the same pattern: a policy engine that lets users and enterprises define exactly what agents can and can’t do, enforced mechanically, not by asking the agent to behave.
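The key property is that the decision is mechanical: same inputs, same verdict, with no model in the loop. A minimal sketch of such an evaluator, mirroring the YAML rules above (names and schema are illustrative, not Intercept's actual implementation):

```python
import time
from collections import defaultdict

# Illustrative policy table mirroring the YAML above (not Intercept's schema).
POLICIES = {
    "send_money": {"max_amount": 100, "on_deny": "Transfer exceeds $100 limit"},
    "delete_account": {"action": "deny"},
    "search_photos": {"rate_limit": 20},  # calls per minute
}

_call_log = defaultdict(list)  # tool name -> timestamps of recent allowed calls

def evaluate(tool, args, now=None):
    """Return (allowed, reason). Deterministic: no model in the loop."""
    now = time.time() if now is None else now
    policy = POLICIES.get(tool)
    if policy is None:
        return True, "no policy; default allow"
    if policy.get("action") == "deny":
        return False, "blocked by policy"
    if "max_amount" in policy and args.get("amount", 0) > policy["max_amount"]:
        return False, policy["on_deny"]
    if "rate_limit" in policy:
        window = [t for t in _call_log[tool] if now - t < 60]
        if len(window) >= policy["rate_limit"]:
            return False, "rate limit exceeded"
        window.append(now)
        _call_log[tool] = window
    return True, "allowed"
```

A proxy built around this sits between the agent and the function: it runs `evaluate()` on every call and only forwards the ones that pass, so the agent never sees the rules, only the outcomes.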
The Broader Landscape
| Platform | Protocol / Feature | Current State |
|---|---|---|
| MCP | Anthropic’s protocol for tool access (desktop/server) | No standard enforcement layer |
| AppFunctions | Google’s protocol for tool access (Android) | No enforcement layer |
| WebMCP | Google’s protocol for tool access (Chrome) | No enforcement layer |
| AWS Bedrock AgentCore | Amazon’s agent gateway (cloud) | No enforcement layer |
Every major platform is building a way for AI agents to call functions, but none have shipped a standard enforcement layer. They’re building the gas pedal and leaving the brakes to someone else.
That someone else is what we’re building at PolicyLayer. Intercept works at the MCP layer today, and its architecture—transparent proxy, deterministic policy, transport‑layer enforcement—is protocol‑agnostic. The same pattern applies to AppFunctions, WebMCP, and whatever comes next.
What Must Exist Before AI Agents Operate Our Apps at Scale
1. Declarative policies
   - Users and enterprises define rules as structured, auditable policy files (not natural-language prompts).
   - Example rules: “This agent can search photos but can’t send messages.” “Transfers are capped at $100.” “No more than 5 actions per minute.”
2. Transport-layer enforcement
   - Policies are enforced below the model context.
   - The agent shouldn’t know the rules exist, shouldn’t be able to reason about them, and definitely shouldn’t be able to override them.
3. Audit trails
   - Every function call and every policy decision is logged in a structured format.
   - When something goes wrong (and it will), you need to know exactly what happened.
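For the audit trail, one structured record per call-plus-decision is enough to reconstruct what happened. A sketch of what such a record could look like (the field names are illustrative, not a standard schema):

```python
import json
import datetime

def audit_record(tool, args, decision, reason):
    """One structured log line per function call and policy decision
    (illustrative schema: timestamp, tool, arguments, verdict, reason)."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "args": args,
        "decision": decision,  # "allow" | "deny"
        "reason": reason,
    })

print(audit_record("send_money", {"amount": 500},
                   "deny", "Transfer exceeds $100 limit"))
```

Because each line is self-contained JSON, the trail can be grepped, shipped to a SIEM, or replayed after an incident without parsing free-form logs.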
Google may eventually bake some of this into Android, but the brakes need to be built now—and that’s where a solution like PolicyLayer comes in.
Overview
Waiting for platform vendors to bolt on security after shipping capability is how many of computing’s worst vulnerabilities have played out.
The enforcement layer needs to exist independently of the platform—that’s what open‑source infrastructure is for.
Our Solution
We’re building Intercept — an open‑source enforcement proxy for AI‑agent tool calls.
- Works with MCP today.
- The architecture applies to any agent‑to‑tool protocol.
Get Involved
Check it out on GitHub:
https://github.com/your‑org/intercept (replace with the actual repository URL)