On-Device Function Calling in Google AI Edge Gallery

Published: March 1, 2026 at 07:38 PM EST
6 min read

Source: Google Developers Blog

February 26, 2026

The real magic of AI happens when a model stops merely describing the world and starts interacting with it. One such interaction mechanism is tool‑use: the ability to predict and invoke function calls (e.g., opening apps or adjusting system settings).

Shifting tool‑use on‑device allows developers to build interactions that respond instantly while remaining fully functional regardless of connectivity. This enables, for instance, a natural‑language voice assistant to instantly create a calendar entry or navigate to a destination while you’re driving.

However, bringing this level of capability to mobile remains a formidable task. Traditional function‑calling has historically required large models with memory footprints far exceeding mobile hardware constraints. The real engineering challenge is to compress these models into a mobile footprint while maintaining accuracy and without draining the battery.

  • Cross‑platform availability – Building on the cross‑platform capabilities of Google AI Edge, we are delighted to bring Google AI Edge Gallery to iOS (App Store link) in addition to Android (Play Store link). Developers can now explore the same high‑performance, on‑device AI use cases powered by Gemma and other open‑weight models directly within the iOS ecosystem.

  • Agentic experiences integrated – The out‑of‑the‑box agentic experiences Mobile Actions and Tiny Garden are now directly available in the gallery, showcasing how Google’s efficient FunctionGemma model translates natural language into function calls on‑device using only 270M parameters.

  • Built‑in benchmarking – Leveraging the state‑of‑the‑art performance benchmarks from our recent LiteRT announcement, the app now includes a benchmarking feature. You can measure and experience LiteRT’s leading CPU and GPU performance across your own devices.

These updates make it easier than ever to develop, test, and experience on‑device AI that is fast, private, and always available. 🚀

Experience Mobile Actions and Tiny Garden

The Mobile Actions demo showcases how FunctionGemma can turn a phone‑based assistant into a fully offline experience. The model parses natural‑language commands—e.g.,

  • “Show me the San Francisco airport on map.”
  • “Create a calendar event for 2:30 PM tomorrow for a cooking class.”
  • “Turn on the flashlight.”

…and maps each request to the appropriate OS tool or app intent.

The Tiny Garden demo is an interactive mini‑game that lets players tend a virtual plot using voice commands. A command such as “Plant sunflowers in the top row and water them” is broken down by the model into specific app functions (e.g., plantCrop, waterCrop) with the correct grid coordinates. This demonstrates how Google’s compact 270M‑parameter FunctionGemma model can adapt to highly specific, custom game or app logic directly on a mobile device—no server calls required.
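Both demos follow the same pattern: the model emits a structured call (a function name plus arguments) and the app dispatches it to a real handler. As a rough sketch of that dispatch step—not the actual Gallery code; the JSON shape, the handler signatures, and the grid arguments below are all assumptions for illustration:

```python
import json

# Hypothetical handlers standing in for the game's real functions;
# the actual Tiny Garden API is not public, so names and signatures
# here are illustrative assumptions.
def plant_crop(crop: str, row: int, col: int) -> str:
    return f"planted {crop} at row {row}, col {col}"

def water_crop(row: int, col: int) -> str:
    return f"watered row {row}, col {col}"

# Registry mapping the names the model emits to app functions.
HANDLERS = {
    "plantCrop": plant_crop,
    "waterCrop": water_crop,
}

def dispatch(model_output: str) -> list[str]:
    """Parse the model's structured output (assumed here to be a JSON
    array of {"name": ..., "args": {...}} objects) and invoke handlers."""
    results = []
    for call in json.loads(model_output):
        fn = HANDLERS.get(call["name"])
        if fn is None:
            results.append(f"unknown function: {call['name']}")
            continue
        results.append(fn(**call["args"]))
    return results

# "Plant sunflowers in the top row and water them" might decode to:
calls = json.dumps([
    {"name": "plantCrop", "args": {"crop": "sunflower", "row": 0, "col": 0}},
    {"name": "waterCrop", "args": {"row": 0, "col": 0}},
])
print(dispatch(calls))
```

Because everything after the model’s decode step is ordinary local code, the same dispatcher works with or without connectivity.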

Demo videos

Mobile Actions on Android

<video controls>
  <source src="URL_TO_MOBILE_ACTIONS_VIDEO.mp4" type="video/mp4">
  Sorry, your browser doesn't support playback for this video.
</video>

Tiny Garden on Android (and iOS as of today)

<video controls>
  <source src="URL_TO_TINY_GARDEN_VIDEO.mp4" type="video/mp4">
  Sorry, your browser doesn't support playback for this video.
</video>


Get started with your own use case

Now that you’ve seen the demos, you can adapt this approach for your own projects:

  1. Fine‑tune your model: see the guide on fine‑tuning for Mobile Actions.
  2. Implement function calling in your app: follow the Function Calling Guide.
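Whichever guide you follow, the core input is a declaration describing each function the model may call. As a minimal sketch—assuming the OpenAPI‑style JSON shape commonly used for function calling, which may differ from the exact schema the AI Edge Function Calling SDK expects—a declaration and a basic argument check might look like:

```python
# Hypothetical tool declaration; field names follow the OpenAPI-style
# convention widely used for function calling, not a confirmed
# AI Edge schema.
create_event = {
    "name": "createCalendarEvent",
    "description": "Create a calendar entry at a given time.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Event title"},
            "start_time": {"type": "string", "description": "ISO-8601 start time"},
        },
        "required": ["title", "start_time"],
    },
}

def validate_args(declaration: dict, args: dict) -> bool:
    """Minimal check that every required parameter is present before
    dispatching the model's call to the real handler."""
    return all(k in args for k in declaration["parameters"]["required"])

print(validate_args(create_event,
                    {"title": "Cooking class", "start_time": "2026-02-27T14:30"}))
```

Validating arguments before dispatch is a useful guardrail on-device, since a small model can occasionally emit an incomplete call.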

Happy building!

Building on the cross‑platform capabilities of Google AI Edge, we’re thrilled to bring the full Android experience to the iOS ecosystem with the launch of the Google AI Edge Gallery on the App Store.

What’s Inside

  • Multi‑turn AI Chat – Conversational AI that stays on‑device.
  • Ask Image – Query images locally for instant insights.
  • Audio Scribe – On‑device transcription without sending data to the cloud.
  • Agentic Demonstrations
    • Mobile Actions – Sophisticated tool‑calling and function‑calling on Apple hardware.
    • Tiny Garden – A playful showcase of on‑device AI capabilities.

By leveraging the unified power of the Google AI Edge stack, we ensure the best of on‑device performance, privacy, and offline reliability is accessible to everyone—regardless of mobile platform.

See Mobile Actions in Action on iOS

<video controls>
  <source src="URL_TO_MOBILE_ACTIONS_IOS_VIDEO.mp4" type="video/mp4">
  Sorry, your browser doesn't support playback for this video.
</video>

Test model performance in app

Want to see this speed in action? You can now benchmark these models directly within the Gallery app on your own devices (available on Android; iOS coming soon).

Using Mobile Actions as an example, the performance is blazingly fast on CPU—1916 tokens/sec (prefill) and 142 tokens/sec (decode) on a Pixel 7 Pro.
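To put those throughput figures in perspective, a back‑of‑the‑envelope latency estimate falls out directly: the prompt is processed at the prefill rate and each generated token at the decode rate. (The 64‑token prompt and 32‑token function call below are assumed sizes for illustration, not measured values.)

```python
# Throughput figures from the Pixel 7 Pro run quoted above.
PREFILL_TPS = 1916  # tokens/sec while processing the prompt
DECODE_TPS = 142    # tokens/sec while generating output

def estimated_latency(prompt_tokens: int, output_tokens: int) -> float:
    """Rough end-to-end latency in seconds: prompt tokens at the
    prefill rate, then each output token at the decode rate."""
    return prompt_tokens / PREFILL_TPS + output_tokens / DECODE_TPS

# e.g. a 64-token command decoded into a ~32-token function call:
print(round(estimated_latency(64, 32), 3))  # → 0.259
```

In other words, a typical command round‑trips in roughly a quarter of a second on CPU alone, which is why the demos feel instantaneous.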

How to run your own benchmarking tests

  1. Open the menu – tap the hamburger icon in the top‑left corner of the Gallery app.
  2. Select Models – tap the Models tile to view the full list of downloadable models.
  3. Benchmark – press the benchmark button and experiment with configurations (adjust prefill/decode tokens or the number of runs) to see exactly how FunctionGemma performs on your hardware.

Benchmark example

Try it now on Android and see how the model performs on your phone!

Get started today

Ready to build your first local agent? Here’s how you can dive in:

  • Explore the demos – Download the Google AI Edge Gallery app and see Mobile Actions and Tiny Garden in action:

    • Android
    • iOS
  • Build your own – Follow the fine‑tuning recipes to adapt FunctionGemma to your app’s logic and customize the AI Edge Gallery with your own functions (guide).

  • Join the conversation – Visit the AI Edge Gallery repository on GitHub to follow progress, report issues, or contribute to the future of on‑device AI use cases.

We can’t wait to see the agentic features you’ll bring to life. Happy coding!

Acknowledgements

Key Contributors

  • Francesco Visin
  • Hriday Chhabria
  • Jiageng Zhang
  • Jing Jin
  • Kat Black
  • Marissa Ikonomidis
  • Matthew Chan
  • Ravin Kumar
  • Rishika Sinha
  • Sahil Dua
  • Xu Chen
  • Na Li
  • Yinghao Sun
  • Yishuang Pang

Additional Team Members

  • Byungchul Kim
  • Deepak Nagaraj Halliyavar
  • Fengwu Yao
  • Jae Yoo
  • Jenn Lee
  • Weiyi Wang
  • Xiaoming Hu
  • Yasir Modak
  • Yi‑Chun Kuo
  • Yu‑hui Chen
  • Zhe Chen

Leadership

  • Cormac Brick
  • Kathleen Kenealy
  • Matthias Grundmann
  • Ram Iyengar
  • Sachin Kotwani