Vibe coding mobile apps with Compose Driver

Published: February 6, 2026 at 12:54 PM EST
3 min read
Source: Dev.to

What is it?

Compose Driver is a library and Gradle plugin that lets AI agents drive your Jetpack Compose app. It works by wrapping your UI in a test harness that listens for HTTP requests.

This means you can have an AI agent:

  • Query the UI to see what buttons or text are on the screen.
  • Interact by clicking, swiping, or typing.
  • Verify the result by printing the UI tree, taking a screenshot, or recording a GIF.

It runs on the JVM headlessly, so it’s fast and can run anywhere, making it perfect for background or cloud agents. It supports both Desktop/Multiplatform Compose and Android Jetpack Compose (via Robolectric).
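To make the query/interact/verify loop concrete, here is a sketch of what driving the harness could look like from the agent's side. The port and the exact endpoint shapes (`/click?tag=…`, `/screenshot`) are assumptions for illustration; check the repository for the real API.

```kotlin
// Hypothetical agent-side calls against the local test harness.
// Endpoint paths, query parameters, and the port are assumptions.
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()
    val base = "http://localhost:8080"

    // Interact: ask the harness to click a node identified by test tag.
    val click = HttpRequest.newBuilder(URI.create("$base/click?tag=like_button")).build()
    println(client.send(click, HttpResponse.BodyHandlers.ofString()).body())

    // Verify: capture a screenshot of the UI after the click.
    val shot = HttpRequest.newBuilder(URI.create("$base/screenshot")).build()
    val png = client.send(shot, HttpResponse.BodyHandlers.ofByteArray()).body()
    java.io.File("screen.png").writeBytes(png)
}
```

Because the harness is just an HTTP server, any agent that can make web requests — a CLI tool, a cloud runner, or a script — can drive the app the same way.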

How it works

The core implementation is under 300 lines. It starts a small local server that translates HTTP requests into standard ComposeUiTest actions.

runComposeUiTest {
    // Set the application content.
    // There is also a /reset endpoint to change this content at runtime.
    setContent { MyApplication() }

    startServer {
        get("/click") { request ->
            val matcher = request.node()   // Find the node (e.g. by tag or text)
            onNode(matcher).performClick() // Perform the click
            waitForIdle()                  // Wait for animations to settle (advances the virtual clock)
            request.respondText("ok")      // Tell the agent it's done
        }
        get("/screenshot") { request ->
            val node = request.node()
            val image = onNode(node).captureToImage()
            request.respondPng(image)
        }
        // …
    }
}

This simple loop lets the agent navigate the app just like a user would, but much faster because it can use a virtual clock to speed up animations.

The approach relies on Jetpack Compose’s powerful testing APIs, which provide fine‑grained control over the UI clock and input injection while remaining decoupled from the rendering platform.
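As an illustration of that clock control, `ComposeUiTest` exposes a `MainTestClock` that can be paused and fast-forwarded in virtual time. A minimal sketch (where `MyAnimatedScreen` and the test tags are placeholders, not part of Compose Driver):

```kotlin
import androidx.compose.ui.test.ExperimentalTestApi
import androidx.compose.ui.test.assertExists
import androidx.compose.ui.test.onNodeWithTag
import androidx.compose.ui.test.performClick
import androidx.compose.ui.test.runComposeUiTest

// MyAnimatedScreen is a placeholder for any composable with animations.
@OptIn(ExperimentalTestApi::class)
fun clockExample() = runComposeUiTest {
    setContent { MyAnimatedScreen() }

    // Take manual control of the test clock instead of auto-advancing.
    mainClock.autoAdvance = false

    onNodeWithTag("expand_button").performClick()

    // Jump one second forward in virtual time; the animation completes
    // instantly rather than playing out in real time.
    mainClock.advanceTimeBy(1_000)

    onNodeWithTag("details").assertExists()
}
```

This is why a headless agent can step through an animated UI far faster than a human tapping through it on a device.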

Playing with it

To demonstrate, I built a simple Instagram‑style clone. Prompting the agent with “Build an Instagram UI clone” or “Improve the app and add missing features” resulted in the agent navigating menus, clicking buttons, and adding five new screens in a single run. The prototype was created in under an hour with about ten prompts.

Here’s a short video of it in action:

(video embed omitted for brevity)

A word on reliability

Compose Driver doesn’t magically solve all challenges of building mobile apps with AI. For production workloads, reviewing the generated code and understanding the agent’s decisions remains essential. However, for hobby projects, prototypes, or experimentation, it provides a fast feedback loop that can significantly improve the development experience.

Providing AI agents with tools to verify their work tends to improve the quality of generated code, as they can close the feedback loop and iterate autonomously.

Check it out

If you’re interested in agentic workflows for mobile or just want to experiment, the code is open source:

https://github.com/jdemeulenaere/compose-driver

The repository includes a sample/ project that you can open in your favorite AI‑enabled editor to get started quickly. I hope it’s useful—let me know what you think!
