Building an Open Source Android Voice Assistant with Kotlin
Source: Dev.to
Replace Google Assistant with Your Own AI
What if you could long‑press your Home button and talk to YOUR AI instead of Google’s? I built OpenClaw Assistant – an open‑source Android app that does exactly that.
- 📹 Demo:
- 🔗 GitHub:
Features
- 🏠 System Assistant Integration – long‑press Home to activate
- 🎤 Custom Wake Words – “Jarvis”, “Computer”, or any phrase you choose
- 📴 Offline Wake Word Detection – powered by Vosk, no cloud required
- 🔊 Voice I/O – speech recognition + TTS
- 🔗 Backend‑agnostic – connect to Ollama, OpenAI, Claude, or any custom API
Architecture Overview
| Component | Technology |
|---|---|
| UI | Kotlin + Jetpack Compose + Material 3 |
| System Hook | VoiceInteractionService |
| Wake Word | Vosk (offline) |
| Speech | Android SpeechRecognizer + TTS |
| Network | OkHttp + Gson |
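The system hook in the table above is Android's `VoiceInteractionService`. A manifest registration for it typically looks like the sketch below (class and resource names here are placeholders, not the app's actual ones):

```xml
<!-- AndroidManifest.xml: declares the app as a voice interaction service -->
<service
    android:name=".assistant.MyVoiceInteractionService"
    android:permission="android.permission.BIND_VOICE_INTERACTION"
    android:exported="true">
    <meta-data
        android:name="android.voice_interaction"
        android:resource="@xml/interaction_service" />
    <intent-filter>
        <action android:name="android.service.voice.VoiceInteractionService" />
    </intent-filter>
</service>
```

Once the service is declared, the user selects the app as the device's digital assistant under system settings, which is what makes the long-press-Home gesture route to it.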
How it works:
- The app registers as Android’s digital assistant.
- Vosk listens for wake words locally.
- On activation, speech is transcribed and sent to your webhook.
- The response is spoken via TTS.
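Step three of the flow above, sending the transcript to the webhook, can be sketched as plain Kotlin. The app itself uses OkHttp and Gson; this dependency-free version uses the JDK's built-in `HttpClient` instead, and the endpoint URL and function names are illustrative, not taken from the repository:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Builds the JSON payload described in the webhook contract below.
// A real implementation should use a JSON library (the app uses Gson)
// so that quotes in the message are escaped properly.
fun buildPayload(message: String, sessionId: String): String =
    """{"message": "$message", "session_id": "$sessionId"}"""

// POSTs one transcribed utterance to the webhook and returns the raw JSON reply,
// which the app would then parse and hand to TTS.
fun askAssistant(endpoint: String, message: String, sessionId: String): String {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create(endpoint))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(buildPayload(message, sessionId)))
        .build()
    return client.send(request, HttpResponse.BodyHandlers.ofString()).body()
}
```

The `session_id` lets the backend keep per-conversation state across turns, which is why it travels with every request.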
Installation
```shell
git clone https://github.com/yuga-hashimoto/OpenClawAssistant
cd OpenClawAssistant
./gradlew assembleDebug
```
Or download the APK from the Releases page on GitHub.
Webhook API Contract
Your backend must accept a POST request:
```
POST /your-endpoint
Content-Type: application/json

{
  "message": "user's speech",
  "session_id": "..."
}
```

and respond with:

```
{
  "response": "AI's reply"
}
```
Contributing
Contributions are welcome! Feel free to open issues or submit pull requests on the GitHub repository.