I built a local AI Virtual Assistant (JARVIS-inspired) using Python, PyQt6, and Ollama.
Source: Dev.to
How it works
- Brain: Uses Ollama as the backend, allowing you to run models like Llama 3, Mistral, or Phi‑3 locally.
- Interface: Built with PyQt6 featuring a “holographic” glassmorphism effect (transparent and sleek).
- Memory: Persistent local memory system to remember previous interactions.
- Voice: Integrated with Piper for realistic text‑to‑speech.
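As a sketch of the "Brain" step: Ollama exposes a local HTTP API (by default at `http://localhost:11434`), so a chat turn needs nothing beyond the standard library. The helper names and the `llama3` model tag here are illustrative, not necessarily what the repo uses.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_chat_payload(model, history, user_message):
    """Append the new user turn to the running history and build the request body."""
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages, "stream": False}

def ask_ollama(payload):
    """POST one chat turn to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

payload = build_chat_payload("llama3", [], "Hello, JARVIS!")
# ask_ollama(payload) returns the model's reply once `ollama serve` is running.
```

Keeping the full `messages` history in the payload is what gives the model conversational context between turns.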
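The persistent-memory idea can be sketched with a plain JSON file on disk; the file name and record structure below are assumptions for illustration, not the repo's actual storage format.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("jarvis_memory.json")  # hypothetical location for the memory store

def load_memory(path=MEMORY_FILE):
    """Return the saved conversation history, or an empty list on first run."""
    if path.exists():
        return json.loads(path.read_text(encoding="utf-8"))
    return []

def save_memory(history, path=MEMORY_FILE):
    """Write the full conversation history back to disk."""
    path.write_text(json.dumps(history, indent=2), encoding="utf-8")

# Each turn appends to the history, so context survives restarts.
history = load_memory()
history.append({"role": "user", "content": "Remember that my name is Sam."})
save_memory(history)
```

Because the history is reloaded on startup and handed back to the model, the assistant "remembers" earlier sessions without any cloud storage.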
Why local?
I wanted to prove that you don’t need OpenAI or Google to have a functional assistant. This runs entirely on your hardware.
Source Code & Setup
The repository is public, and a full guide on how to set it up is provided (it’s very easy!).
Check it out here: https://github.com/Jm7997/JARVIS
I’m still a student and learning, so I’d really appreciate any feedback, feature ideas, or even a star on GitHub if you find it cool!
What features should I add next? (I’m thinking about Spotify integration or home automation).
Background
I wanted to build a JARVIS‑like assistant that works completely offline to learn more about integrating LLMs with Python and creating transparent UIs with PyQt6.
What it does
It provides a holographic‑style desktop interface to chat with local AI models via Ollama, including persistent conversation memory and text‑to‑speech.
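As a rough sketch of the speech step described above: Piper’s command-line tool reads text on stdin and writes a WAV file. The voice-model filename and output path below are assumptions about a typical Piper setup, not necessarily what the repo uses.

```python
import subprocess

def build_piper_command(voice_model, wav_out):
    """Command line for Piper's CLI, which reads text on stdin and writes a WAV file."""
    return ["piper", "--model", voice_model, "--output_file", wav_out]

def speak(text, voice_model="en_US-amy-medium.onnx", wav_out="reply.wav"):
    """Synthesize `text` to `wav_out` with a local Piper voice model (path is illustrative)."""
    subprocess.run(
        build_piper_command(voice_model, wav_out),
        input=text.encode("utf-8"),
        check=True,
    )
    return wav_out

cmd = build_piper_command("en_US-amy-medium.onnx", "out.wav")
# speak("Hello, sir.") would produce reply.wav once Piper and a voice model are installed.
```

Wiring this after the Ollama reply keeps the whole chat-then-speak loop offline.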
Target Audience
Anyone interested in local AI, desktop automation, or learning how to use PyQt6 for modern‑looking Python applications.
Comparison
Unlike cloud-based assistants, this is 100% private and runs on your own hardware without subscription fees or API keys.