AI-Radar.it
Source: Dev.to

Overview
Hi all, happy to join the community.
As an AI enthusiast, I've been studying how large language models (LLMs) work, reading many books, and experimenting with Google Colab, Hugging Face, and Kaggle. Along the way I've built several AI‑related applications in VS Code and Google Antigravity. This post introduces my small contribution, ai‑radar.it, a news aggregator built entirely with VS Code and Antigravity.
Key Features
- Python‑focused content – most articles are Python‑related and make extensive use of Hugging Face libraries.
- Local LLM support – runs a locally hosted 3B model served via Ollama for certain tasks.
- GenAI integration – other AI‑related tasks are handled via generative AI services.
- RAG with Chromadb – provides contextual answers through an internal chatbot called Ask Observatory.
- LLM calculator – evaluates whether your hardware (currently only RAM is considered) can run an LLM on‑premise and suggests suitable models.
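The RAM check behind such a calculator can be sketched as follows. This is a minimal illustration only: the model list, quantization sizes, and overhead factor are my own assumptions, not ai‑radar.it's actual logic.

```python
# Rough feasibility check: can a model fit in the available RAM?
# Memory needed ≈ parameter count × bytes per parameter (set by the
# quantization), plus some overhead for the KV cache and runtime.

MODELS = {
    # model label: parameter count in billions (illustrative entries)
    "3B": 3.0,
    "7B": 7.0,
    "13B": 13.0,
}

BYTES_PER_PARAM = {
    "fp16": 2.0,  # full half-precision weights
    "q8": 1.0,    # 8-bit quantization
    "q4": 0.5,    # 4-bit quantization
}

OVERHEAD = 1.2  # ~20% extra for KV cache and runtime buffers (assumption)


def required_ram_gb(params_billion: float, quant: str = "q4") -> float:
    """Estimated RAM in GB needed to load the model weights."""
    return params_billion * BYTES_PER_PARAM[quant] * OVERHEAD


def runnable_models(available_ram_gb: float, quant: str = "q4") -> list[str]:
    """Return the models that should fit in the given amount of RAM."""
    return [
        name
        for name, size in MODELS.items()
        if required_ram_gb(size, quant) <= available_ram_gb
    ]
```

For example, with 4 GB of free RAM and 4-bit quantization, only the 3B model passes the check, which matches the kind of suggestion the calculator makes.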
Technical Details
- Development environment: VS Code + Google Antigravity.
- Model serving: a 3B model running locally via Ollama.
- Vector store: Chromadb for Retrieval‑Augmented Generation (RAG).
- Chatbot: Ask Observatory, which leverages the local model and vector store to answer queries.
- Hardware check: Simple calculator that uses available RAM to determine feasibility of on‑premise LLM deployment.
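The RAG flow behind Ask Observatory (retrieve the most relevant documents, then hand them to the local model as context) can be illustrated with a toy retriever. Here a pure-Python word-overlap score stands in for Chromadb's embedding similarity, and the prompt assembly is my sketch, not the site's actual code:

```python
# Toy illustration of the retrieve-then-generate (RAG) flow:
# a Jaccard word-overlap score stands in for Chromadb's vector search.

def similarity(query: str, doc: str) -> float:
    """Jaccard overlap between the query's and document's word sets."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: similarity(query, d), reverse=True)[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the context-augmented prompt for the local model."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In the real pipeline, Chromadb would return the nearest neighbours by embedding distance rather than word overlap, and the assembled prompt would be sent to the locally served model through Ollama.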
Getting Started
Feel free to explore the project if you’re interested.
— Davide