175K+ publicly-exposed Ollama AI instances discovered
Source: The Hacker News

Public Exposure of Ollama Instances
- 175,000 Ollama systems misconfigured, publicly exposed without authentication
- Attackers exploit instances via LLMjacking to generate spam and malware content
- Issue stems from user misconfiguration, fixable by binding to localhost only
Security researchers have identified roughly 175,000 publicly exposed Ollama instances worldwide, putting them at risk of malicious exploitation. Some of these instances are already being abused, so operators should consider reconfiguring their deployments.
SentinelOne SentinelLABS and Censys found that many businesses run AI models locally with Ollama, which by default listens only on the local machine. In about 175,000 cases, the service has been reconfigured to listen on all network interfaces, making the AI accessible to anyone on the internet without authentication.
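Operators can test whether their own deployment is reachable from outside with a quick request against Ollama's default port. A minimal sketch, assuming the standard port 11434 and the `/api/tags` model-listing endpoint; `YOUR_PUBLIC_IP` is a placeholder for your own address:

```shell
# Run this from OUTSIDE your network (e.g. a phone hotspot).
# If it returns a JSON list of models, the instance is publicly exposed.
curl -s http://YOUR_PUBLIC_IP:11434/api/tags
```

A timeout or connection refusal here suggests the instance is not directly reachable; a JSON response means anyone on the internet can use it.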
LLMjacking
Many of these instances run on home connections, VPSes, or cloud machines, and roughly half allow “tool calling,” enabling the AI to execute code, call APIs, and interact with other systems. Malicious actors can abuse such instances in an attack known as LLMjacking, using the victim’s electricity, bandwidth, and compute to generate spam and malware, or to resell access to other criminals.
These systems often lack corporate firewalls, monitoring, and authentication, especially when hosted on residential IPs, making them hard to track and easy to exploit. Some also run uncensored models without safety checks, further increasing abuse potential.
The issue is not a software bug but a configuration problem. Ollama defaults to binding only to localhost (127.0.0.1). Users must ensure their instances remain bound to localhost or otherwise protect them with authentication and firewall rules to prevent LLMjacking.
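For operators who need to lock a deployment back down, the binding is controlled by the `OLLAMA_HOST` environment variable. A configuration sketch for a systemd-managed install, assuming the service is named `ollama` and using `ufw` as an example firewall; adapt to your own setup:

```shell
# Pin Ollama to loopback (this matches its out-of-the-box default).
# Open a systemd override and add the lines shown in the comment:
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_HOST=127.0.0.1:11434"
sudo systemctl restart ollama

# Defense in depth: also block the port at the host firewall.
sudo ufw deny 11434/tcp
```

If remote access is genuinely required, place the instance behind a reverse proxy with authentication rather than exposing the bare API.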