Run local LLMs in under 5 minutes using Nanocl
Source: Dev.to

Easily deploy and manage your own AI‑powered ChatGPT website using Nanocl, Ollama, and Open WebUI.
Overview
This guide will show you how to self‑host an AI model using Nanocl, a lightweight container‑orchestration platform. By combining Nanocl with Ollama (for running large language models locally) and Open WebUI (for a user‑friendly web interface), you can quickly set up your own private ChatGPT‑like service.
📺 Watch the YouTube video tutorial
Stack Components
- Nanocl – Simple, efficient container orchestration for easy deployment and scaling.
- Ollama – Run large language models locally via a powerful API.
- Open WebUI – Modern web interface to interact with your AI model.
Prerequisites
Before you begin, ensure you have the following installed:
- Docker – Install Docker by following the official guide for your Linux distribution.
- Nanocl – Install the Nanocl CLI:

  ```shell
  curl -fsSL https://download.next-hat.com/scripts/get-nanocl.sh | sh
  ```

  Then set up Nanocl's group and internal services:

  ```shell
  sudo groupadd nanocl
  sudo usermod -aG nanocl $USER
  newgrp nanocl
  nanocl install
  ```

  For more details, see the Nanocl documentation.
- (Optional) Nvidia Container Toolkit – If you want GPU acceleration, follow the Nvidia container toolkit installation guide.
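Before moving on, you can sanity-check that the required tools ended up on your PATH. This is just an illustrative snippet: it only reports what it finds and does not fail if something is missing.

```shell
# Report whether each prerequisite binary is available on PATH.
for bin in docker nanocl; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: installed"
  else
    echo "$bin: NOT found"
  fi
done
```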
Step 1 – Deploy Ollama with Nanocl
Create a file named ollama.Statefile.yml:
```yaml
ApiVersion: v0.17

Cargoes:
- Name: ollama
  Container:
    Image: docker.io/ollama/ollama:latest
    Hostname: ollama.local
    HostConfig:
      Binds:
      - ollama:/root/.ollama # Persist Ollama data
      Runtime: nvidia # Enable GPU support (optional)
      DeviceRequests:
      - Driver: nvidia
        Count: -1
        Capabilities: [[gpu]]
```
Deploy Ollama:
```shell
nanocl apply -s ollama.Statefile.yml
```
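Once the cargo is running, you can check that Ollama answers on its API port (`GET /api/tags` lists locally available models). Keep in mind that `ollama.local` only resolves where the name is mapped — inside the Nanocl network, or on the host after an `/etc/hosts` entry like the one in Step 3 — so this sketch merely reports reachability instead of failing hard:

```shell
# Probe Ollama's REST API; /api/tags lists the models Ollama has locally.
if curl -fsS --max-time 5 http://ollama.local:11434/api/tags >/dev/null 2>&1; then
  echo "Ollama is up"
else
  echo "Ollama not reachable yet (still starting, or ollama.local not resolvable here)"
fi
```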
Step 2 – Deploy Open WebUI with Nanocl
Create a file named openwebui.Statefile.yml:
```yaml
ApiVersion: v0.17

Cargoes:
- Name: open-webui
  Container:
    Image: ghcr.io/open-webui/open-webui:main
    Hostname: open-webui.local
    Env:
    - OLLAMA_BASE_URL=http://ollama.local:11434 # Connect to Ollama
    HostConfig:
      Binds:
      - open-webui:/app/backend/data # Persist WebUI data

Resources:
- Name: open-webui.local
  Kind: ncproxy.io/rule
  Data:
    Rules:
    - Domain: open-webui.local
      Network: All
      Locations:
      - Path: /
        Version: 1.1
        Headers:
        - Upgrade $http_upgrade
        - Connection "Upgrade"
        Target:
          Key: open-webui.global.c
          Port: 8080
```
Deploy Open WebUI:
```shell
nanocl apply -s openwebui.Statefile.yml
```
It will take a bit of time for Open WebUI to start up as it downloads necessary components. You can monitor the progress with:
```shell
nanocl cargo logs open-webui -f
```
Step 3 – Access Open WebUI
Add the following line to your /etc/hosts file to map the domain:
```
127.0.0.1 open-webui.local
```
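If you prefer not to edit the file by hand, a small idempotent snippet can add the entry for you. It assumes you can use `sudo`; `sudo -n` fails immediately instead of prompting when no credentials are cached.

```shell
HOSTS_LINE='127.0.0.1 open-webui.local'
# Append the mapping only if it is not already there (idempotent).
if grep -qF "$HOSTS_LINE" /etc/hosts 2>/dev/null; then
  echo "mapping already present"
else
  echo "$HOSTS_LINE" | sudo -n tee -a /etc/hosts >/dev/null \
    && echo "mapping added" \
    || echo "could not write /etc/hosts (run with root privileges)"
fi
```

Running it a second time is a no-op, so it is safe to keep in a setup script.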
Now open your browser and go to http://open-webui.local. You should see the Open WebUI welcome screen.
1️⃣ Create Your Admin Account
Click Get Started, fill in your details, and click Create Admin Account.


2️⃣ Download a Model
After logging in, click your avatar (top‑right) → Admin Panel → Settings → Models. Click the download icon in the top‑right corner, select a model (e.g., gemma2:2b), and click Download.
Wait for the download to complete. The model will appear in your list of available models.
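If you would rather script the download than click through the UI, Ollama also exposes a pull endpoint (`POST /api/pull`). This is a sketch that assumes `ollama.local` resolves from where you run it (for example via the `/etc/hosts` entry from Step 3); while the download runs, the API streams JSON status lines.

```shell
# Ask Ollama to pull a model via its REST API instead of the WebUI.
curl -fsS http://ollama.local:11434/api/pull \
  -d '{"name": "gemma2:2b"}' \
  || echo "pull request failed (is Ollama reachable from here?)"
```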
3️⃣ Start Chatting
Once the model is ready, create a new chat and say “Hi” to your AI model!
And that’s it! You now have your own self‑hosted AI model running with Nanocl, Ollama, and Open WebUI.

