OpenClaw Quickstart: Install with Docker (Ollama GPU or Claude + CPU)
Source: Dev.to
Introduction
OpenClaw is a self‑hosted AI assistant that can run with local LLM runtimes such as Ollama or with cloud‑based models like Claude Sonnet 4.6 via the Anthropic API. This quick‑start guide shows how to deploy OpenClaw with Docker, configure either a GPU‑powered local model or a CPU‑only cloud model, and verify that the assistant is working end‑to‑end.
The goal is simple:
- Get OpenClaw running.
- Send a request.
- Confirm that it works.
This is not a production hardening guide.
You have two options:
- **Path A – Local GPU using Ollama** (recommended if you have a compatible GPU)
- **Path B – CPU‑only using Claude Sonnet 4.6 via the Anthropic API**
Both paths share the same core installation steps.
Prerequisites
| Requirement | Details |
|---|---|
| Git | git command line |
| Docker Desktop (or Docker + Docker Compose) | Docker Engine ≥ 20.10 |
| Terminal | Bash, Zsh, PowerShell, etc. |
| GPU (optional) | NVIDIA or AMD, with drivers installed |
| Ollama (optional) | Installed on the host if using Path A |
| Anthropic API key | Required for Path B |
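Before starting, a quick sanity check of the tooling can save debugging time later. The commands below are standard version checks; the `nvidia-smi` line only applies if you plan to use an NVIDIA GPU for Path A.

```bash
# Confirm the core tooling is installed and on the PATH
git --version
docker --version
docker compose version

# Optional (Path A, NVIDIA GPU): confirm the driver can see the GPU
nvidia-smi
```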
Installation
```bash
# Clone the repository
git clone https://github.com/openclaw/openclaw.git
cd openclaw

# Copy the example environment file
cp .env.example .env
```

Edit the `.env` file (see the sections below for the specific variables you need).

Start the containers:

```bash
docker compose up -d
```

Verify that the containers are running:

```bash
docker ps
```
At this point OpenClaw is up, but it is not yet connected to an LLM provider.
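If `docker ps` does not show what you expect, the compose-level view and the startup logs are usually more helpful. The exact service names depend on the project's `docker-compose.yml`, so treat the output as a guide rather than a fixed reference.

```bash
# List only the services belonging to this compose project
docker compose ps

# Follow the startup logs until the services settle (Ctrl+C to stop)
docker compose logs -f
```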
Configure the LLM Provider
Path A – Local GPU with Ollama
1. Install Ollama (if not already installed):

   ```bash
   curl -fsSL https://ollama.com/install.sh | sh
   ```

2. Pull and test a model (e.g., `llama3`):

   ```bash
   ollama pull llama3
   ollama run llama3   # should return a response
   ```

3. Update `.env`:

   ```
   LLM_PROVIDER=ollama
   OLLAMA_BASE_URL=http://host.docker.internal:11434
   OLLAMA_MODEL=llama3
   ```

4. Restart the containers:

   ```bash
   docker compose restart
   ```
OpenClaw will now route inference requests to the local Ollama instance.
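If requests appear to hang, it is worth confirming that Ollama itself answers over HTTP before digging into OpenClaw. The checks below use Ollama's standard API on port 11434. Note that on Linux hosts `host.docker.internal` is not defined by default; it typically needs an `extra_hosts: "host.docker.internal:host-gateway"` entry in the compose file before containers can reach the host.

```bash
# List the models Ollama currently serves
curl http://localhost:11434/api/tags

# One-off generation test against the pulled model
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Say hello", "stream": false}'
```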
For more details on Ollama installation, model storage locations, and CLI commands, see the official docs:
- [Install Ollama and Configure Models Location]
- [Ollama CLI Cheatsheet (2026 update)]
Path B – CPU‑only with Claude Sonnet 4.6
1. Obtain an Anthropic API key from the Anthropic console.

2. Update `.env`:

   ```
   LLM_PROVIDER=anthropic
   ANTHROPIC_API_KEY=your_api_key_here
   ANTHROPIC_MODEL=claude-sonnet-4-6
   ```

3. Restart the containers:

   ```bash
   docker compose restart
   ```
OpenClaw will now use Claude Sonnet 4.6 for inference, which works well on machines without a GPU.
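If OpenClaw returns errors on this path, it helps to test the API key directly against Anthropic's Messages API, independently of OpenClaw. The model identifier below simply mirrors the `.env` value above; if the API rejects it, use whichever model name your Anthropic console lists.

```bash
# Verify the key and model name work outside of OpenClaw
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
        "model": "claude-sonnet-4-6",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```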
Verify the Setup
Health check
```bash
curl http://localhost:3000/health
```
You should receive a JSON response indicating a healthy status.
Simple chat test
```bash
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain what OpenClaw does in simple terms."}'
```
A structured response confirms that the request‑response loop is functional.
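If you have `jq` installed, piping the same request through it makes the response structure easier to inspect; the exact field names depend on the OpenClaw version you are running.

```bash
curl -s -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain what OpenClaw does in simple terms."}' | jq .
```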
Debugging tips
- Check container logs:

  ```bash
  docker compose logs
  ```

- Confirm Ollama is running (GPU path):

  ```bash
  ollama list
  ```

- Verify environment variables – ensure `OLLAMA_BASE_URL` or `ANTHROPIC_API_KEY` is correct.

- GPU not being used?
  - Confirm GPU drivers are installed on the host.
  - Ensure Docker has GPU access enabled (e.g., the `--gpus all` flag or Docker Desktop GPU settings); a standalone check follows this list.
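For the GPU case, a quick way to separate driver problems from container problems is to run `nvidia-smi` inside a throwaway container. The CUDA image tag below is only an example; pick one that matches your installed driver, and on Linux make sure the NVIDIA Container Toolkit is installed first.

```bash
# If this prints your GPU, Docker can pass it through to containers
docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi
```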
Next Steps
- Connect messaging platforms (Slack, Discord, etc.).
- Enable document retrieval and knowledge bases.
- Experiment with routing strategies and tool usage.
- Add observability, metrics, and logging.
- Tune performance and cost settings for your chosen LLM provider.
Getting OpenClaw operational is the first step; from here you can explore the richer architectural features and build sophisticated AI‑driven workflows.