OpenClaw Quickstart: Install with Docker (Ollama GPU or Claude + CPU)

Published: March 3, 2026 at 06:41 AM EST
3 min read
Source: Dev.to

Introduction

OpenClaw is a self‑hosted AI assistant that can run with local LLM runtimes such as Ollama or with cloud‑based models like Claude Sonnet 4.6 via the Anthropic API. This quick‑start guide shows how to deploy OpenClaw with Docker, configure either a GPU‑powered local model or a CPU‑only cloud model, and verify that the assistant is working end‑to‑end.

The goal is simple:

  1. Get OpenClaw running.
  2. Send a request.
  3. Confirm that it works.

This is not a production‑hardening guide.

You have two options:

  • Path A – Local GPU using Ollama (recommended if you have a compatible GPU)
  • Path B – CPU‑only using Claude Sonnet 4.6 via the Anthropic API

Both paths share the same core installation steps.

Prerequisites

  • Git – the git command line tool
  • Docker Desktop (or Docker + Docker Compose) – Docker Engine ≥ 20.10
  • Terminal – Bash, Zsh, PowerShell, etc.
  • GPU (optional) – NVIDIA or AMD, with drivers installed
  • Ollama (optional) – installed on the host if using Path A
  • Anthropic API key – required for Path B
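Before cloning anything, it can save time to confirm the required tools are on your PATH. A minimal check script (the tool names match the table above; nothing here is OpenClaw‑specific):

```shell
# Report which prerequisites are installed and which are missing.
for tool in git docker curl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
# Docker Compose v2 is a plugin subcommand, so check it separately.
docker compose version >/dev/null 2>&1 && echo "ok: docker compose" || echo "missing: docker compose"
```

Anything reported as `missing` should be installed before continuing.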

Installation

# Clone the repository
git clone https://github.com/openclaw/openclaw.git
cd openclaw

# Copy the example environment file
cp .env.example .env

Edit the .env file (see the sections below for the specific variables you need).
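As an illustration, a filled‑in `.env` might look like the sketch below, with one provider block active and the other commented out (the variable names are the ones used later in this guide; your `.env.example` may contain additional settings):

```shell
# .env — enable ONE provider block, leave the other commented out

# Path A: local Ollama
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://host.docker.internal:11434
OLLAMA_MODEL=llama3

# Path B: Anthropic (uncomment to use instead)
# LLM_PROVIDER=anthropic
# ANTHROPIC_API_KEY=your_api_key_here
# ANTHROPIC_MODEL=claude-sonnet-4-6
```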

Start the containers:

docker compose up -d

Verify that the containers are running:

docker ps

At this point OpenClaw is up, but it is not yet connected to an LLM provider.

Configure the LLM Provider

Path A – Local GPU with Ollama

  1. Install Ollama (if not already installed):

    curl -fsSL https://ollama.com/install.sh | sh
  2. Pull and test a model (e.g., llama3):

    ollama pull llama3
    ollama run llama3   # should return a response
  3. Update .env:

    LLM_PROVIDER=ollama
    OLLAMA_BASE_URL=http://host.docker.internal:11434
    OLLAMA_MODEL=llama3
  4. Restart the containers:

    docker compose restart

OpenClaw will now route inference requests to the local Ollama instance.
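If requests fail after this step, it helps to probe the Ollama API directly from the host, bypassing OpenClaw entirely. A minimal sketch, assuming Ollama's default port 11434:

```shell
# /api/tags lists installed models; a successful response means the server is up.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "ollama: reachable"
  # Optional end-to-end generation test (non-streaming):
  curl -s http://localhost:11434/api/generate \
       -d '{"model": "llama3", "prompt": "Say hello", "stream": false}'
else
  echo "ollama: not reachable on localhost:11434"
fi
```

If this fails, the problem is on the Ollama side; if it succeeds but OpenClaw still errors, check `OLLAMA_BASE_URL` in `.env`.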

For more details on Ollama installation, model storage locations, and CLI commands, see the official docs:
[Install Ollama and Configure Models Location]
[Ollama CLI Cheatsheet (2026 update)]

Path B – CPU‑only with Claude Sonnet 4.6

  1. Obtain an Anthropic API key from the Anthropic console.

  2. Update .env:

    LLM_PROVIDER=anthropic
    ANTHROPIC_API_KEY=your_api_key_here
    ANTHROPIC_MODEL=claude-sonnet-4-6
  3. Restart the containers:

    docker compose restart

OpenClaw will now use Claude Sonnet 4.6 for inference, which works well on machines without a GPU.
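To rule out a bad or expired key, you can call the Anthropic Messages API directly from the host; this bypasses OpenClaw, so a success here narrows any failure down to configuration. A sketch using the documented headers and the model id from `.env` above:

```shell
# Sanity-check the API key against the Anthropic Messages API.
if [ -z "${ANTHROPIC_API_KEY:-}" ]; then
  echo "ANTHROPIC_API_KEY is not set"
else
  curl -s https://api.anthropic.com/v1/messages \
       -H "x-api-key: ${ANTHROPIC_API_KEY}" \
       -H "anthropic-version: 2023-06-01" \
       -H "content-type: application/json" \
       -d '{"model": "claude-sonnet-4-6", "max_tokens": 32,
            "messages": [{"role": "user", "content": "ping"}]}'
fi
```

A JSON reply with a `content` field means the key and model id are valid; an `error` object usually names the exact problem.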

Verify the Setup

Health check

curl http://localhost:3000/health

You should receive a JSON response indicating a healthy status.

Simple chat test

curl -X POST http://localhost:3000/chat \
     -H "Content-Type: application/json" \
     -d '{"message": "Explain what OpenClaw does in simple terms."}'

A structured response confirms that the request‑response loop is functional.

Debugging tips

  • Check container logs

    docker compose logs
  • Confirm Ollama is running (GPU path)

    ollama list
  • Verify environment variables – ensure OLLAMA_BASE_URL (Path A) or ANTHROPIC_API_KEY (Path B) is set correctly in .env.

  • GPU not being used?

    • Confirm GPU drivers are installed on the host.
    • Ensure Docker has GPU access enabled (e.g., --gpus all flag or Docker Desktop GPU settings).
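A quick two‑stage check can separate driver problems from Docker passthrough problems. This assumes an NVIDIA GPU; the CUDA image tag below is just one example, and any image containing nvidia-smi works:

```shell
# Stage 1: can the host see the GPU at all?
if ! command -v nvidia-smi >/dev/null 2>&1; then
  echo "nvidia-smi not found: install the host GPU drivers first"
else
  nvidia-smi --query-gpu=name --format=csv,noheader
  # Stage 2: can a container see it? If this also prints the GPU,
  # Docker passthrough is working and the issue is in OpenClaw's config.
  docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
fi
```

If stage 1 works but stage 2 fails, the NVIDIA Container Toolkit is the usual missing piece.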

Next Steps

  • Connect messaging platforms (Slack, Discord, etc.).
  • Enable document retrieval and knowledge bases.
  • Experiment with routing strategies and tool usage.
  • Add observability, metrics, and logging.
  • Tune performance and cost settings for your chosen LLM provider.

Getting OpenClaw operational is the first step; from here you can explore the richer architectural features and build sophisticated AI‑driven workflows.
