How to Set Up OpenClaw With LMStudio
Source: Dev.to
Introduction
OpenClaw has generated a lot of buzz, evolving from Clawdbot → Moltbot → OpenClaw. Most tutorials rely on external APIs (OpenAI, Anthropic, Google, etc.), which can become expensive. This guide shows how to run OpenClaw locally using LMStudio on a Linux‑based Lenovo ThinkPad.
Installing LMStudio
- Install LMStudio on your Linux system. If you need help, a YouTube tutorial can guide you through the installation process.
Selecting a Model
Because of limited hardware resources, a quantized version of GLM‑4.7 Flash was chosen. After downloading the model, LMStudio’s chat interface responded to a simple “hello” in about 50 seconds, which is slow but acceptable for initial testing.
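Before wiring OpenClaw to LMStudio, it helps to confirm the model is actually being served. LMStudio exposes an OpenAI-compatible server (by default at http://127.0.0.1:1234/v1), and a GET on /v1/models lists the loaded models. The sketch below parses a trimmed example of that response; the exact fields may vary by LMStudio version, but `data[].id` is the part OpenClaw's config must match.

```python
import json

# Trimmed example of what LMStudio's OpenAI-compatible endpoint returns
# (GET http://127.0.0.1:1234/v1/models). Replace this sample with a real
# request once the LMStudio server is running.
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "glm-4.7-flash", "object": "model", "owned_by": "organization_owner"}
  ]
}
""")

# The "id" values here are what OpenClaw's "models[].id" must match.
loaded_ids = [m["id"] for m in sample["data"]]
print(loaded_ids)
```

If the id you see differs (quantized builds sometimes carry a suffix), use that exact string in openclaw.json below.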
Installing OpenClaw
curl -fsSL https://openclaw.bot/install.sh | bash
During installation, the manual configuration wizard was used, but it left some required fields (skills, model provider, token, etc.) unset, so the configuration file had to be edited by hand.
Editing openclaw.json
Open ~/.openclaw/openclaw.json (or the path shown by the installer) and add the following sections. Adjust paths and values as needed for your environment.
{
  "meta": {
    "lastTouchedVersion": "2026.1.29",
    "lastTouchedAt": "2026-01-31T02:01:52.403Z"
  },
  "wizard": {
    "lastRunAt": "2026-01-31T02:01:52.399Z",
    "lastRunVersion": "2026.1.29",
    "lastRunCommand": "onboard",
    "lastRunMode": "local"
  },
  "models": {
    "providers": {
      "lmstudio": {
        "baseUrl": "http://127.0.0.1:1234/v1",
        "apiKey": "lm-studio",
        "api": "openai-responses",
        "models": [
          {
            "id": "glm-4.7-flash",
            "name": "GLM-4.7 Flash",
            "reasoning": true,
            "input": ["text"],
            "cost": {
              "input": 0,
              "output": 0
            },
            "contextWindow": 20000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "lmstudio/glm-4.7-flash"
      },
      "workspace": "/home/Ubuntu/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      }
    }
  },
  "messages": {
    "ackReactionScope": "group-mentions"
  },
  "commands": {
    "native": "auto",
    "nativeSkills": "auto"
  },
  "hooks": {
    "internal": {
      "enabled": true,
      "entries": {
        "session-memory": {
          "enabled": true
        }
      }
    }
  },
  "gateway": {
    "port": 18789,
    "bind": "loopback",
    "mode": "local",
    "auth": {
      "mode": "token",
      "token": "generate-your-token"
    },
    "tailscale": {
      "mode": "off",
      "resetOnExit": false
    }
  },
  "skills": {
    "install": {
      "nodeManager": "npm"
    }
  }
}
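A hand-edited JSON file is easy to break, so it is worth sanity-checking before running OpenClaw. The sketch below is an informal check mirroring the sections above, not an official OpenClaw validator; the keys it inspects are the ones this guide edits.

```python
import json

def check_config(config: dict) -> list:
    """Return a list of problems found in an openclaw.json dict.

    These checks mirror the sections shown above; this is a quick
    local sanity check, not an official OpenClaw validator.
    """
    problems = []
    providers = config.get("models", {}).get("providers", {})
    if "lmstudio" not in providers:
        problems.append("models.providers.lmstudio is missing")
    primary = (config.get("agents", {}).get("defaults", {})
               .get("model", {}).get("primary"))
    if primary != "lmstudio/glm-4.7-flash":
        problems.append("agents.defaults.model.primary does not point at lmstudio/glm-4.7-flash")
    token = config.get("gateway", {}).get("auth", {}).get("token")
    if token in (None, "", "generate-your-token"):
        problems.append("gateway.auth.token still holds the placeholder")
    return problems

# Usage against the real file:
# with open("/home/Ubuntu/.openclaw/openclaw.json") as f:
#     print(check_config(json.load(f)))
```

An empty list means the fields this guide touches are in place; `json.load` failing outright means the JSON itself is malformed (a stray comma is the usual culprit).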
Generating a Token
Create a token for the gateway authentication:
openssl rand -hex 20
Replace "generate-your-token" in the gateway.auth.token field with the generated value.
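If you prefer to stay in Python (or do not have openssl handy), the standard-library `secrets` module produces an equivalent token: 20 random bytes rendered as 40 hex characters, the same shape as `openssl rand -hex 20`.

```python
import secrets

# 20 random bytes as 40 hex characters, equivalent to `openssl rand -hex 20`.
token = secrets.token_hex(20)
print(token)
```

Paste the printed value into gateway.auth.token.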
Verifying the Installation
Run the setup verification command:
openclaw setup
Expected output:
Config OK: ~/.openclaw/openclaw.json
Workspace OK: ~/.openclaw/workspace
Sessions: OK: ~/.openclaw/agents/main/sessions
Starting the Gateway
Check the gateway status:
openclaw gateway status
You should see a line similar to:
Listening: 127.0.0.1:18789
This confirms that OpenClaw is listening locally on port 18789.
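If `openclaw gateway status` reports something unexpected, a plain TCP check separates "gateway not running" from "gateway misconfigured". This is a generic port probe, not an OpenClaw command; 18789 is the port from the config above.

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 18789 is the gateway port from openclaw.json; change it if you
# picked a different one.
if port_open("127.0.0.1", 18789):
    print("Gateway reachable on 127.0.0.1:18789")
else:
    print("Nothing listening on 127.0.0.1:18789 -- is the gateway running?")
```

If the port is open but `openclaw gateway status` still complains, the problem is more likely the auth token than the network.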
Next Steps
At this point OpenClaw is installed and reachable via the local gateway. Future work includes:
- Interacting with the bot through the OpenClaw CLI or a compatible client.
- Adding custom skills or agents.
- Monitoring performance and adjusting model parameters as needed.
Stress‑Testing AI Agents (Optional)
If you develop AI agents for internal or commercial use, you can perform a quick stress test with the Zeroshot tool:
zeroshot scan --target-url https://your-target-url --max-attacks 20
The tool can run 50 or more attacks per scan, drawing on a library of 1,000+ attack vectors across various AI system categories. Visit the Zeroshot website for a free trial.