Configure Local LLM with OpenCode

Published: January 16, 2026 at 01:49 PM EST
2 min read
Source: Dev.to

Adding a custom OpenAI‑compatible endpoint to OpenCode

OpenCode does not currently expose a simple “bring your own endpoint” option in its UI; instead, it ships with a predefined list of cloud providers. It does, however, fully support OpenAI‑compatible APIs, so you can plug in any compatible endpoint (e.g., vLLM, LM Studio, Ollama with a proxy, or your own custom server) via its configuration files.

This guide shows how to wire up a local vLLM server as a provider; the same approach works for any OpenAI‑compatible endpoint.

Prerequisites

  • OpenCode installed and running
  • A running OpenAI‑compatible endpoint (e.g., a local vLLM server at http://<host>:8000/v1)

vLLM exposes a /v1 API that matches OpenAI’s Chat Completions API, making it an ideal drop‑in backend.
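
If you don't already have a backend running, here is a minimal sketch of starting one with vLLM (assumptions: vLLM is installed, and the model name and port are simply the example values used later in this guide):

# Serve the model via vLLM's OpenAI-compatible API on port 8000.
# --served-model-name controls the model ID the API reports; here it matches the ID used in this guide.
vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct \
  --port 8000 \
  --served-model-name Qwen3-Coder-30B-A3B-Instruct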

1. Store provider authentication details

OpenCode keeps authentication data in ~/.local/share/opencode/auth.json.

If the file does not exist, create it and add the following entry:

{
  "vllm": {
    "type": "api",
    "key": "sk-local"
  }
}
  • vLLM does not require an API key, but OpenCode expects one to be present. Any placeholder value works (e.g., sk-local).
  • If auth.json already exists, merge the vllm block into the existing JSON.
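
For illustration, a merged auth.json with one pre-existing entry might look like the following (the some-other-provider block is purely hypothetical; keep whatever entries you already have unchanged):

{
  "some-other-provider": {
    "type": "api",
    "key": "sk-..."
  },
  "vllm": {
    "type": "api",
    "key": "sk-local"
  }
}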

2. Define the provider configuration

Create (or edit) ~/.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "vllm": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "vLLM (local)",
      "options": {
        "baseURL": "http://100.108.174.26:8000/v1"
      },
      "models": {
        "Qwen3-Coder-30B-A3B-Instruct": {
          "name": "My vLLM model"
        }
      }
    }
  },
  "model": "vllm/Qwen3-Coder-30B-A3B-Instruct",
  "small_model": "vllm/Qwen3-Coder-30B-A3B-Instruct"
}

Key fields

  • npm – Set to @ai-sdk/openai-compatible to tell OpenCode to treat this provider as OpenAI‑compatible.
  • options.baseURL – Must point to the /v1 endpoint of your server.
  • models – Each key must exactly match the model ID exposed by the backend (see the check below).
  • model / small_model – Set the default model used by OpenCode.
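
If you are unsure which model ID your backend exposes, you can ask it directly; the sketch below uses the example baseURL from the configuration above:

# List the models the backend serves; each "id" under "data" is a valid key for "models".
curl -s http://100.108.174.26:8000/v1/models
# The response should include something like: "id": "Qwen3-Coder-30B-A3B-Instruct"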

3. Restart OpenCode

If OpenCode is already running, restart it to load the new configuration.
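
Optionally, before moving on, you can confirm the backend answers a chat completion directly (the host, key, and model ID below are the example values from this guide):

curl http://100.108.174.26:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-local" \
  -d '{"model": "Qwen3-Coder-30B-A3B-Instruct", "messages": [{"role": "user", "content": "Say hello"}]}'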

4. Use the custom provider

You can now select your custom provider and model via the /model command (or the UI selection list). The entry “vLLM (local) – My vLLM model” will appear among the available options.
