Emulating the Claude Code Backend for Databricks LLM Models (with MCP, Git Tools, and Prompt Caching)

Published: December 4, 2025 at 01:40 AM EST
3 min read
Source: Dev.to

Introduction

Claude Code has quickly become one of my favorite tools for repo‑aware AI workflows. It understands your codebase, navigates files, summarizes diffs, runs tools, and integrates with Git—all through a simple CLI.

The problem: the official Claude Code backend only works with Anthropic‑hosted models. If you want to use Databricks‑hosted Claude models, route requests through Azure’s Anthropic endpoint, add local tools and Model Context Protocol (MCP) servers, enable prompt caching, or run your own backend for experimentation, you’re out of luck.

Solution: Lynkr, a self‑hosted Claude Code‑compatible proxy.
GitHub: https://github.com/vishalveerareddy123/Lynkr

What Lynkr Does

Lynkr is an HTTP proxy that:

  • Emulates the Claude Code backend.
  • Forwards requests to Databricks or Azure Anthropic.
  • Wires in workspace tools, Git helpers, prompt caching, and MCP servers.

You keep using the regular Claude Code CLI, but point it at your own backend:

Claude Code CLI → Lynkr → Databricks / Azure Anthropic / MCP tools

Core Features

Provider Adapters

  • Built‑in support for two upstream providers: Databricks (default) and Azure Anthropic.
  • Requests are normalized so the CLI receives standard Claude‑style responses.
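
What "normalized" means in practice: whatever shape the upstream returns, the CLI should see a standard Claude-style message. Here is a rough sketch of that mapping; the types and field names are illustrative assumptions, not Lynkr's actual adapter interface.

// Hypothetical sketch: turning an upstream completion into a
// Claude-style Messages API response. Shapes are assumptions.
interface ClaudeStyleResponse {
  id: string;
  type: "message";
  role: "assistant";
  content: Array<{ type: "text"; text: string }>;
  model: string;
  stop_reason: string | null;
}

interface UpstreamCompletion {
  requestId: string;
  modelName: string;
  outputText: string;
  finishReason?: string;
}

function normalizeToClaude(upstream: UpstreamCompletion): ClaudeStyleResponse {
  return {
    id: upstream.requestId,
    type: "message",
    role: "assistant",
    content: [{ type: "text", text: upstream.outputText }],
    model: upstream.modelName,
    stop_reason: upstream.finishReason ?? null,
  };
}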

Repo Intelligence

  • Builds a lightweight SQLite index of your workspace, capturing:
    • Symbol definitions & references
    • Framework & dependency hints
    • Language mix detection
    • Lint/test config discovery
  • Generates a CLAUDE.md summary that gives the model structured context about the project.
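
For a rough sense of what such an index can hold, the sketch below creates a minimal symbol table with better-sqlite3; the schema, file path, and library choice are my assumptions, not Lynkr's actual implementation.

// Hypothetical symbol index; Lynkr's real schema may differ.
import Database from "better-sqlite3";

const db = new Database("lynkr-index.db"); // placeholder path

db.exec(`
  CREATE TABLE IF NOT EXISTS symbols (
    name   TEXT NOT NULL,    -- e.g. "handleRequest"
    kind   TEXT NOT NULL,    -- "function", "class", ...
    file   TEXT NOT NULL,    -- path relative to the workspace root
    line   INTEGER NOT NULL,
    is_def INTEGER NOT NULL  -- 1 = definition, 0 = reference
  );
  CREATE INDEX IF NOT EXISTS idx_symbols_name ON symbols(name);
`);

// Look up all references to a symbol by name.
const refs = db
  .prepare("SELECT file, line FROM symbols WHERE name = ? AND is_def = 0")
  .all("handleRequest");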

Git Workflow Integration

  • Git helpers similar to Claude Code:
    • status, diff, stage, commit, push, pull
    • Diff review summaries
    • Release‑note generation
  • Policy guards (environment variables):
    • POLICY_GIT_ALLOW_PUSH
    • POLICY_GIT_REQUIRE_TESTS
    • POLICY_GIT_TEST_COMMAND
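
A minimal sketch of how these guards might gate a push, assuming the flags take the value "true" (as in the troubleshooting table below); the function is illustrative, not Lynkr's code.

// Illustrative policy-guarded push; not Lynkr's actual logic.
import { execSync } from "node:child_process";

function guardedPush(): void {
  if (process.env.POLICY_GIT_ALLOW_PUSH !== "true") {
    throw new Error("Push blocked: set POLICY_GIT_ALLOW_PUSH=true to allow it");
  }
  if (process.env.POLICY_GIT_REQUIRE_TESTS === "true") {
    // Run the configured test command first; execSync throws if it fails.
    const testCmd = process.env.POLICY_GIT_TEST_COMMAND ?? "npm test";
    execSync(testCmd, { stdio: "inherit" });
  }
  execSync("git push", { stdio: "inherit" });
}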

Prompt Caching

  • Local LRU+TTL cache keyed by prompt signature:
    • Speeds up repeated prompts
    • Reduces Databricks/Azure token usage
    • Avoids re‑running identical analysis steps
  • Tool‑invoking requests automatically bypass the cache to prevent unsafe side effects.
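
The idea is small enough to sketch. The version below keys a Map on a SHA-256 of the request body and evicts in insertion order; it illustrates LRU+TTL caching in general, not Lynkr's implementation.

// Minimal LRU + TTL prompt cache (illustrative only).
// Requests that invoke tools would skip getCached/setCached entirely.
import { createHash } from "node:crypto";

const MAX_ENTRIES = 500;
const TTL_MS = 10 * 60 * 1000; // 10 minutes

const cache = new Map<string, { value: string; expires: number }>();

function signature(requestBody: unknown): string {
  return createHash("sha256").update(JSON.stringify(requestBody)).digest("hex");
}

function getCached(requestBody: unknown): string | undefined {
  const key = signature(requestBody);
  const entry = cache.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expires) {
    cache.delete(key);
    return undefined;
  }
  // Re-insert to mark as most recently used (Map keeps insertion order).
  cache.delete(key);
  cache.set(key, entry);
  return entry.value;
}

function setCached(requestBody: unknown, value: string): void {
  const key = signature(requestBody);
  if (cache.size >= MAX_ENTRIES) {
    // Evict the least recently used entry (first key in insertion order).
    const oldest = cache.keys().next().value;
    if (oldest !== undefined) cache.delete(oldest);
  }
  cache.set(key, { value, expires: Date.now() + TTL_MS });
}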

MCP Orchestration

  • Automatically discovers MCP manifests, launches servers, wraps them with JSON‑RPC, and exposes all tools back to the assistant.
  • Optional Docker sandboxing isolates MCP tools when needed.
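
Conceptually, "wrapping with JSON-RPC" means speaking the MCP wire format over the server's stdin/stdout. The stripped-down illustration below sends a tools/list request; the server command is a placeholder, and a real client also performs the MCP initialize handshake and proper message framing.

// Illustrative only: a bare JSON-RPC request to an MCP server over stdio.
import { spawn } from "node:child_process";

const server = spawn("node", ["./my-mcp-server.js"]); // placeholder command

server.stdout.on("data", (chunk) => {
  // Responses arrive as newline-delimited JSON-RPC messages on stdout.
  console.log("MCP response:", chunk.toString());
});

server.stdin.write(
  JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list", params: {} }) + "\n"
);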

Workspace Tools

  • Repo indexing & symbol search
  • Diff review
  • Test runner
  • File I/O utilities
  • Lightweight task tracker (TODOs stored in SQLite)

Full Transparency

  • All activity is logged with structured Pino logs, including:
    • Request/response traces
    • Repo indexer events
    • Prompt cache hits/misses
    • MCP registry diagnostics
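
Pino emits newline-delimited JSON, so these events can be filtered with standard log tooling. The sketch below shows what logging a cache hit might look like; the field names are my choice, not Lynkr's actual log schema.

// Illustrative structured logging with Pino; fields are assumptions.
import pino from "pino";

const logger = pino({ name: "lynkr" });

logger.info(
  { event: "prompt_cache", outcome: "hit", signature: "9f2a..." },
  "prompt cache hit"
);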

Architecture Overview

The codebase is intentionally small and hackable—everything lives under src/.

Installing Lynkr

Prerequisites

  • Node.js 18+
  • npm
  • Databricks or Azure Anthropic credentials
  • (Optional) Docker for MCP sandboxing
  • (Optional) Claude Code CLI

Installation options

# From npm
npm install -g lynkr

# Homebrew
brew tap vishalveerareddy123/lynkr
brew install lynkr

# From source
git clone https://github.com/vishalveerareddy123/Lynkr.git
cd Lynkr
npm install

Configuring the Proxy

Set the appropriate environment variables for the upstream provider (Databricks or Azure Anthropic) and any policy flags you need. Example for Azure Anthropic:

export ANTHROPIC_API_KEY=your_key_here
export ANTHROPIC_ENDPOINT=https://anthropic.azure.com/anthropic/v1/messages
export PROVIDER=azure_anthropic
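
For a sense of what the proxy presumably does with these values at startup, here is a hedged sketch of a configuration check. Only PROVIDER, ANTHROPIC_API_KEY, and ANTHROPIC_ENDPOINT are taken from the example above; the validation logic itself is my assumption.

// Hypothetical startup check, not Lynkr's actual code.
const provider = process.env.PROVIDER ?? "databricks"; // Databricks is the default provider

if (provider === "azure_anthropic") {
  for (const name of ["ANTHROPIC_API_KEY", "ANTHROPIC_ENDPOINT"]) {
    if (!process.env[name]) {
      throw new Error(`Missing required environment variable: ${name}`);
    }
  }
}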

Hooking Up Claude Code CLI

Run Lynkr locally (default port 8080) and point Claude Code at it:

lynkr start --port 8080

Configure Claude Code to use the proxy, e.g.:

export CLAUDE_CODE_ENDPOINT=http://localhost:8080

Then run the CLI inside your repository as usual; all tool calls, chat, diffs, and navigation flow through Lynkr.

Example: Calling a Tool

curl -X POST http://localhost:8080/v1/messages \
  -H "Content-Type: application/json" \
  -d '{
        "model": "claude-2.1",
        "messages": [{"role":"user","content":"Summarize the recent diff"}]
      }'
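
The same request from Node.js 18+ (which ships a global fetch) can be handy when scripting against the proxy; the endpoint and payload mirror the curl call above.

// Equivalent request using Node's built-in fetch (Node 18+, run as an ES module).
const response = await fetch("http://localhost:8080/v1/messages", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "claude-2.1",
    messages: [{ role: "user", content: "Summarize the recent diff" }],
  }),
});

console.log(await response.json());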

Troubleshooting Highlights

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| Missing path | Incorrect tool arguments | Verify the file/path you pass to the tool |
| Git commands blocked | POLICY_GIT_ALLOW_PUSH not set | Export POLICY_GIT_ALLOW_PUSH=true |
| MCP server not discovered | Manifest location missing | Ensure MCP manifest files are in the workspace root or configured path |
| Prompt cache not working | Tools used in the request | Tool calls automatically bypass the cache; remove them to test caching |
| Web fetch returns HTML scaffolding | JS execution not supported | Use JSON APIs instead of HTML pages |

Roadmap

  • Per‑file threaded diff comments
  • Risk scoring on diffs
  • LSP bridging for deeper language understanding
  • Declarative “skills” layer
  • Historical coverage and test dashboards

Why I Built This

I love the Claude Code UX, but needed:

  • Full local execution
  • Ability to plug in Databricks and Azure Anthropic
  • Custom tools and MCP servers
  • Full visibility into internal behavior
  • Rapid experimentation without platform constraints

If you’re exploring AI‑assisted development on Databricks or Azure and want more control over your backend, Lynkr may be useful.

GitHub: https://github.com/vishalveerareddy123/Lynkr
Contributions, ideas, and issues are welcome.
