Building an AI-Powered CRM Chat Assistant with n8n and OpenRouter

Published: March 10, 2026 at 06:50 AM EDT
3 min read
Source: Dev.to

Build a No‑Code AI CRM Assistant in One Day (n8n + Claude + OpenRouter)

What if your team could query a CRM just by asking a question?
Instead of navigating dashboards or exporting CSV files, imagine typing:

“Show me deals closing this month.”

…and getting the answer instantly.

In this article I’ll show how I built a no‑code AI CRM assistant in a single day that lets employees query a CRM using natural language. The entire system runs without writing a single line of application code.

Tech Stack

  • n8n – self‑hosted workflow automation
  • OpenRouter – AI model API gateway
  • CRM – any CRM with a REST API
  • Docker – container runtime
  • Claude Sonnet 4.6 (via OpenRouter) – LLM powering the agent

Architecture

[Employee] → n8n Chat UI → [AI Agent (Claude via OpenRouter)]
                                ├── Tool: Search API
                                └── Tool: Search API
                                        ↓
                                    [CRM API]
                                        ↓
                        [AI generates response in Japanese]
                                        ↓
                          [Employee receives answer]

The key idea is to let the AI agent decide which CRM API to call based on the user’s natural‑language query.

Step 1 — Run n8n with Docker

Spinning up n8n locally takes only one command:

docker run -d \
  --name n8n \
  --restart always \
  -p 5678:5678 \
  -e WEBHOOK_URL=http://YOUR_IP:5678/ \
  -e N8N_EDITOR_BASE_URL=http://YOUR_IP:5678/ \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n
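
If you prefer a declarative setup, the same container can be expressed as a Compose file (a sketch equivalent to the command above; `YOUR_IP` remains a placeholder for your LAN address):

```yaml
services:
  n8n:
    image: n8nio/n8n
    container_name: n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - WEBHOOK_URL=http://YOUR_IP:5678/
      - N8N_EDITOR_BASE_URL=http://YOUR_IP:5678/
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:
```

Run it with `docker compose up -d`.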

Key Learnings

  • Authentication has changed – older guides reference N8N_BASIC_AUTH_*, which are now deprecated. n8n now uses email‑based account setup through the web UI on first launch.
  • LAN access requires correct environment variables – without setting WEBHOOK_URL and N8N_EDITOR_BASE_URL, the chat UI may call localhost, causing CORS errors for other users on the network.

Step 2 — Building the AI Agent

Instead of a simple pipeline like:

Chat → HTTP Request → AI

I built a tool‑enabled AI Agent that lets the LLM choose the correct CRM endpoint autonomously.

Workflow structure:

Chat Trigger → AI Agent
                ├── Tool: search
                └── Tool: search

Each tool calls a specific CRM endpoint. The query string the agent chooses is injected into the request with an n8n expression, for example:

{{ $json.query }}
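
Under the hood, the AI Agent node sends an OpenAI-compatible chat completion request to OpenRouter with each tool declared as a function schema, and the model picks which one to call. A sketch of that payload (the model slug and the `search_deals` tool name are illustrative assumptions, not taken from the actual workflow):

```shell
# Build the request body the agent POSTs to
# https://openrouter.ai/api/v1/chat/completions (OpenAI-compatible schema).
cat > payload.json <<'EOF'
{
  "model": "anthropic/claude-sonnet-4.5",
  "messages": [
    { "role": "system", "content": "You are a CRM assistant. Answer in Japanese." },
    { "role": "user", "content": "Show me deals closing this month." }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "search_deals",
        "description": "Search CRM deals. Always pass a filter.",
        "parameters": {
          "type": "object",
          "properties": { "query": { "type": "string" } },
          "required": ["query"]
        }
      }
    }
  ]
}
EOF
# Sanity-check that the payload is valid JSON
python3 -m json.tool payload.json > /dev/null && echo "payload OK"
```

n8n assembles this request for you; the sketch is only to show what the "tool-enabled agent" pattern looks like on the wire.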

Step 3 — The System Prompt (The Most Important Part)

The system prompt determines whether the agent works reliably. Craft it carefully to:

  • Define the agent’s role and capabilities.
  • List available tools and their expected inputs/outputs.
  • Impose guardrails (e.g., “Never return raw data without summarizing”).

(The exact prompt text is omitted for brevity.)
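
To make the three points above concrete, here is a hypothetical sketch of what such a prompt could look like. This is not the author's actual prompt, just an illustration of the structure:

```
You are a CRM assistant for internal staff. Always answer in Japanese.

Tools:
- search: queries the CRM's search API. Input: a non-empty "query" string.
  Output: JSON records.

Rules:
- Never call a tool with an empty query.
- Never return raw JSON; summarize results in plain language.
- If no records match, say so instead of guessing.
```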

Step 4 — Sharing on LAN

Find your IP address (on macOS):

ipconfig getifaddr en0

Share the chat URL with teammates:

http://YOUR_IP:5678/webhook/xxxxx/chat

Enable Basic Auth for security.

Challenges & Lessons Learned

Token Limit Explosion

Querying the CRM without filters returned the entire dataset, producing a single request of more than 8 million tokens.

Solution

  • Enforce search conditions.
  • Block empty queries.
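
The "block empty queries" guardrail boils down to a trivial check before the HTTP call. A sketch of the logic as a shell function (in the real workflow this lives in the tool description and an IF node, not in a script; names are illustrative):

```shell
#!/bin/sh
# Guardrail sketch: refuse CRM searches with no filter, and cap page size
# so the agent can never pull the whole dataset in one call.
guard_query() {
  # Strip all whitespace to catch blank-but-nonempty inputs
  q=$(printf '%s' "$1" | tr -d '[:space:]')
  if [ -z "$q" ]; then
    echo "blocked: empty query"
    return 1
  fi
  echo "ok: q=$1&limit=50"
}

guard_query "   " || true                 # prints "blocked: empty query"
guard_query "deals closing this month"    # passes through with a limit
```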

Cost

| Component                  | Cost         |
| -------------------------- | ------------ |
| n8n (self-hosted)          | $0           |
| OpenRouter (Claude Sonnet) | ~$26 / month |
| Infrastructure             | $0           |
Supports roughly 10 users × 5 queries/day.
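
Back of the envelope: 10 users × 5 queries × 30 days is about 1,500 queries a month, so ~$26/month works out to under two cents per query:

```shell
# Rough per-query cost at the stated usage (assumes a 30-day month)
awk 'BEGIN { q = 10 * 5 * 30; printf "%d queries/mo, $%.3f per query\n", q, 26 / q }'
# prints "1500 queries/mo, $0.017 per query"
```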

Conclusion

Using n8n + OpenRouter, I built a working AI CRM assistant in less than a day.
The biggest takeaway: system prompt design is critical. With the right guardrails, you can turn any API‑driven system into a natural‑language interface.
