I Built a Cursor Plugin to Track My Team's AI Spend From the IDE

Published: February 19, 2026
5 min read
Source: Dev.to

A Cursor MCP server for enterprise usage and spending data

cursor‑usage is a Cursor plugin (also works with Claude Code) that exposes the full Cursor Enterprise Admin and Analytics APIs through the Model Context Protocol (MCP).
If you’re not familiar with MCP, it’s the standard that lets AI agents call external tools. In this case, the “tools” are your team’s spending and usage data.
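
For context, an MCP tool invocation is a JSON-RPC 2.0 request. A client asking this server for usage data would send something shaped roughly like the following (the tool name and arguments here are illustrative, not the plugin's exact schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_daily_usage",
    "arguments": { "startDate": "2026-02-01", "endDate": "2026-02-07" }
  }
}
```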

You install the plugin, set your API key, and start asking questions in natural language.

What’s inside

  • MCP server – 15 tools covering team members, spending, daily usage, billing groups, per‑request events, DAU, model adoption, agent edits, tabs, MCP usage, and more.
  • Two skills – teach the agent how to interpret Cursor Enterprise data correctly and how to optimise AI costs.
  • Commands – quick‑access shortcuts:
    • /usage-report
    • /spend-check
    • /model-audit
  • Composite tools – e.g. get_team_overview and get_user_deep_dive combine multiple API calls into a single useful answer.
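
A composite tool is just a function that fans out to several endpoints and merges the results into one answer. Here is a minimal sketch of the idea behind get_team_overview, with the individual fetchers injected as parameters; the field names and signatures are assumptions for illustration, not the plugin's actual API:

```typescript
// Sketch of a composite "team overview" tool: fan out to several
// data sources concurrently, then merge into one answer the agent can use.
// Fetcher signatures and result fields are illustrative.
type Member = { email: string };
type Spend = { email: string; spendCents: number; includedSpendCents: number };

async function getTeamOverview(
  fetchMembers: () => Promise<Member[]>,
  fetchSpend: () => Promise<Spend[]>,
) {
  // Run the underlying calls in parallel instead of sequentially.
  const [members, spend] = await Promise.all([fetchMembers(), fetchSpend()]);
  // Aggregate true overage across the team (spend beyond the included allocation).
  const totalOverageCents = spend.reduce(
    (sum, s) => sum + Math.max(0, s.spendCents - s.includedSpendCents),
    0,
  );
  return { memberCount: members.length, totalOverageCents };
}
```

Bundling the calls this way means the agent makes one tool call instead of four, which is where the token savings come from.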

Installation (one‑liner)

/add-plugin cursor-usage

Manual MCP‑server setup (optional)

Add the following to your Cursor config:

```json
{
  "mcpServers": {
    "cursor-usage": {
      "command": "npx",
      "args": ["-y", "cursor-usage-mcp"],
      "env": {
        "CURSOR_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

Understanding the Cursor Enterprise API gotchas

Anyone can call the REST endpoints – the real work is figuring out what the numbers actually mean. Below are the most common pitfalls I ran into.

| Gotcha | Why it matters | Correct interpretation |
|---|---|---|
| `totalLinesAdded` is not an AI productivity metric | It mixes manual edits, tab completions, and agent-generated code. | Use `acceptedLinesAdded` for AI-generated code, but remember that auto-applied changes in agent mode don't appear in the acceptance count. |
| `spendCents` includes the subscription amount | A user may show $50 spend with $40 `includedSpendCents`; only $10 is actual overage. | Subtract `includedSpendCents` from `spendCents` to get true over-usage. |
| Model switches can 10× daily spend | Premium models (Opus, GPT-5) cost 10-50× more per request than standard models (Sonnet, GPT-4o). | Spot users who switched models; a single day on Opus can look like an anomaly. |
| Spend limits are often misunderstood | Setting a hard limit of $0 means "no overage allowed," not "no usage allowed." | Users can still consume their included allocation. |
| Acceptance-rate formula is misleading | `acceptedLinesAdded / totalLinesAdded` counts manual edits in the denominator. | Use `totalAccepts / totalApplies` for a true AI acceptance rate (healthy teams: 40-70%). |
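
The two corrected metrics translate directly into code. A minimal sketch, assuming the API fields named above come back as plain numbers:

```typescript
// True overage: subtract the subscription's included allocation.
// Clamped at zero so users under their allocation don't show negative spend.
function trueOverageCents(spendCents: number, includedSpendCents: number): number {
  return Math.max(0, spendCents - includedSpendCents);
}

// True AI acceptance rate: accepts over applies, not lines over lines,
// so manual edits never land in the denominator.
function acceptanceRate(totalAccepts: number, totalApplies: number): number {
  return totalApplies === 0 ? 0 : totalAccepts / totalApplies;
}
```

With the table's example, trueOverageCents(5000, 4000) yields 1000 cents: the $10 of actual overage hiding inside a $50 spend figure.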

These gotchas are encoded in the plugin’s skills, so the agent automatically accounts for them before looking at raw data. When someone asks “Are we getting value from AI?” the agent checks acceptance rates and cross‑references model costs instead of just reporting line counts.


Architecture: why a local MCP server, not a hosted service

| Decision | Reason |
|---|---|
| STDIO MCP (not HTTP) | Runs locally on the user's machine; no data leaves the network. You provide your own API key, keeping sensitive spend and usage patterns private. |
| Zod validation on all inputs | Every tool validates arguments with Zod schemas (ISO dates, valid emails, bounded page sizes). Prevents garbage being sent to the API. |
| Composite tools for common queries | `get_team_overview` bundles four endpoints into one call; `get_user_deep_dive` does the same for a single user. Saves tokens and reduces agent confusion. |
| Cross-platform from day one | Works as a Cursor Marketplace plugin, a Claude Code plugin, or a standalone `npx` server. The MCP server is identical regardless of client. |
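
To illustrate the validation layer, here is a hand-rolled version of the kinds of checks the Zod schemas enforce (ISO dates, valid emails, bounded page sizes). The plugin itself uses Zod; these exact rules and bounds are assumptions written in plain TypeScript so the sketch stands alone:

```typescript
// Validate tool arguments before they ever reach the API.
// Mirrors the checks described above; not the plugin's actual schemas.
function validateArgs(args: { date: string; email: string; pageSize: number }): string[] {
  const errors: string[] = [];
  // ISO calendar date, e.g. "2026-02-01"
  if (!/^\d{4}-\d{2}-\d{2}$/.test(args.date)) {
    errors.push("date must be ISO format (YYYY-MM-DD)");
  }
  // Loose email shape check: something@something.tld
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(args.email)) {
    errors.push("invalid email");
  }
  // Bounded page size keeps responses small and token-friendly.
  if (!Number.isInteger(args.pageSize) || args.pageSize < 1 || args.pageSize > 100) {
    errors.push("pageSize must be an integer in [1, 100]");
  }
  return errors;
}
```

Rejecting malformed arguments locally means the agent gets an immediate, specific error instead of a confusing API failure.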

From quick questions to a full AI‑cost‑management dashboard

The plugin shines for ad‑hoc queries, but the Cursor API only retains analytics for the last 30 days, and it can’t:

  • Perform long‑term anomaly detection in a chat window
  • Set up automated alerts

For those needs, check out cursor‑usage‑tracker – an open‑source dashboard that adds:

  • Automated data collection
  • Three‑layer anomaly detection (thresholds, z‑score, trend analysis)
  • Slack & email alerts
  • Incident lifecycle tracking (MTTD/MTTI/MTTR)
  • Web UI with charts

The plugin is the quick entry point; the dashboard handles the heavy lifting when you outgrow the plugin’s limits. They can be used together or separately.
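
For a sense of what the dashboard's z-score layer does, here is a minimal sketch: flag the latest day's spend when it sits more than k standard deviations from the mean of the preceding window. The threshold and windowing are assumptions, not the dashboard's actual tuning:

```typescript
// Flag `latest` if it deviates more than `k` standard deviations
// from the mean of the preceding history window.
function isSpendAnomaly(history: number[], latest: number, k = 3): boolean {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance = history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const std = Math.sqrt(variance);
  if (std === 0) return latest !== mean; // flat history: any change stands out
  return Math.abs(latest - mean) / std > k;
}
```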


Getting started with cursor‑usage

  1. Install

    /add-plugin cursor-usage

    (or use the manual MCP setup shown above)

  2. Configure – set your Cursor Enterprise Admin API key (CURSOR_API_KEY).

  3. Ask – start querying in natural language, e.g.

• “Why did our spend jump 40% last Tuesday?”
    • “Who switched to Opus and tripled their daily cost?”

Enjoy instant, context‑aware answers to your AI‑spend questions—right inside the IDE.

More questions to try:

  • “How much did my team spend this week?”
  • “Who’s using the most expensive models?”
  • “Run a model audit”

The skills and commands are included automatically with the plugin install.

What’s next

  • Submitted to the Cursor Marketplace
  • Planning to submit the analysis skill to `anthropics/skills`
  • Adding more composite tools based on feedback
  • Exploring the newer per‑user analytics endpoints

If you’re managing a Cursor Enterprise team and want to try it out, I’d love to hear what queries you find most useful.

Links

  • Plugin:
  • npm package:
  • Dashboard: