I Built an Open-Source CLI to Compare LLM API Costs in Your Terminal (npx, Zero Install)
Source: Dev.to
If you’ve ever needed to compare costs between GPT‑4o, Claude Sonnet, Gemini, or any other LLM before committing to a model, you know the pain: juggling browser tabs, doing manual math, and relying on outdated blog posts.
llm-costs is a zero‑install CLI that does the heavy lifting instantly, counting tokens with the correct tokenizer and rendering a cost comparison table right in your terminal.
Why I built llm-costs
Every new LLM‑powered project used to start with the same ritual:
- Open the Anthropic pricing page
- Open the OpenAI pricing page
- Open the Google AI pricing page
- Try to compare apples to oranges (different tokenizers)
- Do the math in my head or a spreadsheet
- Realize the reference blog post is months out of date
There had to be a better way.
Quick demo
```shell
npx llm-costs "Build a REST API in Python" --compare
```
The CLI counts your prompt tokens using the actual tokenizer (tiktoken for OpenAI models, character‑based estimation for others) and prints a table such as:
```
Model               Input Cost   Output Cost   Total
──────────────────────────────────────────────────────
deepseek-chat       $0.00003     $0.00008     $0.00011
gemini-flash-2.0    $0.00005     $0.00020     $0.00025
claude-haiku-3-5    $0.00020     $0.00100     $0.00120
gpt-4o-mini         $0.00027     $0.00108     $0.00135
claude-sonnet-4-5   $0.00150     $0.00750     $0.00900
gpt-4o              $0.00375     $0.01500     $0.01875
```
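Under the hood, figures like these come from two steps: count (or estimate) the tokens, then apply per-million-token prices. A minimal sketch, assuming the common ~4-characters-per-token heuristic for the non-tiktoken path and illustrative prices rather than the live values llm-costs fetches:

```typescript
// Step 1: rough token estimate for models without a public tokenizer,
// using the common ~4-characters-per-token heuristic for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Step 2: request cost from token counts and per-million-token prices (USD).
function requestCost(
  inputTokens: number,
  outputTokens: number,
  inputPricePerM: number,
  outputPricePerM: number,
): number {
  return (inputTokens * inputPricePerM + outputTokens * outputPricePerM) / 1_000_000;
}

const prompt = "Build a REST API in Python";
const inTokens = estimateTokens(prompt); // 26 characters -> 7 tokens

// Illustrative prices ($2.50/M input, $10.00/M output) and an assumed
// ~300-token response; the CLI's actual defaults may differ.
console.log(requestCost(inTokens, 300, 2.5, 10.0));
```

The real tool swaps in tiktoken counts for OpenAI models; the arithmetic stays the same.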
Features
Zero install
Run the tool directly with npx or install it globally with npm—no manual setup required.
Multi‑provider support
- 17 models across 6 providers: Anthropic, OpenAI, Google, DeepSeek, Mistral, Cohere.
Auto‑updating prices
- Client-side: on each run, the CLI checks `~/.llm-costs/pricing.json`; if the file is older than 7 days, it fetches fresh data from GitHub (non-blocking, 5 s timeout).
- Server-side: a GitHub Actions workflow runs every Monday, pulls pricing from LiteLLM's aggregate JSON, diffs the result, and opens a PR with a markdown table of changes.
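The client-side refresh boils down to a file-age check. A minimal sketch of that rule (function and constant names here are hypothetical, not taken from the llm-costs source):

```typescript
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

// True if the cached pricing file's last modification time is more
// than seven days before `now` — i.e. it is due for a refresh.
function isCacheStale(mtimeMs: number, nowMs: number): boolean {
  return nowMs - mtimeMs > SEVEN_DAYS_MS;
}

// In the real CLI, a `true` result would gate a background fetch of
// pricing.json from GitHub with a 5-second timeout.
```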
Batch processing
Pipe a file of prompts to get total costs:
```shell
llm-costs batch prompts.txt
```
Budget guard
Set a cost ceiling for CI/CD pipelines:
```shell
llm-costs guard --max 0.10
```
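In a GitHub Actions pipeline this could look something like the step below; the workflow structure is an assumption for illustration, and only the guard command itself comes from llm-costs:

```yaml
# Hypothetical CI step: fail the job if the estimated cost
# exceeds the $0.10 ceiling (guard exits non-zero on breach).
- name: Check LLM prompt budget
  run: npx llm-costs guard --max 0.10
```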
Watch mode
Live‑refresh the cost table as you type your prompt.
MCP server mode
Integrate with Claude Desktop or any MCP‑compatible tool.
Price changelog
Track when costs changed:
```shell
llm-costs changelog --since 30d
```
Budget projections
Estimate future spend:
```shell
llm-costs budget --requests 10000
```
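The projection itself is just per-request cost times request volume. For example, taking the gpt-4o-mini total from the demo table ($0.00135 per request) as an illustrative average:

```typescript
// Project total spend from an average per-request cost (USD).
function projectSpend(costPerRequest: number, requests: number): number {
  return costPerRequest * requests;
}

console.log(projectSpend(0.00135, 10_000).toFixed(2)); // "13.50"
```

So 10,000 requests at that rate would run roughly $13.50.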
Installation & Usage
One‑shot, no install
```shell
npx llm-costs "your prompt here"
```
Global install
```shell
npm install -g llm-costs
```
Compare across all models
```shell
npx llm-costs "your prompt" --compare
```
Check a specific model
```shell
npx llm-costs "your prompt" --model claude-sonnet-4-5
```
Contributing
LLM pricing changes frequently, and the community can help keep llm-costs up to date. PRs to add new models, fix prices, or support additional providers are very welcome.
Links
- GitHub repository
- npm package