How I cut my Cursor/Claude token usage by 90% with a custom 'Dehydrator' tool matrix 🛡️
Source: Dev.to
Introduction
Hey fellow AI‑native devs! 👋
Lately I’ve been feeling the pain of “Context Window Full” errors and escalating API bills while using Cursor and Claude Code. I realized that about 80% of what we feed the AI is just “token slop”—massive JSDoc blocks, redundant logs, and implementation fluff the LLM doesn’t actually need to understand the core logic.
TokenCount Overview
I built TokenCount (and the JustinXai Matrix) as a suite of local‑first tools designed to dehydrate your codebase before the AI reads it.
Features
- CLI (`@xdongzi/ai-context-bundler`): Dehydrate entire repositories in seconds.
- VS Code Extension: A live token skimmer in your sidebar.
- MDC Generator: Instantly generate structured `.cursorrules` files from snippets.
All components run locally—no servers, no tracking, just efficient context handling.
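To make the “dehydration” idea concrete, here’s a minimal sketch of the general technique—stripping comments and blank lines, then estimating the token count with a rough characters-per-token heuristic. This is purely illustrative; it is not TokenCount’s actual implementation, and the `dehydrate` and `estimateTokens` names are my own placeholders:

```javascript
// Illustrative sketch only — not the tool's real algorithm.
// Strips JSDoc/block comments, line comments, and blank lines.
function dehydrate(source) {
  return source
    .replace(/\/\*[\s\S]*?\*\//g, "") // remove block comments / JSDoc
    .replace(/^\s*\/\/.*$/gm, "")     // remove full-line comments
    .split("\n")
    .map((line) => line.trimEnd())
    .filter((line) => line.trim() !== "")
    .join("\n");
}

// Crude heuristic: roughly 4 characters per token for English/code.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

const bloated = `/**
 * Adds two numbers.
 * @param {number} a - first operand
 * @param {number} b - second operand
 * @returns {number}
 */
function add(a, b) {
  // return the sum
  return a + b;
}
`;

const lean = dehydrate(bloated);
console.log(estimateTokens(bloated), "tokens ->", estimateTokens(lean), "tokens");
```

Real tools go much further (semantic skeletons, AST-aware pruning, proper tokenizers like BPE), but even this naive pass shows why comment-heavy files dominate your context budget.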
Results
Running TokenCount on a heavy React component:
| Before | After | Savings |
|---|---|---|
| 1,248 tokens (bloated with boilerplate) | 12 tokens (pure semantic skeleton) | 99% reduction 🤯 |
Launch Information
- The project launches today on Product Hunt.
- Early‑bird Pro Pass is currently 50% off.
Support the launch (launching in 4 hours):
https://www.producthunt.com/products/tokencount-context-bundler
Call to Action
I’d love to hear how you manage your context bloat. What’s your record for saving tokens? Let me know in the comments!