How I cut my Cursor/Claude token usage by 90% with a custom 'Dehydrator' tool matrix 🛡️

Published: April 20, 2026 at 11:08 PM EDT
2 min read
Source: Dev.to

Introduction

Hey fellow AI‑native devs! 👋
Lately I’ve been feeling the pain of “Context Window Full” and escalating API bills while using Cursor and Claude Code. I realized that about 80% of what we feed into the AI is just “token slop”: massive JSDocs, redundant logs, and implementation fluff that the LLM doesn’t actually need in order to understand the core logic.
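To make “dehydration” concrete, here is a minimal sketch of the idea — my own illustration, not TokenCount’s actual algorithm. It strips JSDoc blocks, full-line `//` comments, and `console.log` noise from a source string, leaving only the semantic skeleton:

```javascript
// Illustrative sketch only -- NOT TokenCount's real implementation.
// Drops block comments (incl. JSDoc), full-line // comments, and
// console.log statements, then collapses the blank lines left behind.
function dehydrate(source) {
  return source
    .replace(/\/\*[\s\S]*?\*\//g, "")               // /* ... */ and /** ... */ blocks
    .replace(/^\s*\/\/.*$/gm, "")                   // full-line // comments
    .replace(/^\s*console\.log\(.*\);?\s*$/gm, "")  // log statements
    .split("\n")
    .filter((line) => line.trim() !== "")           // remove emptied lines
    .join("\n");
}

const bloated = `
/**
 * Adds two numbers.
 * @param {number} a
 * @param {number} b
 */
function add(a, b) {
  console.log("adding", a, b);
  return a + b;
}
`;

console.log(dehydrate(bloated));
// keeps: function add(a, b) { return a + b; }
```

Even this crude pass shrinks the snippet dramatically, since most of its characters were documentation and logging rather than logic.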

TokenCount Overview

I built TokenCount (and the JustinXai Matrix) as a suite of local‑first tools designed to dehydrate your codebase before the AI reads it.

Features

  • CLI (@xdongzi/ai-context-bundler): Dehydrate entire repositories in seconds.
  • VS Code Extension: A live token skimmer in your sidebar.
  • MDC Generator: Instantly generate structured .cursorrules from snippets.

All components run locally—no servers, no tracking, just efficient context handling.

Results

Running TokenCount on a heavy React component:

| Before | After | Savings |
| --- | --- | --- |
| 1,248 tokens (bloated with boilerplate) | 12 tokens (pure semantic skeleton) | 92% reduction 🤯 |

Launch Information

  • The project launches today on Product Hunt.
  • Early‑bird Pro Pass is currently 50% off.

Support the launch (launching in 4 hours):
https://www.producthunt.com/products/tokencount-context-bundler

Call to Action

I’d love to hear how you manage your context bloat. What’s your record for saving tokens? Let me know in the comments!
