How to improve your Claude CoWork experience with MCPs
Source: Dev.to
Claude CoWork Overview
When everyone was busy talking about how good Claude Code is, Anthropic launched Claude CoWork – essentially Claude Code with a much less intimidating interface for automating “fake‑email” jobs.
- Capabilities
- Access to your local file system, connectors, MCPs, and virtually anything that can be executed through the shell.
- Availability
- Research preview in the Claude Desktop app (separate tab) for Max subscribers ($100 / $200 per‑month plans) on macOS.
- Windows support is planned for the future.
How It Works
- Claude CoWork is given access to a folder on your computer.
- Inside a local containerised environment, it mounts the folder, allowing it to read, edit, or create files only in locations you have granted permission to.
“You can trust that it won’t access folders you haven’t explicitly allowed.”
There’s a lot more to say about CoWork, but that will be saved for a separate blog post. Below we focus on using connectors and MCPs to do more than just organise files.
Quick Shortcut: Rube.app
If you don’t want to spend time configuring everything, just use rube.app inside Claude Code.
- Instant access to 900+ SaaS apps (Gmail, GitHub, BitBucket, etc.)
- Zero OAuth and key‑management hassle
- Dynamic tool loading → reduced token usage & better execution
- Create reusable workflows and expose them as tools
Try Rube now for FREE!
Working with MCP Connectors
What Are Claude AI Connectors?
Claude AI Connectors are direct integrations that let Claude access your actual work tools and data. Launched in July 2025, they turn Claude from a “knowledge‑rich” AI into an AI that knows a lot about your world.
- Pre‑built integrations: Gmail, Google Drive, GitHub, Google Calendar.
- Additional MCP servers (local & remote): HubSpot, Snowflake, Figma, Context7, etc.
Enabling a Connector
- Open Settings → Connectors.
- Find the integration you want to enable.
- Click Connect.
- Follow the authentication flow.
Note: Pro, Max, Team, and Enterprise users can add these connectors to Claude or Claude Desktop.
MCP Marketplace
Anthropic hosts an MCP marketplace where you can discover Anthropic‑reviewed tools (both local and remote‑hosted).
| Type | Navigation |
|---|---|
| Desktop / Local MCPs | Desktop → Search Your MCP → Click Install |
| Remote MCPs | Browse Connectors → Web tab → Search your MCPs |
| Custom MCP Server | Add a Custom Connector → Provide MCP name & Server URL → (Optional) OAuth credentials |
Custom MCP Servers – The Interesting Part
You can use any MCP server you prefer:
- Click Add a Custom Connector.
- Provide the MCP name and Server URL.
- (Optional) Add OAuth credentials.
Why MCPs Matter
MCP servers are a force multiplier, making it easy for LLMs to access external data. However, they come with practical limitations:
- Each MCP tool comes with a schema definition (name, parameters, examples).
- More detailed schemas → more reliable execution, but they also consume tokens (the model’s context window is limited, e.g., ~200 k tokens).
- Over‑loading the context with many tool definitions reduces space for actual reasoning.
Example
- GitHub MCP: 40 tools → 8.5 % of a 200 k token window (≈ 17.1 k tokens).
- Linear MCP: 27 tools → similar token cost.
Most MCP clients eagerly load all available tools into the model context, even if the model never calls many of them. This leads to:
- Higher token usage per request.
- Artificial splitting of MCP servers to stay within context limits.
- Reluctance to experiment with new tools (each addition degrades existing interactions).
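To make the overhead concrete, here is a rough sketch of estimating the context cost of eagerly loaded schemas. The 4-characters-per-token heuristic and the schema sizes are illustrative assumptions, not measurements:

```python
# Rough estimate of the context cost of eagerly loading tool schemas.

def estimate_tokens(text: str) -> int:
    """Crude heuristic: ~4 characters per token for English/JSON text."""
    return len(text) // 4

# 40 hypothetical tool schemas, each a moderately detailed JSON definition.
schemas = {
    f"tool_{i}": '{"name": "...", "description": "...", "parameters": {...}}' * 10
    for i in range(40)
}

eager_cost = sum(estimate_tokens(s) for s in schemas.values())
context_window = 200_000
share = 100 * eager_cost / context_window
print(f"Eager loading: ~{eager_cost} tokens ({share:.1f}% of a 200k window)")
```

Every one of those tokens is spent before the model has done any work on the actual task.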
The Real Failure Mode: Results, Not Schemas
Even with perfect schemas, the results returned (logs, DB rows, file lists, JSON blobs, etc.) can flood the model’s context. A single careless response can erase half the conversation history, jeopardising the LLM’s performance.
As the number of MCP tools grows, tool‑selection accuracy drops:
- The model may call a near-match instead of the correct tool.
- It may over-use generic tools.
- It may avoid tools altogether and hallucinate answers.
The root cause is the finite attention budget: the model cannot fully read an ever‑growing list of tool definitions.
Architectural Improvements
1. Load‑On‑Demand Tool Definitions
Instead of loading every tool schema up‑front, load only the tools needed for the current task. This turns the “always‑on” token cost into a “pay‑only‑when‑used” cost, freeing up space for reasoning and improving reliability.
Implementation: Rube – a universal MCP server that dynamically loads tools based on task context.
- Planner tool – creates a detailed plan for a task.
- Search tool – finds and retrieves the required tool definitions.
When the model needs a tool, it requests the specific definition, and only then is that schema injected into the context.
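A minimal sketch of the load-on-demand pattern, with a hypothetical in-memory registry standing in for Rube's (non-public) implementation; all tool names are made up:

```python
# Hypothetical registry of full tool schemas, kept OUTSIDE the model context.
TOOL_REGISTRY = {
    "github_create_issue": {
        "description": "Create an issue in a GitHub repository",
        "parameters": {"repo": "string", "title": "string", "body": "string"},
    },
    "gmail_send_email": {
        "description": "Send an email via Gmail",
        "parameters": {"to": "string", "subject": "string", "body": "string"},
    },
}

def search_tools(query: str) -> list[str]:
    """Search tool: return names of tools whose description matches the query."""
    words = set(query.lower().split())
    return [name for name, schema in TOOL_REGISTRY.items()
            if words & set(schema["description"].lower().split())]

def load_schema(name: str) -> dict:
    """Inject a single schema into the context only when the model asks for it."""
    return TOOL_REGISTRY[name]

# The model first searches, then loads only the one schema it needs.
matches = search_tools("create a GitHub issue")
context_schemas = [load_schema(m) for m in matches]
```

Only the matched schema ever enters the context; the rest of the registry costs zero tokens.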
2. Searchable Tool Catalogue
Don’t rely on the model to scan a long list of tools. Provide a searchable index (e.g., a vector database with hybrid search) that the model can query.
Catalogue entry format
- Tool name
- One‑line purpose
- Key parameters
- Few example queries
The model searches this catalogue with natural language, retrieves the most relevant tool(s), and then loads the exact schema needed.
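For illustration, a toy catalogue with keyword-overlap scoring. A production system would use a vector database with hybrid search, as noted above; the entries and names here are invented:

```python
# Hypothetical searchable tool catalogue: one compact entry per tool,
# ranked by keyword overlap with the query.

CATALOGUE = [
    {"name": "linear_create_ticket", "purpose": "create a ticket in Linear",
     "params": ["team", "title"], "examples": ["file a bug", "open a ticket"]},
    {"name": "slack_post_message", "purpose": "post a message to a Slack channel",
     "params": ["channel", "text"], "examples": ["notify the team", "send to slack"]},
    {"name": "drive_search_files", "purpose": "search files in Google Drive",
     "params": ["query"], "examples": ["find the spec doc"]},
]

def search_catalogue(query: str, k: int = 3) -> list[str]:
    """Rank entries by word overlap with purpose + examples; return top-k names."""
    q = set(query.lower().split())
    def score(entry: dict) -> int:
        text = entry["purpose"] + " " + " ".join(entry["examples"])
        return len(q & set(text.lower().split()))
    ranked = sorted(CATALOGUE, key=score, reverse=True)
    return [e["name"] for e in ranked if score(e) > 0][:k]

print(search_catalogue("post a status message to slack"))
```

The model then loads the full schema only for the top match, keeping the other entries out of the context entirely.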
Summary
- Claude CoWork gives you powerful, containerised access to your local system while keeping permissions explicit.
- Rube.app offers a hassle‑free way to tap into 900+ SaaS apps.
- MCP connectors bridge Claude with your everyday work tools; the marketplace and custom connectors let you extend functionality.
- Token efficiency is critical: load tool definitions on demand and use a searchable catalogue to keep the model’s attention focused.
By adopting these patterns, you can scale the number of MCP tools without sacrificing reliability, experimentation, or performance.
3. Bound Search Results and Offload Large Payloads
- Goal: Return the top 3‑5 matches, then load only those schemas.
- Why:
- Reduces token usage.
- Prevents the model from forgetting earlier goals when large payloads are pasted back into the prompt.
Recommended Pattern
- Store large outputs outside the prompt (e.g., local file, object store, database, temporary cache).
- Return a small summary plus a handle (file path, ID, cursor, pointer).
Key Insight: LLMs excel at file operations. Let the model retrieve only the data it needs instead of forcing it to read massive JSON blobs.
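A sketch of the summary-plus-handle pattern using a temporary file as the external store (an illustrative choice; a database or object store works the same way):

```python
# Large tool outputs go to disk; only a short summary and a handle
# enter the model's context.
import json
import os
import tempfile

def store_result(rows: list[dict]) -> dict:
    """Persist a large result set; return a compact summary and a handle."""
    fd, path = tempfile.mkstemp(suffix=".json")
    with os.fdopen(fd, "w") as f:
        json.dump(rows, f)
    return {"summary": f"{len(rows)} rows stored", "handle": path}

def fetch_rows(handle: str, start: int, count: int) -> list[dict]:
    """Let the model page through the data instead of reading it all at once."""
    with open(handle) as f:
        return json.load(f)[start:start + count]

result = store_result([{"id": i, "status": "ok"} for i in range(10_000)])
first_page = fetch_rows(result["handle"], 0, 5)
```

The 10,000-row payload never touches the prompt; the model sees a one-line summary and pages in only the rows it asks for.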
Tool‑Calling vs. Programmatic Tool‑Calling
| Approach | How it works | Token impact |
|---|---|---|
| Traditional tool calling | Model calls a tool → receives result → reads result → decides next call (repeat) | Every intermediate result is added to the chat, consuming tokens. |
| Programmatic tool‑calling | Model writes a short code snippet (in a code‑execution container) that calls tools as functions, loops, branches, aggregates, and returns a final summary. | Only the final output enters Claude’s context → far fewer tokens. |
When this pattern shines
- Large datasets where only aggregates or summaries are needed.
- Multi‑step workflows with ≥ 3 dependent tool calls.
- Filtering, sorting, or transforming tool results before Claude sees them.
- Parallel operations across many items (e.g., checking 50 things).
- Tasks where intermediate data should not influence reasoning.
Note: Adding code execution adds a small overhead, so for a single quick lookup the traditional approach may still be faster.
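A sketch of the programmatic pattern for the "checking 50 things" case; `check_service` is a hypothetical tool stub standing in for a real MCP call:

```python
# Programmatic tool-calling: instead of the model reading 50 intermediate
# results, it writes a loop that calls the tool as a function and returns
# only the aggregate.

def check_service(name: str) -> dict:
    """Stand-in for a real MCP tool call (e.g., a health-check endpoint)."""
    index = int(name.split("-")[1])
    return {"name": name, "healthy": index % 5 != 0}  # deterministic toy data

services = [f"service-{i}" for i in range(50)]

# Code the model would write inside the execution container:
unhealthy = [s for s in services if not check_service(s)["healthy"]]
summary = f"{len(unhealthy)} of {len(services)} services unhealthy: {unhealthy}"
# Only `summary` is returned to the model's context; the 50 individual
# results never consume context tokens.
```

With traditional tool calling, each of the 50 results would be appended to the conversation before the model could aggregate them.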
Rube MCP – Your All‑In‑One Wrapper
Rube provides a meta‑tool layer over 877 SaaS toolkits, handling authentication, connection management, and bulk execution.
Core Discovery & Connection Tools
| Tool | Purpose |
|---|---|
| RUBE_SEARCH_TOOLS | Finds relevant tools and generates execution plans. Always call first. Returns tools, schemas, connection status, and recommended steps. |
| RUBE_GET_TOOL_SCHEMAS | Retrieves full input‑parameter schemas when SEARCH_TOOLS returns a schemaRef. |
| RUBE_MANAGE_CONNECTIONS | Creates or updates connections to user apps (OAuth, API keys, etc.). Never execute a tool without an active connection. |
Execution & Processing Tools
| Tool | Purpose |
|---|---|
| RUBE_MULTI_EXECUTE_TOOL | Fast parallel executor for up to 50 tools across apps. Includes in‑memory storage for persistent facts across executions. |
| RUBE_REMOTE_WORKBENCH | Executes Python code in a remote Jupyter sandbox (4‑minute timeout). Ideal for processing large files, bulk operations, or complex tool chains. |
| RUBE_REMOTE_BASH_TOOL | Executes Bash commands in a remote sandbox. Great for file ops and JSON processing with jq, awk, sed, etc. |
Recipe (Reusable Workflow) Tools
| Tool | Purpose |
|---|---|
| RUBE_CREATE_UPDATE_RECIPE | Converts completed workflows into reusable notebooks/recipes with defined inputs, outputs, and executable code. |
| RUBE_EXECUTE_RECIPE | Runs an existing recipe with supplied parameters. |
| RUBE_FIND_RECIPE | Searches recipes via natural language (e.g., “GitHub PRs to Slack”). Returns matching IDs for execution. |
| RUBE_GET_RECIPE_DETAILS | Retrieves full recipe details (code, schema, defaults). |
| RUBE_MANAGE_RECIPE_SCHEDULE | Creates, updates, pauses, or deletes recurring schedules using cron expressions. |
Typical Workflow
- Discover tools – RUBE_SEARCH_TOOLS → identify needed capabilities.
- Ensure connections – RUBE_MANAGE_CONNECTIONS.
- Execute – RUBE_MULTI_EXECUTE_TOOL (or RUBE_REMOTE_WORKBENCH for heavy data).
- (Optional) Save – RUBE_CREATE_UPDATE_RECIPE for reuse.
The process mirrors any Remote MCP server integration.
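Sketched as a stubbed client for clarity: the tool names mirror the tables above, but `StubRubeClient` and all of its canned responses are invented for illustration, not Rube's real wire format:

```python
# Hypothetical walk-through of the four-step Rube workflow.

class StubRubeClient:
    """Toy stand-in for an MCP client; returns canned responses."""

    def call(self, tool: str, **kwargs) -> dict:
        # In reality this would send an MCP tool-call over the wire.
        responses = {
            "RUBE_SEARCH_TOOLS": {"tools": ["GITHUB_LIST_PRS"], "plan": ["list PRs"]},
            "RUBE_MANAGE_CONNECTIONS": {"github": "active"},
            "RUBE_MULTI_EXECUTE_TOOL": {"results": [{"pr": 42, "title": "Fix bug"}]},
        }
        return responses[tool]

client = StubRubeClient()
found = client.call("RUBE_SEARCH_TOOLS", query="list open GitHub PRs")  # 1. discover
conns = client.call("RUBE_MANAGE_CONNECTIONS", apps=["github"])         # 2. connect
output = client.call("RUBE_MULTI_EXECUTE_TOOL", tools=found["tools"])   # 3. execute
```

Step 4 (saving the workflow as a recipe) would follow the same call shape with RUBE_CREATE_UPDATE_RECIPE.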
Getting Started with Rube
- Visit the Rube site and click “Use Rube.”
- Copy the MCP URL (e.g., https://rube.app/mcp).
- Open your Claude app → go to Connectors → paste the MCP URL.
You’re all set! Ask Claude anything; you’ll be prompted to authenticate the required apps, then let Claude handle the rest.
Demo
- YouTube walkthrough: (link omitted in source)
TL;DR
- Return only the top 3‑5 matches and load their schemas.
- Store large payloads externally and return a tiny handle.
- Prefer programmatic tool‑calling (write short code that loops/filters) to avoid token bloat.
- Rube MCP gives you discovery, connection, parallel execution, and recipe management for 877 SaaS tools—all with minimal friction.
Happy building! 🚀