How to better your Claude CoWork experience with MCPs

Published: January 19, 2026 at 08:08 AM EST
7 min read
Source: Dev.to

Claude CoWork Overview

When everyone was busy talking about how good Claude Code is, Anthropic launched Claude CoWork – essentially Claude Code with a much less intimidating interface for automating “fake‑email” jobs.

  • Capabilities
    • Access to your local file system, connectors, MCPs, and virtually anything that can be executed through the shell.
  • Availability
    • Research preview in the Claude Desktop app (separate tab) for Max subscribers ($100 / $200 per‑month plans) on macOS.
    • Windows support is planned for the future.

How It Works

  1. Claude CoWork is given access to a folder on your computer.
  2. Inside a local containerised environment, it mounts the folder, allowing it to read, edit, or create files only in locations you have granted permission to.

“You can trust that it won’t access folders you haven’t explicitly allowed.”

There’s a lot more to say about CoWork, but that will be saved for a separate blog post. Below we focus on using connectors and MCPs to do more than just organise files.

Quick Shortcut: Rube.app

If you don’t want to spend time configuring everything, just use rube.app inside Claude Code.

  • Instant access to 900+ SaaS apps (Gmail, GitHub, Bitbucket, etc.)
  • Zero OAuth and key‑management hassle
  • Dynamic tool loading → reduced token usage & better execution
  • Create reusable workflows and expose them as tools

Try Rube now for FREE!

Working with MCP Connectors

What Are Claude AI Connectors?

Claude AI Connectors are direct integrations that let Claude access your actual work tools and data. Launched in July 2025, they turn Claude from a generally “knowledge‑rich” AI into one that knows about your world specifically.

  • Pre‑built integrations: Gmail, Google Drive, GitHub, Google Calendar.
  • Additional MCP servers (local & remote): HubSpot, Snowflake, Figma, Context7, etc.

Enabling a Connector

  1. Open Settings → Connectors.
  2. Find the integration you want to enable.
  3. Click Connect.
  4. Follow the authentication flow.

Note: Pro, Max, Team, and Enterprise users can add these connectors to Claude or Claude Desktop.

MCP Marketplace

Anthropic hosts an MCP marketplace where you can discover Anthropic‑reviewed tools (both local and remote‑hosted).

Navigation by type:

  • Desktop / Local MCPs: Desktop → Search Your MCP → Click Install
  • Remote MCPs: Browse Connectors → Web tab → Search your MCPs
  • Custom MCP Server: Add a Custom Connector → Provide MCP name & Server URL → (Optional) OAuth credentials

Custom MCP Servers – The Interesting Part

You can use any MCP server you prefer:

  1. Click Add a Custom Connector.
  2. Provide the MCP name and Server URL.
  3. (Optional) Add OAuth credentials.
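
To make this concrete, here is a minimal sketch of a server you could point a custom connector at, using the FastMCP helper from the official MCP Python SDK. The server name, tool, and note data are made up for illustration, and the streamable‑http transport flag may differ between SDK versions.

```python
# pip install "mcp[cli]"   # official Model Context Protocol Python SDK
from mcp.server.fastmcp import FastMCP

# Hypothetical server name for illustration.
mcp = FastMCP("team-notes")

@mcp.tool()
def search_notes(query: str) -> str:
    """Search a (hypothetical) local notes store and return matching titles."""
    notes = ["2026-01 roadmap", "standup notes", "incident postmortem"]
    hits = [n for n in notes if query.lower() in n.lower()]
    return "\n".join(hits) or "no matches"

if __name__ == "__main__":
    # Serve over HTTP so it can be added as a remote custom connector;
    # the default "stdio" transport is for local/desktop use.
    mcp.run(transport="streamable-http")
```

Once the server is reachable over HTTPS, its URL is what you paste into the Server URL field above.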

Why MCPs Matter

MCP servers are a force multiplier, making it easy for LLMs to access data. However, they come with practical limitations:

  • Each MCP tool comes with a schema definition (name, parameters, examples).
  • More detailed schemas → more reliable execution, but they also consume tokens (the model’s context window is limited, e.g., ~200 k tokens).
  • Over‑loading the context with many tool definitions reduces space for actual reasoning.

Example:
  • GitHub MCP: 40 tools → 8.5 % of a 200 k token window (≈ 17.1 k tokens).
  • Linear MCP: 27 tools → similar token cost.
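
For a quick sense of scale, here is a back‑of‑envelope calculation using the GitHub MCP figures above; the assumption that other servers have a similar per‑tool token density is mine, not a measurement.

```python
# Back-of-envelope: how much of the context window do tool schemas consume?
CONTEXT_WINDOW = 200_000                    # tokens
github_tools, github_tokens = 40, 17_100    # figures quoted above

per_tool = github_tokens / github_tools     # ≈ 428 tokens per tool definition
print(f"≈ {per_tool:.0f} tokens per tool definition")

# Assuming a similar density, adding more servers scales linearly:
for total_tools in (40, 100, 200):
    cost = total_tools * per_tool
    print(f"{total_tools} tools → {cost:,.0f} tokens ({cost / CONTEXT_WINDOW:.1%} of the window)")
```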

Most MCP clients eagerly load all available tools into the model context, even if the model never calls many of them. This leads to:

  • Higher token usage per request.
  • Artificial splitting of MCP servers to stay within context limits.
  • Reluctance to experiment with new tools (each addition degrades existing interactions).

The Real Failure Mode: Results, Not Schemas

Even with perfect schemas, the results returned (logs, DB rows, file lists, JSON blobs, etc.) can flood the model’s context. A single careless response can erase half the conversation history, jeopardising the LLM’s performance.

As the number of MCP tools grows, tool‑selection accuracy drops:

  • It may call a near‑match instead of the correct tool.
  • It may over‑use generic tools.
  • It may avoid tools altogether and hallucinate answers.

The root cause is the finite attention budget: the model cannot fully read an ever‑growing list of tool definitions.

Architectural Improvements

1. Load‑On‑Demand Tool Definitions

Instead of loading every tool schema up‑front, load only the tools needed for the current task. This turns the “always‑on” token cost into a “pay‑only‑when‑used” cost, freeing up space for reasoning and improving reliability.

Implementation: Rube – a universal MCP server that dynamically loads tools based on task context.

  • Planner tool – creates a detailed plan for a task.
  • Search tool – finds and retrieves the required tool definitions.

When the model needs a tool, it requests the specific definition, and only then is that schema injected into the context.
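
A minimal sketch of the pattern (not Rube’s actual implementation): keep the full schemas in a registry outside the context, and expose only two lightweight meta‑tools, one that searches and one that loads a specific schema on request. All names and registry entries here are hypothetical.

```python
from typing import Any

# Hypothetical registry: full schemas live here, not in the model's context.
TOOL_REGISTRY: dict[str, dict[str, Any]] = {
    "github_create_issue": {
        "summary": "Create a GitHub issue",
        "schema": {"type": "object", "properties": {"repo": {"type": "string"},
                                                    "title": {"type": "string"}}},
    },
    "gmail_send_email": {
        "summary": "Send an email via Gmail",
        "schema": {"type": "object", "properties": {"to": {"type": "string"},
                                                    "body": {"type": "string"}}},
    },
}

def search_tools(query: str, limit: int = 5) -> list[dict[str, str]]:
    """Meta-tool #1: return lightweight matches only (name + one-line summary)."""
    words = query.lower().split()
    hits = [{"name": name, "summary": meta["summary"]}
            for name, meta in TOOL_REGISTRY.items()
            if any(w in name or w in meta["summary"].lower() for w in words)]
    return hits[:limit]

def load_tool_schema(name: str) -> dict[str, Any]:
    """Meta-tool #2: inject the full schema into context only when requested."""
    return TOOL_REGISTRY[name]["schema"]

# The model first calls search_tools("send an email"), sees gmail_send_email,
# then calls load_tool_schema("gmail_send_email") before making the real call.
print(search_tools("send an email"))
```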

2. Searchable Tool Catalogue

Don’t rely on the model to scan a long list of tools. Provide a searchable index (e.g., a vector database with hybrid search) that the model can query.

Catalogue entry format

  • Tool name
  • One‑line purpose
  • Key parameters
  • A few example queries

The model searches this catalogue with natural language, retrieves the most relevant tool(s), and then loads the exact schema needed.
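
A toy version of such a catalogue, using plain keyword overlap in place of the vector database with hybrid search you would use in practice; the entries and scoring are illustrative only.

```python
# Each entry follows the format above: name, one-line purpose, key params, example queries.
CATALOGUE = [
    {"name": "linear_create_issue", "purpose": "Create an issue in Linear",
     "params": ["team", "title", "description"], "examples": ["file a bug", "create a ticket"]},
    {"name": "slack_post_message", "purpose": "Post a message to a Slack channel",
     "params": ["channel", "text"], "examples": ["notify the team", "send to #general"]},
    {"name": "drive_search_files", "purpose": "Search files in Google Drive",
     "params": ["query"], "examples": ["find the Q3 deck", "locate a spreadsheet"]},
]

def search_catalogue(query: str, k: int = 3) -> list[dict]:
    """Rank entries by word overlap with the query; a vector DB would do this better."""
    q_words = set(query.lower().split())

    def score(entry: dict) -> int:
        text = " ".join([entry["purpose"], *entry["examples"]]).lower()
        return sum(1 for w in q_words if w in text)

    return sorted(CATALOGUE, key=score, reverse=True)[:k]

# The model queries in natural language and only then loads the winners' full schemas.
for entry in search_catalogue("post an update to the team slack channel"):
    print(entry["name"], "-", entry["purpose"])
```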

Summary

  • Claude CoWork gives you powerful, containerised access to your local system while keeping permissions explicit.
  • Rube.app offers a hassle‑free way to tap into 900+ SaaS apps.
  • MCP connectors bridge Claude with your everyday work tools; the marketplace and custom connectors let you extend functionality.
  • Token efficiency is critical: load tool definitions on demand and use a searchable catalogue to keep the model’s attention focused.

By adopting these patterns, you can scale the number of MCP tools without sacrificing reliability, experimentation, or performance.

Handling Large Results

  • Goal: Return the top 3‑5 catalogue matches, then load only those schemas.
  • Why:
    • Reduces token usage.
    • Prevents the model from forgetting earlier goals when large payloads are pasted back into the prompt.

The same discipline applies to tool results:

  1. Store large outputs outside the prompt (e.g., a local file, object store, database, or temporary cache).
  2. Return a small summary plus a handle (file path, ID, cursor, pointer).

Key Insight: LLMs excel at file operations. Let the model retrieve only the data it needs instead of forcing it to read massive JSON blobs.
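
A minimal sketch of the summary‑plus‑handle pattern, wrapping a hypothetical query_database tool: the full result set is written to a temp file, and only a count, a tiny preview, and the file path are handed back to the model.

```python
import json
import tempfile

def query_database(sql: str) -> list[dict]:
    """Stand-in for a real tool; returns a large result set."""
    return [{"id": i, "status": "open" if i % 3 else "closed"} for i in range(10_000)]

def query_database_summarised(sql: str, preview_rows: int = 3) -> dict:
    """Run the query, persist the full payload, and hand the model a pointer."""
    rows = query_database(sql)
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(rows, f)
        handle = f.name
    return {
        "row_count": len(rows),
        "preview": rows[:preview_rows],  # tiny sample for reasoning
        "handle": handle,                # a later tool call fetches more on demand
    }

print(query_database_summarised("SELECT * FROM tickets"))
```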

Tool‑Calling vs. Programmatic Tool‑Calling

  • Traditional tool calling – The model calls a tool → receives the result → reads it → decides the next call (repeat). Token impact: every intermediate result is added to the chat, consuming tokens.
  • Programmatic tool calling – The model writes a short code snippet (in a code‑execution container) that calls tools as functions, loops, branches, aggregates, and returns a final summary. Token impact: only the final output enters Claude’s context → far fewer tokens.

When this pattern shines

  • Large datasets where only aggregates or summaries are needed.
  • Multi‑step workflows with ≥ 3 dependent tool calls.
  • Filtering, sorting, or transforming tool results before Claude sees them.
  • Parallel operations across many items (e.g., checking 50 things).
  • Tasks where intermediate data should not influence reasoning.

Note: Adding code execution adds a small overhead, so for a single quick lookup the traditional approach may still be faster.
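
Here is a sketch of what the programmatic style looks like, assuming hypothetical tool wrappers (list_repos, count_open_prs) are exposed as plain functions inside the code‑execution container; only the final one‑line summary would re‑enter Claude’s context.

```python
# Hypothetical tool wrappers exposed inside the execution container.
def list_repos(org: str) -> list[str]:
    return [f"{org}/repo-{i}" for i in range(50)]

def count_open_prs(repo: str) -> int:
    return hash(repo) % 7  # stand-in for a real API call

def stale_pr_report(org: str, threshold: int = 5) -> str:
    """Loop over 50 repos, filter, and aggregate without touching the model's context."""
    busy = {repo: n for repo in list_repos(org)
            if (n := count_open_prs(repo)) >= threshold}
    return f"{len(busy)} of 50 repos have >= {threshold} open PRs: {sorted(busy)[:5]} ..."

# Only this one-line summary enters Claude's context.
print(stale_pr_report("acme"))
```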

Rube MCP – Your All‑In‑One Wrapper

Rube provides a meta‑tool layer over 877 SaaS toolkits, handling authentication, connection management, and bulk execution.

Core Discovery & Connection Tools

  • RUBE_SEARCH_TOOLS – Finds relevant tools and generates execution plans. Always call first. Returns tools, schemas, connection status, and recommended steps.
  • RUBE_GET_TOOL_SCHEMAS – Retrieves full input‑parameter schemas when SEARCH_TOOLS returns a schemaRef.
  • RUBE_MANAGE_CONNECTIONS – Creates or updates connections to user apps (OAuth, API keys, etc.). Never execute a tool without an active connection.

Execution & Processing Tools

  • RUBE_MULTI_EXECUTE_TOOL – Fast parallel executor for up to 50 tools across apps. Includes in‑memory storage for persistent facts across executions.
  • RUBE_REMOTE_WORKBENCH – Executes Python code in a remote Jupyter sandbox (4‑minute timeout). Ideal for processing large files, bulk operations, or complex tool chains.
  • RUBE_REMOTE_BASH_TOOL – Executes Bash commands in a remote sandbox. Great for file ops and JSON processing with jq, awk, sed, etc.

Recipe (Reusable Workflow) Tools

  • RUBE_CREATE_UPDATE_RECIPE – Converts completed workflows into reusable notebooks/recipes with defined inputs, outputs, and executable code.
  • RUBE_EXECUTE_RECIPE – Runs an existing recipe with supplied parameters.
  • RUBE_FIND_RECIPE – Searches recipes via natural language (e.g., “GitHub PRs to Slack”). Returns matching IDs for execution.
  • RUBE_GET_RECIPE_DETAILS – Retrieves full recipe details (code, schema, defaults).
  • RUBE_MANAGE_RECIPE_SCHEDULE – Creates, updates, pauses, or deletes recurring schedules using cron expressions.

Typical Workflow

  1. Discover tools – RUBE_SEARCH_TOOLS → identify needed capabilities.
  2. Ensure connections – RUBE_MANAGE_CONNECTIONS.
  3. Execute – RUBE_MULTI_EXECUTE_TOOL (or RUBE_REMOTE_WORKBENCH for heavy data).
  4. (Optional) Save – RUBE_CREATE_UPDATE_RECIPE for reuse.

The process mirrors any Remote MCP server integration.

Getting Started with Rube

  1. Visit the Rube site and click “Use Rube.”
  2. Copy the MCP URL (e.g., https://rube.app/mcp).
  3. Open your Claude app → go to Connectors → paste the MCP URL.

You’re all set! Ask Claude anything; you’ll be prompted to authenticate the required apps, then let Claude handle the rest.
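
If you would rather script against Rube directly than go through the Claude UI, the connection looks roughly like this with the MCP Python SDK’s streamable‑HTTP client. The URL is the one from step 2; the argument shape passed to RUBE_SEARCH_TOOLS is a guess, so inspect the schemas returned by list_tools() (and expect an authentication step) before relying on it.

```python
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

RUBE_URL = "https://rube.app/mcp"

async def main() -> None:
    # Open a streamable-HTTP connection and an MCP session on top of it.
    # Note: Rube may require OAuth, in which case this call will prompt/fail
    # until a valid token is supplied.
    async with streamablehttp_client(RUBE_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # Hypothetical argument shape; check the real schema first.
            result = await session.call_tool(
                "RUBE_SEARCH_TOOLS", {"query": "send a Slack message"}
            )
            print(result)

asyncio.run(main())
```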

Demo

  • YouTube walkthrough: (link omitted in source)

TL;DR

  • Return only the top 3‑5 matches and load their schemas.
  • Store large payloads externally and return a tiny handle.
  • Prefer programmatic tool‑calling (write short code that loops/filters) to avoid token bloat.
  • Rube MCP gives you discovery, connection, parallel execution, and recipe management for 877 SaaS tools—all with minimal friction.

Happy building! 🚀
