MCP joins the Linux Foundation: What this means for developers building the next era of AI tools and agents
Source: GitHub Blog
Over the past year, AI development has exploded. More than 1.1 million public GitHub repositories now import an LLM SDK (+178% YoY), and developers created nearly 700,000 new AI repositories, according to this year’s Octoverse report. Agentic tools like vllm, ollama, continue, aider, ragflow, and cline are quickly becoming part of the modern developer stack.
As this ecosystem expands, we’ve seen a growing need to connect models to external tools and systems—securely, consistently, and across platforms. That’s the gap the Model Context Protocol (MCP) has rapidly filled.
Born inside Anthropic, MCP grew quickly because it was open source from the very beginning and designed for the community to extend, adopt, and shape together. That openness is a core reason it became one of the fastest‑growing standards in the industry, and it allowed companies like GitHub and Microsoft to join in and help build out the standard.
Now, Anthropic is donating MCP to the Agentic AI Foundation, which will be managed by the Linux Foundation, and the protocol is entering a new phase of shared stewardship. This will provide developers with a foundation for long‑term tooling, production agents, and enterprise systems. This is exciting for those of us who have been involved in the MCP community, and given our long‑term support of the Linux Foundation, we are hugely supportive of this move.
The past year has seen incredible growth and change for MCP. I thought it would be great to review how MCP got here, and what its transition to the Linux Foundation means for the next wave of AI development.
Before MCP: Fragmented APIs and brittle integrations
LLMs started as isolated systems: you sent them prompts and got responses back. Patterns like retrieval‑augmented generation (RAG) helped bring in data to give the LLM more context, but that was limited. OpenAI’s introduction of function calling was a step change: for the first time, a model could ask your application to call an external function on its behalf. Function calling is what we initially built on top of in GitHub Copilot.
By early 2023, developers were connecting LLMs to external systems through a patchwork of incompatible APIs: bespoke extensions, IDE plugins, and platform‑specific agent frameworks, among other things. Every provider had its own integration story, and none of them worked in exactly the same way.
“All the platforms had their own attempts like function calling, plugin APIs, extensions, but they just didn’t get much traction.” – Nick Cooper, OpenAI engineer and MCP steering committee member
This wasn’t a tooling problem. It was an architecture problem.
Connecting a model to the real‑time web, a database, a ticketing system, a search index, or a CI pipeline required bespoke code that often broke with the next model update. Developers had to write deep integration glue one platform at a time.
“The industry was running headfirst into an n×m integration problem with too many clients, too many systems, and no shared protocol to connect them.” – David Soria Parra, senior engineer at Anthropic and original MCP architect
In practical terms, the n×m integration problem describes a world where every model client (n) must integrate separately with every tool, service, or system developers rely on (m). Five AI clients talking to ten internal systems would result in fifty bespoke integrations—each with different semantics, authentication flows, and failure modes. MCP collapses this by defining a single, vendor‑neutral protocol that both clients and tools can speak.
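To make that collapse concrete, here is a minimal sketch of the server half of the equation, written against the official TypeScript SDK (`@modelcontextprotocol/sdk`). The `lookup_ticket` tool and its payload are invented for illustration, and exact SDK entry points can vary by version, but the shape is the point: the integration is written once, against the protocol, and any MCP‑capable client can use it.

```typescript
// A minimal MCP server sketch using the official TypeScript SDK
// (@modelcontextprotocol/sdk). The "lookup_ticket" tool and its data
// are invented for illustration; exact SDK entry points can vary by
// SDK version.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "ticketing", version: "1.0.0" });

// Expose one internal system as a tool. Any client that speaks MCP can
// call it -- no per-client glue code required.
server.tool(
  "lookup_ticket",
  { id: z.string().describe("Ticket identifier, e.g. OPS-1234") },
  async ({ id }) => ({
    content: [{ type: "text", text: `Ticket ${id}: status=open, severity=2` }],
  })
);

// Serve over stdio; a remote deployment would swap in an HTTP transport.
await server.connect(new StdioServerTransport());
```

The client side is symmetric: implement MCP once and you can talk to every such server, so n×m bespoke integrations become n clients plus m servers.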
The absence of a standard wasn’t just inefficient; it slowed real‑world adoption. In regulated industries like finance, healthcare, and security, developers needed secure, auditable, cross‑platform ways to let models communicate with systems. What they got instead were proprietary plugin ecosystems with unclear trust boundaries.
MCP: A protocol built for how developers work
Across the industry—including at Anthropic, GitHub, Microsoft, and others—engineers kept running into the same wall: reliably connecting models to context and tools. Inside Anthropic, teams noticed that their internal prototypes kept converging on similar patterns for requesting data, invoking tools, and handling long‑running tasks.
Soria Parra described MCP’s origin simply: it was a way to standardize patterns Anthropic engineers were reinventing. MCP distilled those patterns into a protocol designed around communication—how models and systems talk to each other, request context, and execute tools.
Anthropic’s Jerome Swanwick recalled an early internal hackathon where “every entry was built on MCP” and the protocol “went viral internally.”
That early developer traction became the seed. Once Anthropic released MCP publicly alongside high‑quality reference servers, the broader community understood its value immediately. MCP offered a shared way for models to communicate with external systems, regardless of client, runtime, or vendor.
Why MCP clicked: Built for real developer workflows
When MCP launched, adoption was immediate, unlike anything the industry had seen for a new standard.
“It just clicked. I got the problem they were trying to solve; I got why this needs to exist.” – Den Delimarsky, principal engineer at Microsoft and core MCP steering committee member focused on security and OAuth
Within weeks, contributors from Anthropic, Microsoft, GitHub, OpenAI, and independent developers began expanding and hardening the protocol. Over the next nine months, the community added:
- OAuth flows for secure, remote servers
- Sampling semantics, which let servers request model completions through the client in a consistent way (see the sketch after this list)
- Refined tool schemas
- Consistent server discovery patterns
- Expanded reference implementations
- Improved long‑running task support
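The sampling addition is worth pausing on, because it reverses the usual direction of traffic: the server asks the client to run a model completion, so servers never need their own model credentials. Here is a rough sketch of the JSON‑RPC messages involved, written as TypeScript literals. The field names follow the MCP spec; the prompt content is invented.

```typescript
// MCP sampling, sketched as the JSON-RPC messages exchanged.
// The server asks the client to run a completion on its behalf.
const samplingRequest = {
  jsonrpc: "2.0",
  id: 7,
  method: "sampling/createMessage",
  params: {
    messages: [
      { role: "user", content: { type: "text", text: "Summarize this diff." } },
    ],
    maxTokens: 200,
  },
};

// The client picks the model, runs the completion (typically after
// user approval), and replies with the result.
const samplingResult = {
  jsonrpc: "2.0",
  id: 7,
  result: {
    role: "assistant",
    content: { type: "text", text: "The diff renames two functions." },
    model: "example-model", // whichever model the client selected
  },
};
```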
Long‑running task APIs are a critical feature. They allow builds, indexing operations, deployments, and other multi‑minute jobs to be tracked predictably, without polling hacks or custom callback channels. This was essential for the long‑running AI agent workflows that we now see today.
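As a sketch of how that tracking works on the wire, here is what a long‑running tool call can look like, again as TypeScript literals of the JSON‑RPC messages. The progress‑token mechanism follows the MCP spec’s progress utilities; the `deploy_service` tool is hypothetical.

```typescript
// 1. The client invokes a tool and attaches a progress token in _meta,
//    asking the server to stream progress for this request.
const callRequest = {
  jsonrpc: "2.0",
  id: 42,
  method: "tools/call",
  params: {
    name: "deploy_service", // hypothetical long-running tool
    arguments: { env: "staging" },
    _meta: { progressToken: "deploy-1" },
  },
};

// 2. The server emits progress notifications as the job advances,
//    so the client never has to poll or open a side channel.
const progressUpdate = {
  jsonrpc: "2.0",
  method: "notifications/progress",
  params: { progressToken: "deploy-1", progress: 3, total: 10 },
};

// 3. When the job finishes, the server answers the original request.
const finalResult = {
  jsonrpc: "2.0",
  id: 42,
  result: {
    content: [{ type: "text", text: "Deployed to staging." }],
  },
};
```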
Delimarsky’s OAuth work also became an inflection point. Prior to it, most MCP servers ran locally, which limited usage in enterprise environments and caused installation friction. OAuth enabled remote MCP servers, unlocking secure, compliant integrations at scale. This shift made MCP viable for multi‑machine orchestration, shared enterprise services, and non‑local infrastructure.
Just as importantly, OAuth gives MCP a familiar and proven security model with no proprietary tokens to manage.
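To illustrate what that unlocked, here is a hedged sketch of a client connecting to a remote MCP server over the TypeScript SDK’s streamable HTTP transport with an OAuth bearer token. The URL and token plumbing are invented, and a production setup would typically use the SDK’s OAuth provider hooks for the full authorization flow rather than a hard‑coded token.

```typescript
// A sketch of a client talking to a remote MCP server over HTTP.
// Assumes the TypeScript SDK's streamable HTTP client transport;
// the server URL and token handling here are illustrative only.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(
  new URL("https://mcp.example.com/mcp"), // hypothetical remote server
  {
    // Attach an OAuth access token obtained out of band.
    requestInit: {
      headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` },
    },
  }
);

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// From here the client lists and calls tools exactly as it would
// against a local server -- the transport and auth are the only changes.
console.log(await client.listTools());
```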