Skip to content Skip to footer

Model Context Protocol (MCP)

The Open Standard Powering AI Agents and the Tools They Use

Until recently, AI assistants could only talk. They could explain how to send a Slack message, write the code to query a database, or describe how to mint an NFT. But they could not actually do any of it. MCP changes that. It is the open standard that lets an AI reach out and use real tools: read files, query databases, send messages, swap tokens, deploy contracts. This lesson starts from zero and explains what MCP is, how it works, why the entire AI industry adopted it in under a year, and why it matters deeply for Web3.

The Problem MCP Solves: The M Times N Explosion

Before MCP, every AI application had to write a custom integration for every tool it wanted to use. OpenAI had its own function-calling format. Anthropic had its own tool-use format. Both were proprietary and mutually incompatible. If you had M AI applications (Claude, ChatGPT, Cursor, Gemini, your custom agent) and N tools (Slack, GitHub, Postgres, Google Drive, Notion), you needed M times N custom connectors. Ten apps and a hundred tools meant a thousand bespoke integrations, each one built separately and rotting independently.

MCP collapses this to M plus N: every AI application implements the MCP client once, every tool ships an MCP server once, and everything composes automatically. The pattern is borrowed directly from the Language Server Protocol (LSP), which Microsoft introduced in 2016 to solve the exact same explosion for code editors and programming languages. Before LSP, every IDE and every language combination needed its own bespoke code. Within five years, the IDE wars were effectively over. MCP is making the same bet for AI tools.

Quick comparison: ERC-20 was not a brilliant feature. It was a brilliant standard. Any token that implemented ERC-20 was automatically compatible with every wallet, every DEX, and every DeFi protocol. MCP works the same way. Any AI that implements MCP can use any MCP server. Standards create network effects that individual products never can.

How MCP Actually Works

MCP is a client-server protocol built on top of JSON-RPC 2.0, which is a simple standard where requests and responses are plain JSON objects with a method name and parameters. Three roles do all the work.

Hosts

Hosts are the AI applications you actually open: Claude Desktop, ChatGPT, Cursor, VS Code with Copilot. The host contains the LLM, manages the user interface, decides which servers are allowed to connect, and shows you approval prompts before any tool takes action.

Clients

Clients are connection managers that live inside the host, one per server. They route messages, manage subscriptions, and keep each server’s data isolated from the others. They are the plumbing inside the host that you never see directly.

Servers

Servers are external programs that expose capabilities. Each server typically wraps one product or domain: a GitHub server, a Postgres server, a Stripe server. Servers can run locally (launched by the host as a subprocess, communicating over standard input and output) or remotely over HTTPS, which is the modern approach that most enterprise servers now use.

RoleWhat It IsExamplesWho Controls It
HostThe AI app the user opensClaude Desktop, Cursor, ChatGPTAI lab or IDE vendor
ClientConnection manager inside the hostBuilt into the host; invisible to usersHost developer
ServerExternal program exposing toolsGitHub MCP, Stripe MCP, Base MCPTool vendor or developer

Table 1: The three roles in every MCP interaction

The Three Things a Server Can Offer

Tools

Tools are functions the AI can call that do thingssendSlackMessagecreateGithubIssueexecuteSwapmintNFT. Each tool has a name, a plain-English description the model reads, and a schema that defines what inputs it accepts. Tool calls have side effects in the real world, so well-behaved hosts show you an approval prompt before running them.

Resources

Resources are read-only data the AI can fetch: a Google Drive document, a database row, a log file, a blockchain balance. Resources return information without changing anything, and a client can subscribe to them for live updates when the underlying data changes.

Prompts

Prompts are reusable templates users can trigger from the host interface, often as slash commands like /summarize-pr or /code-review. They produce a pre-filled conversation that gets sent to the LLM, saving users from typing the same instructions repeatedly.

Three newer capabilities flow in the opposite direction, from server back to client. Sampling lets a server ask the host’s own LLM to perform an inference, so the server does not need its own API key. Roots let the client tell a server which folders or workspaces are in scope. Elicitation, added in mid-2025, lets a server pause mid-task and ask the user a structured question through the host UI: for example, a travel booking server realizing it needs your meal preferences before completing a reservation.

What MCP Servers Exist Today

The ecosystem grew from a handful of reference servers at launch in November 2024 to roughly 8,000 to 10,000 public servers and around 97 million monthly SDK downloads by early 2026. The quality concentrates in official, vendor-maintained servers.

GitHub’s MCP server exposes over 100 tools across 19 categories: read code, open pull requests, run Actions workflows, fetch failed job logs, query Dependabot alerts. Stripe’s server lets agents create invoices, issue refunds, and manage subscriptions. Cloudflare exposes its entire API through just two tools using a compression technique that cuts the AI’s token usage by up to 99.9%. Official servers also exist for Linear, Notion, Atlassian, Asana, PayPal, Intercom, Sentry, HubSpot, Figma, Salesforce, and Supabase, among many others.

For databases there are servers covering Postgres, SQLite, Redis, and hosted options like Neon. For browsing there are servers built on Playwright and Puppeteer that give the AI a real browser. For knowledge retrieval, Context7 fetches live, version-specific documentation from official library sources so the AI always has current API references.

What composition looks like in practice: An agent in Cursor can read a Notion product spec, scaffold a working app, deploy it to Cloudflare Workers, configure DNS, open a pull request on GitHub, file the work item in Linear, and ping the team in Slack, all from one prompt, with the user approving sensitive steps along the way. None of this was possible eighteen months ago.

Eight Ways to Picture MCP

Because MCP is infrastructure, analogies help more than definitions. Each of the following captures a different true thing about it.

USB-C for AI

Anthropic’s own framing. Before USB-C, every device had its own cable. MCP is the universal port between AI applications and the tools they use.

HTTP for the Agentic Web

Microsoft CTO Kevin Scott’s framing. Just as HTTP made any browser talk to any web server in the 1990s, MCP makes any AI client talk to any tool server.

Language Server Protocol for AI

The precise technical inspiration Anthropic cites. Same M+N collapse, same architecture, same network effects that ended the IDE wars.

Universal Remote Control

Captures the action aspect. MCP does not just let the AI read things. It lets the AI press buttons on Slack, GitHub, Figma, and your DeFi protocol.

UN Interpreter Booth

Each MCP server is a translator: speaks MCP on one side, speaks the proprietary Slack or Etherscan API on the other.

Standardized Power Outlet

Any appliance plugs into any wall. But you still need circuit breakers (permissions) and grounding (authentication). The analogy carries the safety story too.

App Store for Agents

The official MCP Registry that launched in September 2025 is essentially a package manager for tools. Agents browse, install, and compose capabilities like apps.

ERC-20 for AI Tools

For a Web3 audience, this lands hardest. Just as ERC-20 made every token automatically interoperable with every wallet and DEX, MCP makes every tool interoperable with every agent.

From One Company to the Whole Industry in Fifteen Months

Anthropic released MCP on November 25, 2024 as an open-source specification. The adoption curve that followed was unusually fast for an infrastructure standard.

OpenAI adopted it on March 26, 2025. Google DeepMind followed on April 9, 2025, with Demis Hassabis describing it as “rapidly becoming an open standard for the AI agentic era.” Microsoft made it a first-class citizen in Copilot Studio, Windows 11, and GitHub Copilot at Build 2025. Salesforce anchored Agentforce 3 around it in June 2025.

On December 9, 2025, Anthropic donated MCP to the new Agentic AI Foundation under the Linux Foundation, co-founded with OpenAI and Block. AWS, Google, Microsoft, Cloudflare, GitHub, and Bloomberg joined as platinum members. The protocol is no longer a single company’s project; it is shared industry infrastructure.

DateMilestone
November 25, 2024Anthropic releases MCP as open-source specification
March 26, 2025OpenAI announces MCP adoption across its products
April 9, 2025Google DeepMind announces MCP support for Gemini and its SDK
May 2025Microsoft makes MCP first-class in Copilot Studio and GitHub Copilot at Build 2025
September 2025Official MCP Registry launches; Oasis ships first confidential MCP server
December 9, 2025Anthropic donates MCP to the Agentic AI Foundation under the Linux Foundation
Early 2026Roughly 8,000 to 10,000 public servers; ~97 million monthly SDK downloads

Table 2: Key milestones in MCP adoption

The Security Problem Nobody Talks About Enough

MCP’s openness creates a meaningful attack surface that the security community began mapping in earnest through 2025.

The core vulnerability is called tool poisoning. Each tool has a description that the LLM reads but that the user typically never sees in the approval interface. A malicious server can hide instructions inside that description, telling the model to perform actions the user never intended. A proof-of-concept published in April 2025 disguised a data-exfiltration tool as a simple add(a, b) function whose hidden description told the model to first read the user’s SSH keys and pass them along as a “sidenote” parameter. The user sees an arithmetic operation. The attacker receives the keys.

A scan of roughly 1,900 public servers found that about 5.5% contained tool-poisoning patterns. A separate benchmark across 45 real servers found attack success rates above 80% when hosts had auto-approval enabled.

The rug-pull problem: A tool you approved today can silently change its behavior tomorrow. The MCP specification allows servers to notify the host that their tool list has changed, and there is no built-in mechanism requiring you to re-approve updated tool definitions. An attacker who controls a server you already trust can introduce new behavior after gaining your initial trust.

Real-world exploits followed quickly. In September 2025, a rogue package called postmark-mcp was discovered to be silently blind-copying every outgoing email to the attacker’s address. Roughly 300 organizations were affected before the package was removed. A critical remote-code-execution vulnerability in mcp-remote, the library many desktop clients use for OAuth connections, affected over 437,000 downloads before it was patched.

The defensive response has matured substantially. The spec now requires OAuth 2.1 with mandatory PKCE for remote authentication, added resource indicators to prevent confused-deputy attacks where one server tricks the AI into misusing credentials from another, and added an OpenID Connect discovery mechanism for proper server identity. Community tools like mcp-scan check tool descriptions for poisoning patterns. The official MCP specification states that there should always be a human in the loop with the ability to deny tool invocations. Treating that recommendation as a strict requirement is currently the most important single step any user can take.

MCP and Web3: The Native Fit

The crypto-native MCP ecosystem has grown quickly. Coinbase’s Base MCP lets agents check wallet balances, transfer funds, deploy contracts, mint NFTs, manage yield vaults, and post to Farcaster. Solana’s official MCP server covers RPC reads, SPL transfers, Jupiter swaps, and Anchor framework tooling. Chainstack exposes a multi-chain RPC server covering Ethereum, Solana, Polygon, Base, Arbitrum, Bitcoin, and Oasis Sapphire. Chainlink exposes its decentralized price feeds. Heurist Mesh packages over 30 specialized Web3 tools (CoinGecko, DexScreener, Etherscan, GoPlus token security, Zerion wallet analysis) behind a single MCP endpoint.

A particularly powerful pattern is the ABI-to-MCP approach: tools now exist that turn any verified smart contract ABI into an MCP server in a single command. Every smart contract on every EVM chain is potentially one step away from being a tool any AI agent can use directly.

The institutional framing is sharpening too. a16z’s 2025 research explicitly predicted “Decentralized Autonomous Chatbots” running in TEEs, and their 2026 trends introduced “Know Your Agent” as the next identity primitive. In financial services, non-human AI agents already outnumber human employees by 96 to 1. ERC-8004, the Ethereum Improvement Proposal for trustless agents, now includes an mcp field in its registration schema pointing to the agent’s MCP endpoint, weaving the two standards together at the specification level. ElizaOS, Virtuals Protocol, Olas, and Fetch.ai are all building MCP interfaces into their agent frameworks.

Why Oasis Matters: Confidential MCP

Here is the privacy problem that most MCP discussions skip. Every standard MCP tool call is a potential leak. The default configuration for Coinbase’s Base MCP asks you to paste your seed phrase into a plain-text config file. Every API key, every wallet credential, every prompt your agent sends, every result it receives, and the trading strategy it is executing are all visible to whoever operates the MCP server. For an agent managing a real wallet, this is a serious problem. For an agent operating in a DAO treasury, it is disqualifying. Anyone who can see pending swap intents before they land on-chain can front-run them trivially.

Oasis Network shipped the first concrete answer to this in September 2025, in partnership with Heurist. The solution uses two components of the Oasis stack working together.

ROFL (Runtime Off-chain Logic) runs containerized apps inside hardware-isolated Trusted Execution Environments (TEEs) using Intel SGX and Intel TDX. Think of a TEE as a tamper-proof black box inside the server’s CPU: data enters encrypted, computation happens inside where even the machine’s owner cannot read it, and results leave encrypted. ROFL went mainnet in July 2025.

Sapphire, the first production-ready confidential EVM, stores attestation proofs on-chain so that any external party can cryptographically verify that a given MCP server is genuinely running inside a TEE and has not been tampered with.

A confidential MCP server is a Docker container deployed on a permissionless TEE pool with a single deploy command. The container generates a hardware attestation, posts it to Sapphire, and any AI client connects to its standard HTTPS endpoint. Inputs, execution, and outputs are sealed inside the enclave. The node operator cannot read them. Crucially, the server generates its own signing keys inside the enclave, so not even the developer who wrote the server ever has access to them.

This is what makes Oasis-based agents genuinely autonomous rather than nominally so. The WT3 trading agent runs on Sapphire and Hyperliquid with keys that no human controls. The Talos treasury management agent manages the Arbitrum Foundation’s treasury with verifiable confidentiality. Both are live, not prototypes.

Oasis also co-authored ERC-8004, the trustless agent standard, and the official reference implementation wires ROFL containers to on-chain agent identities with MCP support, NFT-based ownership, and micropayment integration built in. The simplest framing: MCP is the USB-C of AI tools; Oasis ROFL is the secure hub that cannot be wiretapped.

Privacy Features Across the MCP Stack

FeatureStandard MCP ServerConfidential MCP (Oasis ROFL)
Tool inputs visible to server operatorYesNo (sealed in TEE)
Prompts and strategy visibleYesNo
API keys / credentials at riskYes (config file)No (generated inside enclave)
Verifiable execution integrityNoYes (on-chain attestation via Sapphire)
Front-running resistanceNoYes
Compatible with standard MCP clientsYesYes (same HTTPS interface)
Agent key controlHuman-held keysEnclave-generated, no human access

Table 3: Standard vs. confidential MCP across privacy-relevant dimensions

Where MCP Is Heading

The official 2026 roadmap prioritizes several areas. On the technical side: better scalability for Streamable HTTP with stateless sessions and horizontal scaling, a well-known discovery mechanism so AI apps can find MCP servers automatically, and stronger multi-agent coordination patterns so fleets of specialized agents can compose with each other cleanly.

On the governance side: the Agentic AI Foundation under the Linux Foundation is taking over stewardship, with the goal of making the spec a genuinely neutral public good, not a single company’s roadmap.

Several adjacent protocols are settling into a complementary stack rather than competing. A2A (Agent2Agent), donated to the Linux Foundation by Google, handles communication between agents. The official framing is that MCP connects agents to tools while A2A connects agents to other agents. AGENTS.md handles repository-level instructions for coding agents. x402 handles machine-native micropayments so agents can pay for services autonomously. Together these form a layered infrastructure: identity, tools, coordination, and payments, all composable.

Conclusion: The Standard That Quietly Changed Everything

The story of MCP is the story of the right standard arriving at exactly the right moment. AI models became capable enough to act as agents. The integration problem was about to multiply combinatorially across hundreds of AI apps and thousands of tools. Anthropic published a clean open specification, kept it open, donated it to neutral governance, and within fifteen months every competitor had adopted it. The pattern is familiar to anyone who has watched standards beat features in tech: HTTP over proprietary protocols, USB over custom cables, ERC-20 over custom token logic.

For Web3 specifically, the convergence is sharper than it appears. AI agents that hold private keys, manage DAO treasuries, and act as autonomous market participants need three things the crypto ecosystem has spent a decade building: verifiable identity, machine-native payments, and confidential execution. MCP is the fourth piece: the standardized way agents reach out and do real work in the world. The most interesting builders are no longer choosing between AI and Web3. They are stacking the layers.

The frontier question is no longer whether MCP wins. That question is settled. The question is how privately and how verifiably it operates. The default architecture leaks. The confidential architecture demonstrated on Oasis in September 2025 does not. Which of these becomes the standard for agents handling real value is the chapter being written right now.


Transparency Note: The video introduction to this lesson was generated using NotebookLM. We’ve included this AI-synthesized summary to offer a visual and conversational way to grasp the core concepts. However, for the specific technical details please rely on the written lesson above.