StackMCP Blog

What Is MCP? The Model Context Protocol Explained for Developers

MCP (Model Context Protocol) connects AI coding assistants to external tools and data. Learn what it is, how it works, and why every major AI editor now supports it.

mcp · guides · beginners

The short version

MCP stands for Model Context Protocol. It is an open standard that lets AI coding assistants (Claude Code, Cursor, VS Code Copilot, Windsurf, and others) connect to external tools and data sources through a unified interface.

Without MCP, your AI assistant can only see the files open in your editor. With MCP, it can query your database, search the web, create GitHub issues, run browser tests, manage your cloud infrastructure, and much more — all from the same conversation.

Think of MCP as USB for AI tools. Before USB, every peripheral needed its own proprietary connector. MCP does the same for AI: one protocol, hundreds of tools, every editor.

Why MCP exists

Before MCP, connecting AI assistants to external tools required custom integrations. Each editor had its own plugin system. Each tool vendor built separate connectors for each editor. The result was fragmented, brittle, and slow to evolve.

Anthropic released MCP as an open protocol in November 2024. The goal was simple: define a standard way for AI models to discover and use tools, so that any tool works with any AI client.

The bet paid off. Within 14 months:

  • 97 million+ monthly SDK downloads across the ecosystem
  • 10,000+ MCP servers published on GitHub and npm
  • 300+ AI clients support the protocol
  • Microsoft, Google, Amazon, and OpenAI all adopted MCP
  • Governance transferred to the Linux Foundation (Agentic AI Foundation)

MCP is no longer an Anthropic experiment. It is the industry standard for AI tool integration.

How MCP works

MCP follows a client-server architecture:

┌─────────────────┐      MCP Protocol      ┌─────────────────┐
│    AI Editor    │ ←────────────────────→ │   MCP Server    │
│  (MCP Client)   │     JSON-RPC over      │ (Tool Provider) │
│                 │     stdio or HTTP      │                 │
│  Claude Code    │                        │  GitHub MCP     │
│  Cursor         │                        │  Postgres MCP   │
│  VS Code        │                        │  Playwright MCP │
│  Windsurf       │                        │  Stripe MCP     │
└─────────────────┘                        └─────────────────┘

MCP Client — Your AI editor. It discovers which tools are available and calls them when the AI model decides they are useful.

MCP Server — A small program that exposes tools. A GitHub MCP server exposes tools like "create issue", "list pull requests", "search code". A Postgres MCP server exposes "run query", "list tables", "describe schema".

Transport — How client and server communicate. Two options:

  • stdio — The server runs as a local process. Most common for development.
  • Streamable HTTP — The server runs remotely over HTTP (earlier protocol revisions used HTTP with SSE). Used for hosted services and team deployments.

When you configure an MCP server in your editor, the editor starts the server process and asks it: "What tools do you have?" The server responds with a list. From that point, the AI model can call any of those tools during your conversation.
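The handshake above can be sketched at the wire level. This is a minimal illustration of the JSON-RPC messages a client sends over stdio; the message shapes are simplified, and real clients use an MCP SDK rather than hand-building strings like this:

```python
import json

def rpc(method, params=None, id=None):
    """Build a JSON-RPC 2.0 message as a single line (stdio framing)."""
    msg = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        msg["params"] = params
    if id is not None:
        msg["id"] = id
    return json.dumps(msg)

# 1. The client opens the session.
print(rpc("initialize", {"protocolVersion": "2025-03-26",
                         "capabilities": {},
                         "clientInfo": {"name": "my-editor", "version": "1.0"}}, id=1))

# 2. The client asks: "What tools do you have?"
print(rpc("tools/list", id=2))

# 3. A server reply might look like this (simplified for illustration):
reply = json.loads('{"jsonrpc": "2.0", "id": 2, "result": {"tools": '
                   '[{"name": "create_issue", "description": "Create a GitHub issue"}]}}')
tool_names = [t["name"] for t in reply["result"]["tools"]]
print(tool_names)  # ['create_issue']
```

From that tool list onward, the model can emit tool calls by name, and the client relays them to the server the same way.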

What MCP servers can do

MCP servers expose three types of capabilities:

Tools

Functions the AI can call. Examples:

  • github_create_issue — Creates a GitHub issue
  • postgres_query — Runs a SQL query
  • playwright_navigate — Opens a URL in a browser

Resources

Data the AI can read. Examples:

  • Database schemas
  • File contents
  • API documentation

Prompts

Templates that guide the AI's behavior for specific tasks. Less common but useful for structured workflows.

Most MCP servers focus on tools. A typical server exposes 5-25 tools.
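Concretely, each tool a server advertises is just a name, a human-readable description, and a JSON Schema for its inputs. Here is a sketch of what a hypothetical postgres_query tool descriptor could look like (the exact fields a real Postgres server exposes may differ):

```python
# What a server might return for one tool in its tools/list response.
postgres_query = {
    "name": "postgres_query",
    "description": "Run a read-only SQL query and return the rows.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "The SQL to execute"},
        },
        "required": ["sql"],
    },
}

# The model reads the description and schema, then emits a call like:
call = {"name": "postgres_query", "arguments": {"sql": "SELECT count(*) FROM users"}}
```

The description and schema are all the model sees, which is why well-written tool descriptions matter so much for reliable tool use.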

Real-world example

Say you are building a Next.js app with Supabase and you want AI assistance. Without MCP, you copy-paste database schemas into the chat, manually look up docs, and describe your GitHub issues in text.

With MCP servers configured:

  1. Supabase MCP — The AI reads your database schema directly, runs migrations, manages RLS policies
  2. GitHub MCP — The AI creates issues, reviews PRs, searches your codebase on GitHub
  3. Context7 MCP — The AI pulls up-to-date Next.js and Supabase documentation automatically
  4. Playwright MCP — The AI runs your app in a browser and checks if the UI works

You describe what you want in natural language. The AI figures out which tools to call, in what order, and does the work.

The token cost tradeoff

Every MCP server you add consumes part of your AI model's context window. When the editor asks a server "what tools do you have?", the tool descriptions get injected into the prompt.

A lightweight server like Context7 uses ~1,030 tokens (2 tools). A heavy server like Supabase MCP uses ~12,875 tokens (25 tools). Claude's context window is 200,000 tokens.

This matters. If you load too many servers, you burn context on tool descriptions instead of your actual code. The sweet spot for most developers is 4-6 servers totaling 15,000-30,000 tokens — about 8-15% of your context window.
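The budget math is easy to check yourself. Using the figures above plus two hypothetical estimates for the middle-weight servers (real counts vary by server version):

```python
# Rough context-budget check. The github and filesystem numbers
# are illustrative estimates, not measured values.
CONTEXT_WINDOW = 200_000

servers = {
    "context7": 1_030,    # 2 tools
    "github": 6_500,      # estimate
    "filesystem": 3_000,  # estimate
    "supabase": 12_875,   # 25 tools
}

total = sum(servers.values())
share = total / CONTEXT_WINDOW
print(f"{total} tokens = {share:.1%} of the context window")
# 23405 tokens = 11.7% of the context window
```

A four-server stack like this lands comfortably inside the 15,000-30,000 token sweet spot.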

This is why curated stacks exist. Instead of installing every server that looks useful, pick a stack that matches your workflow and stay within a reasonable token budget.

How to get started

Step 1: Pick your editor

MCP works with these AI editors (and more):

Editor           Config File
Claude Code      ~/.claude.json
Cursor           .cursor/mcp.json
VS Code          .vscode/settings.json
Windsurf         ~/.codeium/windsurf/mcp_config.json
Claude Desktop   ~/Library/Application Support/Claude/claude_desktop_config.json
Continue         ~/.continue/config.yaml

Step 2: Choose your servers

Start small. For most developers, these three servers cover the basics:

  • Filesystem MCP — Read and write files beyond your project (free, no API key)
  • GitHub MCP — Manage repos, issues, and PRs (needs GitHub token)
  • Context7 MCP — Pull up-to-date library documentation (free, no API key)

Step 3: Add the config

For Claude Code, the fastest way is the CLI:

claude mcp add filesystem -- npx -y @modelcontextprotocol/server-filesystem /path/to/allowed/directory
claude mcp add github -e GITHUB_PERSONAL_ACCESS_TOKEN=your_token -- npx -y @modelcontextprotocol/server-github
claude mcp add context7 -- npx -y @upstash/context7-mcp

For other editors, you need a JSON config file. Use the StackMCP config generator to generate the right format for your editor automatically.
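For reference, editors like Cursor and Claude Desktop use an mcpServers object in their JSON config. A minimal sketch covering two of the servers above (check your editor's documentation for the exact schema and file location; see the table in Step 1):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your_token"
      }
    }
  }
}
```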

Step 4: Verify

Restart your editor. The MCP servers should appear in your tool list. Try asking your AI assistant: "What MCP tools do I have available?" It should list the tools from your configured servers.

Common questions

Do MCP servers have access to my code? Only the Filesystem MCP server can read files, and only within directories you explicitly allow. Other servers (GitHub, Postgres, etc.) access external services through their APIs, not your local files.

Are MCP servers secure? MCP servers run with the permissions you grant them. A server with network: true can make HTTP requests. A server with shell: true can run commands. Always review what permissions a server needs before installing it. Stick to official and well-maintained servers.

Do I need to pay for MCP servers? Most MCP servers are free and open source. However, some connect to paid APIs (Stripe, Datadog, etc.) where you pay the API provider, not the MCP server.

Can I use MCP with ChatGPT or other non-Anthropic models? Yes. MCP is an open standard. OpenAI added MCP support to ChatGPT and Codex. Google supports MCP in Gemini. The protocol is model-agnostic.

How many MCP servers should I use? Start with 3-5. Monitor your token usage. If your AI starts losing context or conversations feel "forgetful", you might have too many servers loaded. Focus on servers that directly support your daily workflow.
