StackMCP Blog · 7 min read

Best MCP Servers for AWS Developers in 2026

Connect your AI coding assistant to AWS services. The best MCP servers for S3, Lambda, DynamoDB, CloudFormation, and more.

Tags: mcp, aws, cloud, enterprise

The problem for AWS developers

AWS is massive. Over 200 services, each with its own CLI commands, API surface, and documentation. When you are building on AWS, you spend a significant chunk of time context-switching between your editor, the AWS console, CloudWatch logs, and documentation pages.

MCP servers solve this by bringing AWS-related capabilities directly into your AI coding assistant. Instead of copying CloudFormation templates from docs or manually checking container logs, your AI assistant can interact with infrastructure tools, container runtimes, and monitoring systems inside the same conversation where you write code.

There is no single "AWS MCP server" in the current ecosystem that covers all 200+ services. Instead, you build an AWS-focused stack by combining servers that cover the key layers: containers, orchestration, cloud platforms, and observability.

The servers that matter for AWS workflows

Here is what is available in the StackMCP catalog for AWS-adjacent work:

Server          Tools  Tokens  Official  What it covers
Docker MCP         14   7,210  No        Containers, images, volumes, networks
Kubernetes MCP     12   6,180  No        Pods, deployments, services (EKS-compatible)
Cloudflare MCP     18   9,270  Yes       Workers, KV, R2, D1, DNS
Grafana MCP        43  22,145  No        Dashboards, metrics, alerts, incidents
Datadog MCP        15   7,725  Yes       Metrics, logs, events, monitors
Sentry MCP          8   4,120  Yes       Error tracking, performance monitoring
GitHub MCP         20  10,300  Yes       Repos, PRs, Actions (CI/CD pipelines)

Docker MCP for ECS and local development

If you deploy to ECS, ECS Fargate, or use Docker Compose locally, Docker MCP is essential. It exposes 14 tools for managing containers, images, volumes, and networks. Your AI assistant can build images, inspect running containers, check logs, and stop services without you opening a terminal.

Practical use cases:

  • Ask the AI to build and tag a Docker image for your Lambda container runtime
  • Debug a failing ECS task by inspecting container logs inline
  • Manage local Docker Compose stacks that mirror your AWS environment

Docker MCP requires shell: true permissions, so it can execute Docker commands directly. Token cost is moderate at 7,210 tokens.
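For the first use case above, the assistant runs ordinary docker commands on your behalf. A sketch of what that looks like for a Lambda container image; the image name and the ECR account ID, region, and repository are placeholders:

```shell
# Build a local image for the function (image name assumed)
docker build -t my-function .

# Tag it for an ECR repository (account ID, region, and repo are placeholders)
docker tag my-function 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-function:latest
```

The value of the MCP server is that you describe the outcome and the assistant issues these commands and reads their output back, rather than you typing them yourself.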

Kubernetes MCP for EKS

If your AWS workloads run on EKS, Kubernetes MCP connects your AI assistant to your cluster via kubectl. It handles pods, deployments, services, and configurations. With 1,318 GitHub stars and active maintenance, it is the most adopted K8s MCP server.

This is where MCP shines for AWS developers. Instead of writing kubectl commands from memory, you describe what you want:

  • "Scale the payment-service deployment to 5 replicas"
  • "Show me all pods in the staging namespace that are not ready"
  • "Create a service of type LoadBalancer for the API deployment"

The AI translates your intent into the right kubectl operations. At 6,180 tokens, it is a reasonable addition to any AWS stack.
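As a sketch, the three requests above correspond roughly to the following kubectl invocations; the deployment and namespace names come from the examples, and the service port is an assumption:

```shell
# Scale the payment-service deployment to 5 replicas
kubectl scale deployment payment-service --replicas=5

# List pods in the staging namespace that are not in the Running phase
kubectl get pods -n staging --field-selector=status.phase!=Running

# Expose the API deployment through a LoadBalancer service (port 80 assumed)
kubectl expose deployment api --type=LoadBalancer --port=80
```

Note that "not ready" is approximated here by pod phase; a stricter check would inspect per-container readiness conditions, which is exactly the kind of detail the assistant can handle for you.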

Monitoring: Grafana MCP vs Datadog MCP

Most AWS teams use either Grafana (often with CloudWatch as a data source) or Datadog for observability. Both have MCP servers.

Grafana MCP is the heavier option at 43 tools and 22,145 tokens. It can search dashboards, query data sources, manage alerts, and view incidents. If your Grafana instance pulls from CloudWatch, this gives your AI assistant indirect access to AWS metrics.

Datadog MCP is leaner at 15 tools and 7,725 tokens. It covers dashboards, metrics, logs, events, and monitors. If your AWS infrastructure reports to Datadog, this is the more token-efficient choice.

Pick one, not both. Running both monitoring servers would consume nearly 30,000 tokens on tool descriptions alone. That is 15% of Claude's context window before you even start coding.

GitHub MCP for Infrastructure as Code

GitHub MCP is not AWS-specific, but it is critical for any AWS developer using infrastructure-as-code. If your CloudFormation templates, CDK stacks, or Terraform configs live in GitHub, this server lets your AI assistant create PRs for infrastructure changes, review diffs, and manage CI/CD workflows through GitHub Actions.

At 20 tools and 10,300 tokens, it is a significant investment. But for teams that use GitHub Actions to deploy to AWS, it closes the loop between writing infrastructure code and shipping it.

Building your AWS stack

Here is a practical stack for an AWS developer, optimized for token budget:

Server          Tokens  Why
Docker MCP       7,210  Container management for ECS/local dev
Kubernetes MCP   6,180  EKS cluster management
Datadog MCP      7,725  Monitoring and logs
GitHub MCP      10,300  IaC PRs and CI/CD
Total           31,415  ~16% of context window

This stack uses about 16% of Claude's 200K token context window for tool descriptions, leaving plenty of room for your actual code and conversation. If you do not use EKS, drop Kubernetes MCP and save 6,180 tokens. If you use Grafana instead of Datadog, swap accordingly but be aware of the higher token cost.
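The budget math is easy to check yourself. A quick sketch using the token counts from the table above and Claude's 200K-token context window:

```shell
# Tool-description token costs from the recommended stack
DOCKER=7210; KUBERNETES=6180; DATADOG=7725; GITHUB=10300
CONTEXT=200000  # Claude's context window

TOTAL=$((DOCKER + KUBERNETES + DATADOG + GITHUB))
# Integer math: tenths of a percent of the context window
TENTHS=$((TOTAL * 1000 / CONTEXT))
echo "$TOTAL tokens = $((TENTHS / 10)).$((TENTHS % 10))% of context"
# prints "31415 tokens = 15.7% of context"
```

Run the same arithmetic when you add or swap servers; the tool descriptions are loaded on every conversation, so the cost is paid up front each time.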

For a lighter alternative, drop the monitoring server entirely and add it only when debugging production issues:

Server        Tokens
Docker MCP     7,210
GitHub MCP    10,300
Context7 MCP   1,030
Total         18,540

Context7 MCP is only 1,030 tokens but gives your AI assistant access to up-to-date documentation for AWS SDKs, CDK, and any library you use. At 213K weekly downloads, it is the most popular MCP server for a reason.

Setting up the config

Use the StackMCP config generator to create the JSON config for your editor. It generates the right format for Claude Code, Cursor, VS Code, Windsurf, Claude Desktop, and Continue.

For Claude Code, you can also add servers directly from the CLI:

claude mcp add docker -- npx -y docker-mcp
claude mcp add kubernetes -- npx -y mcp-server-kubernetes
claude mcp add datadog -e DD_API_KEY=your_key -e DD_APP_KEY=your_app_key -- npx -y datadog-mcp-server
claude mcp add github -e GITHUB_PERSONAL_ACCESS_TOKEN=your_token -- npx -y @modelcontextprotocol/server-github

For detailed setup instructions, see How to set up MCP in Claude Code or the guide for Cursor.
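If your editor takes a JSON config rather than the CLI, the four commands above translate to the standard mcpServers shape that Claude Desktop and Cursor expect. The package names and environment variables are the same ones used in the CLI examples; replace the placeholder values with your own credentials:

```json
{
  "mcpServers": {
    "docker": {
      "command": "npx",
      "args": ["-y", "docker-mcp"]
    },
    "kubernetes": {
      "command": "npx",
      "args": ["-y", "mcp-server-kubernetes"]
    },
    "datadog": {
      "command": "npx",
      "args": ["-y", "datadog-mcp-server"],
      "env": {
        "DD_API_KEY": "your_key",
        "DD_APP_KEY": "your_app_key"
      }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your_token"
      }
    }
  }
}
```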

What is missing from the ecosystem

The biggest gap for AWS developers today is the lack of a comprehensive, official AWS MCP server. There is no single server that wraps the AWS SDK to give your AI assistant direct access to S3 buckets, Lambda functions, DynamoDB tables, or CloudFormation stacks.

This will likely change. AWS has adopted MCP as a standard, and the community is building rapidly. In the meantime, the combination of Docker, Kubernetes, monitoring, and GitHub servers covers the most common AWS workflows effectively.

For infrastructure-as-code specifically, you can work around the gap: keep your CDK or CloudFormation templates in your project, let the AI assistant edit them directly through the editor's built-in file access, then use GitHub MCP to create the PR and trigger deployment.
