Understanding MCP

The Model Context Protocol (MCP) is an open, JSON-RPC-based standard that serves as a universal adapter between large language models (LLMs) and external data sources or capabilities.

Kubernetes MCP Server

Large Language Models (LLMs) are increasingly being used to manage infrastructure through natural language interfaces. One area where this is becoming more common is Kubernetes, where clusters can be exposed to an LLM as a set of tools the model can call on a user's behalf.

To make this possible, the Kubernetes environment needs to be wrapped in a way that LLMs can understand. This is where the Model Context Protocol (MCP) comes in. MCP provides a standard interface that LLMs use to discover and interact with tools, including those that control Kubernetes.

This article outlines best practices for implementing a Kubernetes MCP server. It covers setup, security, multi-cluster management, and common troubleshooting patterns.

What is a Kubernetes MCP server?

A Kubernetes MCP server is a bridge that connects AI assistants like Claude Desktop to your Kubernetes clusters. It translates natural language requests into Kubernetes API calls and returns the results in a structured format that the AI can understand.

The Model Context Protocol (MCP) defines how tools are described and executed. Each tool corresponds to a specific Kubernetes operation, like listing pods or scaling deployments. The tools are described using JSON Schema, which tells the AI what parameters each tool expects.
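For instance, a pod-listing tool might be advertised to the client roughly like this (the tool name and namespace parameter match the workflow examples later in this article; exact descriptions and schemas vary between server implementations):

{
  "name": "pods_list_in_namespace",
  "description": "List all pods in the given namespace",
  "inputSchema": {
    "type": "object",
    "properties": {
      "namespace": { "type": "string" }
    },
    "required": ["namespace"]
  }
}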

When you ask an AI assistant to "list all pods in the default namespace," it selects the appropriate tool from the MCP server and passes the necessary parameters. The server then makes the corresponding Kubernetes API call and returns the results.
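On the wire, that request becomes a JSON-RPC tools/call message along these lines (the envelope fields come from the MCP specification; the tool name matches the sketch above):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "pods_list_in_namespace",
    "arguments": { "namespace": "default" }
  }
}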

Unlike using kubectl directly, an MCP server provides a controlled interface with defined operations, making it safer and more predictable for AI interactions.

AI Assistant → MCP Protocol → Kubernetes MCP Server → Kubernetes API Server

This architecture lets AI assistants perform Kubernetes operations without direct access to your cluster or command line.

Core requirements for setting up a Kubernetes MCP server

Before setting up a Kubernetes MCP server, you'll need several components in place:

  • Kubernetes cluster: A running cluster that the MCP server can connect to

  • kubectl: Installed and configured to verify cluster access before setting up MCP

  • kubeconfig file: Contains cluster connection details and authentication credentials

  • AI assistant: An assistant that supports the MCP protocol, like Claude Desktop

For hardware, a small virtual machine with 2 CPU cores and 4GB of RAM is usually enough for testing. Production environments might need more resources depending on how many requests you expect.

The MCP server typically runs as a container, so you'll also need Docker or another container runtime. The server reads the kubeconfig file to authenticate with your Kubernetes cluster, so this file needs to be mounted into the container.
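Before going further, it's worth confirming that cluster access and the container runtime both work, for example:

kubectl cluster-info        # verify the API server is reachable with your current kubeconfig
kubectl get nodes           # confirm your credentials can query the cluster
docker version              # confirm a container runtime is available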

Security is important when setting up an MCP server. The kubeconfig file contains sensitive credentials, so it should be mounted as read-only and the server should run with minimal permissions.

Step-by-step setup guide

Setting up a Kubernetes MCP server involves preparing your authentication, running the server container, and connecting it to your AI assistant. Here's how to do it:

1. Prepare your kubeconfig

First, create a service account with appropriate permissions:

kubectl create serviceaccount mcp-server -n default
kubectl create clusterrolebinding mcp-server-binding \
  --clusterrole=view \
  --serviceaccount=default:mcp-server

This creates a service account with read-only access to your cluster. For production, you might want to create a more restrictive role.
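To confirm the binding behaves as expected, you can impersonate the service account and check one read and one write operation; the expected answers are yes and no, respectively:

kubectl auth can-i list pods --as=system:serviceaccount:default:mcp-server          # expected: yes
kubectl auth can-i delete deployments --as=system:serviceaccount:default:mcp-server # expected: no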

Extract the service account's token and save it to a file:

SECRET_NAME=$(kubectl get sa mcp-server -n default -o jsonpath="{.secrets[0].name}")
kubectl get secret $SECRET_NAME -n default -o jsonpath="{.data.token}" | \
  base64 --decode > ./token
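Note that on Kubernetes 1.24 and later, token Secrets are no longer created automatically for service accounts, so the lookup above may return nothing. In that case you can request a token directly (the duration here is just an example):

kubectl create token mcp-server -n default --duration=24h > ./token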

Create a kubeconfig file that uses this token:

kubectl config set-credentials mcp-server --token=$(cat ./token)
kubectl config set-context mcp-server \
  --cluster=$(kubectl config view --minify -o jsonpath='{.clusters[0].name}') \
  --user=mcp-server
kubectl config use-context mcp-server
kubectl config view --minify --flatten > ./kubeconfig

This writes a standalone kubeconfig to ./kubeconfig, which is the file the container mounts in the next step.

2. Run the MCP server container

Pull and run the Kubernetes MCP server container:

docker run --rm \
  -v $PWD/kubeconfig:/kubeconfig:ro \
  -e KUBECONFIG=/kubeconfig \
  -p 8080:8080 \
  ghcr.io/manusa/kubernetes-mcp-server:latest \
  --kubeconfig /kubeconfig \
  --sse-port 8080

This command:

  • Mounts your kubeconfig file as read-only

  • Sets the KUBECONFIG environment variable

  • Exposes port 8080 for the server

  • Starts the server with the specified kubeconfig

3. Connect to Claude Desktop

To use the MCP server with Claude Desktop, edit your Claude Desktop configuration file:

{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["-y", "kubernetes-mcp-server@latest"]
    }
  }
}

This tells Claude Desktop to start the MCP server using npx when needed. You can also point it to your already running server if you prefer.

4. Test the connection

In Claude Desktop, try a simple command like:

"List all pods in the default namespace"

If everything is set up correctly, Claude should use the MCP server to retrieve and display the pods.

Security best practices

When implementing a Kubernetes MCP server, security is crucial since the server has access to your Kubernetes cluster. Here are key security practices:

Use minimal RBAC permissions

The service account used by the MCP server should have only the permissions it needs. For read-only operations, use the built-in "view" ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mcp-server-readonly
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: mcp-server
  namespace: default

If you need write access for certain operations, create a custom role with only the necessary permissions.
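As a rough sketch, a namespaced Role like the one below allows read access plus scaling deployments and nothing else; treat the resources and verbs as placeholders to adjust for the operations you actually expect the assistant to perform, and bind it to the mcp-server service account with a RoleBinding rather than the ClusterRoleBinding above:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mcp-server-limited   # example name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments/scale"]
  verbs: ["get", "update", "patch"]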

Protect the kubeconfig

The kubeconfig file contains credentials for accessing your cluster. To protect it:

  • Mount it as read-only in the container

  • Don't include it in your Docker image

  • Rotate credentials regularly

  • Use a dedicated service account for the MCP server

Enable logging and auditing

Track all operations performed by the MCP server by enabling logging. This helps you monitor what the AI assistant is doing and troubleshoot issues.

Basic logging includes:

  • Which tools were called

  • What parameters were passed

  • Whether the operation succeeded or failed

  • Which namespace and context were used

For more advanced monitoring, you can integrate with systems like Prometheus and Grafana to track metrics like request rate and error count.

Managing multiple Kubernetes clusters

An MCP server can work with multiple Kubernetes clusters, which is useful when you have separate environments like development, staging, and production.

Using contexts in a single kubeconfig

You can include multiple clusters in one kubeconfig file, each with its own context:

contexts:
- name: dev-cluster
  context:
    cluster: development
    user: mcp-server
    namespace: default
- name: prod-cluster
  context:
    cluster: production
    user: mcp-server
    namespace: default
current-context: dev-cluster

The MCP server uses the current-context by default, but some implementations allow switching contexts using a specific tool.
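Because the server follows current-context unless a context is specified, it can help to check and switch the active context in the kubeconfig you mount before starting it, for example:

kubectl config get-contexts --kubeconfig ./kubeconfig               # lists dev-cluster and prod-cluster
kubectl config use-context prod-cluster --kubeconfig ./kubeconfig   # make prod-cluster the default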

Running separate MCP servers

Another approach is to run a separate MCP server for each cluster:

docker run -v $PWD/kubeconfig-dev:/kubeconfig:ro -p 8081:8080 kubernetes-mcp-server
docker run -v $PWD/kubeconfig-prod:/kubeconfig:ro -p 8082:8080 kubernetes-mcp-server

This provides better isolation between environments but requires managing multiple servers.
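If you'd rather not start the containers by hand, a small Compose file along these lines (reusing the image and flags from the setup section; the kubeconfig file names are illustrative) keeps both servers defined in one place:

services:
  mcp-dev:
    image: ghcr.io/manusa/kubernetes-mcp-server:latest
    command: ["--kubeconfig", "/kubeconfig", "--sse-port", "8080"]
    volumes:
      - ./kubeconfig-dev:/kubeconfig:ro   # dev credentials, mounted read-only
    ports:
      - "8081:8080"
  mcp-prod:
    image: ghcr.io/manusa/kubernetes-mcp-server:latest
    command: ["--kubeconfig", "/kubeconfig", "--sse-port", "8080"]
    volumes:
      - ./kubeconfig-prod:/kubeconfig:ro  # prod credentials, mounted read-only
    ports:
      - "8082:8080"

Start both servers with docker compose up -d.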

When using Claude Desktop with multiple MCP servers, you can configure each one with a different name:

{
  "mcpServers": {
    "kubernetes-dev": {
      "command": "npx",
      "args": ["kubernetes-mcp-server", "--kubeconfig", "/path/to/kubeconfig-dev"]
    },
    "kubernetes-prod": {
      "command": "npx",
      "args": ["kubernetes-mcp-server", "--kubeconfig", "/path/to/kubeconfig-prod"]
    }
  }
}

Then you can specify which environment to use in your prompts: "Using the dev cluster, list all pods in the default namespace."

Troubleshooting common issues

When using a Kubernetes MCP server, you might encounter some common issues. Here's how to diagnose and fix them:

Connection problems

If the MCP server can't connect to your Kubernetes cluster, check:

  • Is the kubeconfig file mounted correctly?

  • Are the cluster credentials valid?

  • Is the cluster reachable from where the MCP server is running?

Test with kubectl using the same kubeconfig:

kubectl --kubeconfig=/path/to/kubeconfig get pods

Permission errors

If operations fail with "forbidden" errors, the service account doesn't have the necessary permissions. Review the RBAC configuration and add any missing permissions.

For example, to allow listing pods:

rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"

AI assistant integration issues

If the AI assistant can't find or use the MCP server:

  • Check that the server is running and accessible

  • Verify the configuration in Claude Desktop or other AI tool

  • Make sure the server exposes the tools the AI is trying to use

You can test the server directly using curl:

curl http://localhost:8080/sse   # port from the docker run above; the exact path (for example /sse or /mcp) depends on the server version

If the server is reachable, the request should succeed, and an MCP client connected to this endpoint can then list and call the available tools.

Using the Kubernetes MCP server with AI tools

Once your MCP server is set up, you can use it with AI assistants to manage your Kubernetes clusters using natural language.

Claude Desktop examples

Claude Desktop works well with the Kubernetes MCP server. Here are some examples of what you can ask:

  • "List all pods in the kube-system namespace"

  • "Show me the logs for the nginx pod"

  • "Scale the frontend deployment to 3 replicas"

  • "Create a new namespace called 'testing'"

Claude will translate these requests into tool calls to the MCP server, which then performs the operations on your cluster.

Goose CLI integration

Goose CLI is another tool that works with MCP servers. It's a command-line interface that can run automated workflows.

To use Goose CLI with the Kubernetes MCP server, add it to your config.yaml:

extensions:
  kubernetes:
    command: npx
    args: ["-y", "kubernetes-mcp-server@latest"]

Then you can create workflows that interact with your Kubernetes clusters:

run:
  - tool: pods_list_in_namespace
    input:
      namespace: default
  - tool: pods_log
    input:
      name: nginx
      namespace: default

This workflow lists all pods in the default namespace, then gets logs from the nginx pod.

Implementing a Kubernetes MCP Server

Implementing a Kubernetes MCP server creates a bridge between AI assistants and your Kubernetes clusters. This enables natural language interaction with your infrastructure while maintaining security and control.

Key takeaways from this guide:

  • Security first: Use minimal RBAC permissions and protect your kubeconfig

  • Start simple: Begin with read-only access before enabling write operations

  • Test thoroughly: Verify that the MCP server works as expected before using it in production

  • Monitor activity: Enable logging to track what the AI assistant is doing

For organizations managing complex APIs alongside Kubernetes, tools like Stainless can help generate high-quality SDKs and MCP servers from OpenAPI specifications. This approach ensures consistent interfaces across different interaction methods.

As MCP technology evolves, we'll likely see more integration between AI assistants and infrastructure management tools, making Kubernetes more accessible to teams of all skill levels.

FAQs about Kubernetes MCP server

How does a Kubernetes MCP server differ from using kubectl directly?

A Kubernetes MCP server provides a structured interface with predefined operations that an AI can understand and use. Unlike kubectl, which offers complete control over a cluster, an MCP server limits operations to specific tools with defined parameters, making it safer for AI interaction.

Can I use a Kubernetes MCP server with AI assistants other than Claude?

Yes, any AI assistant that supports the Model Context Protocol (MCP) can work with a Kubernetes MCP server. This includes tools like Goose CLI and any other MCP-compatible clients, though configuration details may vary between platforms.

What are the security implications of giving an AI assistant access to my Kubernetes cluster?

The main security consideration is that the AI assistant can perform any operation allowed by the service account's permissions. By using proper RBAC controls and read-only access where possible, you can limit what the AI can do and prevent potentially harmful operations.
