Kubectl MCP Tool

A Model Context Protocol (MCP) server for Kubernetes that enables AI assistants like Claude, Cursor, and others to interact with Kubernetes clusters through natural language.


🎥 Live Demo - Watch kubectl-mcp-tool in Action with Claude!

🎥 Live Demo - Watch kubectl-mcp-tool in Action with Cursor!

🎥 Live Demo - Watch kubectl-mcp-tool in Action with Windsurf!

Features

Core Kubernetes Operations

  • Connect to a Kubernetes cluster
  • List and manage pods, services, deployments, and nodes
  • Create, delete, and describe pods and other resources
  • Get pod logs and Kubernetes events
  • Support for Helm v3 operations (installation, upgrades, uninstallation)
  • kubectl explain and api-resources support
  • Set a default namespace for subsequent commands (persisted in memory)
  • Port forward to pods
  • Scale deployments and statefulsets
  • Execute commands in containers
  • Manage ConfigMaps and Secrets
  • Rollback deployments to previous versions
  • Ingress and NetworkPolicy management
  • Context switching between clusters

Natural Language Processing

  • Process natural language queries for kubectl operations
  • Context-aware commands with memory of previous operations
  • Human-friendly explanations of Kubernetes concepts
  • Intelligent command construction from intent
  • Fallback to kubectl when specialized tools aren't available
  • Mock data support for offline/testing scenarios
  • Namespace-aware query handling
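
The intent-based command construction described above can be sketched roughly as follows. This is an illustrative model only, not the tool's actual `natural_language.py` implementation; the pattern table and function names are hypothetical:

```python
import shlex

# Hypothetical intent table: phrases mapped to base kubectl commands.
INTENT_PATTERNS = {
    "list pods": ["kubectl", "get", "pods"],
    "list services": ["kubectl", "get", "services"],
    "list deployments": ["kubectl", "get", "deployments"],
}

def build_command(query: str, namespace: str = "default") -> str:
    """Construct a kubectl command string from a natural-language query."""
    q = query.lower()
    for phrase, base in INTENT_PATTERNS.items():
        if phrase in q:
            # Namespace-aware: append the remembered namespace from context.
            return shlex.join(base + ["-n", namespace])
    # Fallback when no specialized pattern matches.
    return "kubectl --help"

print(build_command("Please list pods for me", namespace="kube-system"))
# → kubectl get pods -n kube-system
```

The remembered `namespace` argument mirrors the "choose namespace for next commands" feature: the caller carries it between queries instead of requiring it in every prompt.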

Monitoring

  • Cluster health monitoring
  • Resource utilization tracking
  • Pod status and health checks
  • Event monitoring and alerting
  • Node capacity and allocation analysis
  • Historical performance tracking
  • Resource usage statistics via kubectl top
  • Container readiness and liveness tracking

Security

  • RBAC validation, verification, and permissions auditing
  • Security context auditing and validation
  • Secure connections to the Kubernetes API
  • Credentials management
  • Network policy assessment
  • Container security scanning
  • Security best practices enforcement
  • Role and ClusterRole management
  • ServiceAccount creation and binding
  • PodSecurityPolicy analysis

Diagnostics

  • Cluster diagnostics and troubleshooting
  • Configuration validation
  • Error analysis and recovery suggestions
  • Connection status monitoring
  • Log analysis and pattern detection
  • Resource constraint identification
  • Pod health check diagnostics
  • Common error pattern identification
  • Resource validation for misconfigurations
  • Detailed liveness and readiness probe validation

Advanced Features

  • Support for multiple transport protocols (stdio, SSE)
  • Integration with multiple AI assistants
  • Extensible tool framework
  • Custom resource definition support
  • Cross-namespace operations
  • Batch operations on multiple resources
  • Intelligent resource relationship mapping
  • Error explanation with recovery suggestions
  • Volume management and identification

Architecture

Model Context Protocol (MCP) Integration

The Kubectl MCP Tool implements the Model Context Protocol (MCP), enabling AI assistants to interact with Kubernetes clusters through a standardized interface. The architecture consists of:

  1. MCP Server: A compliant server that handles requests from MCP clients (AI assistants)
  2. Tools Registry: Registers Kubernetes operations as MCP tools with schemas
  3. Transport Layer: Supports stdio, SSE, and HTTP transport methods
  4. Core Operations: Translates tool calls to Kubernetes API operations
  5. Response Formatter: Converts Kubernetes responses to MCP-compliant responses
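
Conceptually, the registry-and-dispatch portion of this architecture can be sketched as below. This is an illustrative model, not the actual FastMCP-based implementation; the `ToolRegistry` class and response fields are assumptions for the sketch:

```python
import json
from typing import Any, Callable, Dict

class ToolRegistry:
    """Hypothetical sketch of an MCP tools registry: each Kubernetes
    operation is registered under a name together with an input schema."""

    def __init__(self) -> None:
        self._tools: Dict[str, Dict[str, Any]] = {}

    def register(self, name: str, schema: Dict[str, Any],
                 handler: Callable[..., Any]) -> None:
        self._tools[name] = {"schema": schema, "handler": handler}

    def call(self, name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
        # Dispatch a tool call and format the result as a structured response.
        if name not in self._tools:
            return {"success": False, "error": f"unknown tool: {name}"}
        result = self._tools[name]["handler"](**arguments)
        return {"success": True, "result": result}

registry = ToolRegistry()
registry.register(
    "get_pods",
    {"type": "object", "properties": {"namespace": {"type": "string"}}},
    lambda namespace="default": [f"pod listing for namespace {namespace}"],
)
print(json.dumps(registry.call("get_pods", {"namespace": "default"})))
```

In the real server, the handler would invoke the Kubernetes API and the formatted dictionary would be serialized into an MCP-compliant tool response.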

Request Flow

An MCP client sends a request over the transport layer; the server dispatches it to the registered tool, translates the call into a Kubernetes API operation, and returns a formatted response to the client.

Dual Mode Operation

The tool operates in two modes:

  1. CLI Mode: Direct command-line interface for executing Kubernetes operations
  2. Server Mode: Running as an MCP server to handle requests from AI assistants

Installation

For detailed installation instructions, please see the installation guide (docs/INSTALLATION.md).

You can install kubectl-mcp-tool directly from PyPI:

pip install kubectl-mcp-tool

For a specific version:

pip install kubectl-mcp-tool==1.1.1

The package is available on PyPI: https://pypi.org/project/kubectl-mcp-tool/1.1.1/

Prerequisites

  • Python 3.9+
  • kubectl CLI installed and configured
  • Access to a Kubernetes cluster
  • pip (Python package manager)

Global Installation

# Install latest version from PyPI
pip install kubectl-mcp-tool

# Or install development version from GitHub
pip install git+https://github.com/rohitg00/kubectl-mcp-server.git

Local Development Installation

# Clone the repository
git clone https://github.com/rohitg00/kubectl-mcp-server.git
cd kubectl-mcp-server

# Install in development mode
pip install -e .

Verifying Installation

After installation, verify the tool is working correctly:

# Check CLI mode
kubectl-mcp --help

Note: This tool is designed to work as an MCP server that AI assistants connect to, not as a direct kubectl replacement. The primary command available is kubectl-mcp serve which starts the MCP server.

Usage with AI Assistants

Using the MCP Server

The MCP Server (kubectl_mcp_tool.mcp_server) is a robust implementation built on the FastMCP SDK that provides enhanced compatibility across different AI assistants:

Note: If you encounter any errors with the MCP Server implementation, you can fall back to using the minimal wrapper by replacing kubectl_mcp_tool.mcp_server with kubectl_mcp_tool.minimal_wrapper in your configuration. The minimal wrapper provides basic capabilities with simpler implementation.
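
For reference, a fallback configuration using the minimal wrapper differs only in the module path (the paths shown are placeholders):

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "python",
      "args": ["-m", "kubectl_mcp_tool.minimal_wrapper"],
      "env": {
        "KUBECONFIG": "/path/to/your/.kube/config"
      }
    }
  }
}
```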

  1. Direct Configuration

    {
      "mcpServers": {
        "kubernetes": {
          "command": "python",
          "args": ["-m", "kubectl_mcp_tool.mcp_server"],
          "env": {
            "KUBECONFIG": "/path/to/your/.kube/config",
            "PATH": "/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin",
            "MCP_LOG_FILE": "/path/to/logs/debug.log",
            "MCP_DEBUG": "1"
          }
        }
      }
    }
    
  2. Key Environment Variables

    • MCP_LOG_FILE: Path to log file (recommended to avoid stdout pollution)
    • MCP_DEBUG: Set to "1" for verbose logging
    • MCP_TEST_MOCK_MODE: Set to "1" to use mock data instead of real cluster
    • KUBECONFIG: Path to your Kubernetes config file
    • KUBECTL_MCP_LOG_LEVEL: Set to "DEBUG", "INFO", "WARNING", or "ERROR"
  3. Testing the MCP Server

     You can test whether the server is working correctly with:

    python -m kubectl_mcp_tool.simple_ping
    

    This will attempt to connect to the server and execute a ping command.

    Alternatively, you can directly run the server with:

    python -m kubectl_mcp_tool
    

Claude Desktop

Add the following to your Claude Desktop configuration file, claude_desktop_config.json (macOS: ~/Library/Application Support/Claude/, Windows: %APPDATA%\Claude\):

{
  "mcpServers": {
    "kubernetes": {
      "command": "python",
      "args": ["-m", "kubectl_mcp_tool.mcp_server"],
      "env": {
        "KUBECONFIG": "/path/to/your/.kube/config"
      }
    }
  }
}

Cursor AI

Add the following to your Cursor AI settings under MCP by adding a new global MCP server:

{
  "mcpServers": {
    "kubernetes": {
      "command": "python",
      "args": ["-m", "kubectl_mcp_tool.mcp_server"],
      "env": {
        "KUBECONFIG": "/path/to/your/.kube/config",
        "PATH": "/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/homebrew/bin"
      }
    }
  }
}

Save this configuration to ~/.cursor/mcp.json for global settings.

Note: Replace /path/to/your/.kube/config with the actual path to your kubeconfig file. On most systems, this is ~/.kube/config.

Windsurf

Add the following to your Windsurf configuration at ~/.config/windsurf/mcp.json (Windows: %APPDATA%\WindSurf\mcp.json):

{
  "mcpServers": {
    "kubernetes": {
      "command": "python",
      "args": ["-m", "kubectl_mcp_tool.mcp_server"],
      "env": {
        "KUBECONFIG": "/path/to/your/.kube/config"
      }
    }
  }
}

Automatic Configuration

For automatic configuration of all supported AI assistants, run the provided installation script:

bash install.sh

This script will:

  1. Install the required dependencies
  2. Create configuration files for Claude, Cursor, and Windsurf
  3. Set up the correct paths and environment variables
  4. Test your Kubernetes connection

Prerequisites

  1. kubectl installed and in your PATH
  2. A valid kubeconfig file
  3. Access to a Kubernetes cluster
  4. Helm v3 (optional, for Helm operations)

Examples

List Pods

List all pods in the default namespace

Deploy an Application

Create a deployment named nginx-test with 3 replicas using the nginx:latest image

Check Pod Logs

Get logs from the nginx-test pod

Port Forwarding

Forward local port 8080 to port 80 on the nginx-test pod

Development

# Clone the repository
git clone https://github.com/rohitg00/kubectl-mcp-server.git
cd kubectl-mcp-server

# Install dependencies
pip install -r requirements.txt

# Install in development mode
pip install -e .

# Run the MCP server
python -m kubectl_mcp_tool

# Run tests
python -m python_tests.run_mcp_tests

Project Structure

├── kubectl_mcp_tool/         # Main package
│   ├── __init__.py           # Package initialization
│   ├── __main__.py           # Package entry point
│   ├── cli.py                # CLI entry point
│   ├── mcp_server.py         # MCP server implementation
│   ├── mcp_kubectl_tool.py   # Main kubectl MCP tool implementation
│   ├── natural_language.py   # Natural language processing
│   ├── diagnostics.py        # Diagnostics functionality
│   ├── core/                 # Core functionality 
│   ├── security/             # Security operations
│   ├── monitoring/           # Monitoring functionality
│   ├── utils/                # Utility functions
│   └── cli/                  # CLI functionality components
├── python_tests/             # Test suite
│   ├── run_mcp_tests.py      # Test runner script
│   ├── mcp_client_simulator.py # MCP client simulator for mock testing
│   ├── test_utils.py         # Test utilities
│   ├── test_mcp_core.py      # Core MCP tests
│   ├── test_mcp_security.py  # Security tests
│   ├── test_mcp_monitoring.py # Monitoring tests
│   ├── test_mcp_nlp.py       # Natural language tests
│   ├── test_mcp_diagnostics.py # Diagnostics tests
│   └── mcp_test_strategy.md  # Test strategy documentation
├── docs/                     # Documentation
│   ├── README.md             # Documentation overview
│   ├── INSTALLATION.md       # Installation guide
│   ├── integration_guide.md  # Integration guide
│   ├── cursor/               # Cursor integration docs
│   ├── windsurf/             # Windsurf integration docs
│   └── claude/               # Claude integration docs
├── compatible_servers/       # Compatible MCP server implementations
│   ├── cursor/               # Cursor-compatible servers
│   ├── windsurf/             # Windsurf-compatible servers
│   ├── minimal/              # Minimal server implementations
│   └── generic/              # Generic MCP servers
├── requirements.txt          # Python dependencies
├── setup.py                  # Package setup script
├── pyproject.toml            # Project configuration
├── MANIFEST.in               # Package manifest
├── mcp_config.json           # Sample MCP configuration
├── run_server.py             # Server runner script
├── LICENSE                   # MIT License
├── CHANGELOG.md              # Version history
├── .gitignore                # Git ignore file
├── install.sh                # Installation script
├── publish.sh                # PyPI publishing script
└── start_mcp_server.sh       # Server startup script

MCP Server Tools

The MCP Server implementation (kubectl_mcp_tool.mcp_server) provides a comprehensive set of 26 tools that can be used by AI assistants to interact with Kubernetes clusters:

Core Kubernetes Resource Management

  • get_pods - Get all pods in the specified namespace
  • get_namespaces - Get all Kubernetes namespaces
  • get_services - Get all services in the specified namespace
  • get_nodes - Get all nodes in the cluster
  • get_configmaps - Get all ConfigMaps in the specified namespace
  • get_secrets - Get all Secrets in the specified namespace
  • get_deployments - Get all deployments in the specified namespace
  • create_deployment - Create a new deployment
  • delete_resource - Delete a Kubernetes resource
  • get_api_resources - List Kubernetes API resources
  • kubectl_explain - Explain a Kubernetes resource using kubectl explain

Helm Operations

  • install_helm_chart - Install a Helm chart
  • upgrade_helm_chart - Upgrade a Helm release
  • uninstall_helm_chart - Uninstall a Helm release

Security Operations

  • get_rbac_roles - Get all RBAC roles in the specified namespace
  • get_cluster_roles - Get all cluster-wide RBAC roles

Monitoring and Diagnostics

  • get_events - Get all events in the specified namespace
  • get_resource_usage - Get resource usage statistics via kubectl top
  • health_check - Check cluster health by pinging the API server
  • get_pod_events - Get events for a specific pod
  • check_pod_health - Check the health status of a pod
  • get_logs - Get logs from a pod

Cluster Management

  • switch_context - Switch current kubeconfig context
  • get_current_context - Get current kubeconfig context
  • port_forward - Forward local port to pod port
  • scale_deployment - Scale a deployment

All tools return structured data with success/error information and relevant details, making it easy for AI assistants to process and understand the responses.
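
A plausible shape for such a structured response, and how a client might branch on it, is sketched below. The exact field names (`success`, `tool`, `result`, `error`) are assumptions for illustration, not the tool's documented schema:

```python
# Assumed response shape; the actual field names may differ.
response = {
    "success": True,
    "tool": "check_pod_health",
    "result": {"pod": "nginx-test", "ready": True, "restarts": 0},
}

def summarize(resp: dict) -> str:
    """Render a tool response for display, checking the success flag first."""
    if not resp.get("success"):
        return f"Tool {resp.get('tool', '?')} failed: {resp.get('error', 'unknown error')}"
    r = resp["result"]
    return f"Pod {r['pod']} ready={r['ready']} restarts={r['restarts']}"

print(summarize(response))
# → Pod nginx-test ready=True restarts=0
```

Checking the success flag before reading result fields is what lets an assistant surface recovery suggestions on error instead of failing on a missing key.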

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.
