Kontxt MCP Server

A Model Context Protocol (MCP) server designed to facilitate codebase indexing. It connects to a local code repository, providing a get_codebase_context tool for AI clients like Cursor and Claude Desktop. The server supports SSE and stdio transport protocols and allows user-attached files/docs for targeted analysis. It tracks token usage and provides a detailed analysis of API consumption, with configurable token limits for context generation.
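As a rough illustration, the sketch below shows how a get_codebase_context tool could be exposed with the official MCP Python SDK (FastMCP). It is not the actual Kontxt implementation; the placeholder body stands in for the Gemini-backed analysis, and the server name string is an assumption.

  # Hypothetical sketch of exposing get_codebase_context with the MCP Python SDK.
  from mcp.server.fastmcp import FastMCP

  mcp = FastMCP("kontxt")


  @mcp.tool()
  def get_codebase_context(query: str) -> str:
      """Return repository context relevant to the query.

      The real server would inspect the repository and ask Gemini to distill
      the findings; this placeholder only echoes the query.
      """
      return f"Context for: {query}"


  if __name__ == "__main__":
      # FastMCP supports both transports mentioned in this README ("sse"/"stdio").
      mcp.run(transport="sse")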

Features

  • Connects to user-specified local code repositories
  • Supports tools such as list_repository_structure, read_files, and grep_codebase for code understanding (see the sketch after this list)
  • Supports SSE and stdio transport protocols
  • Tracks token usage for API consumption analysis
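
A helper such as list_repository_structure could, for example, be built on the tree command installed during setup. The following is a hypothetical sketch, not the server's actual code; the function signature and the tree flags used are assumptions.

  # Hypothetical sketch of a list_repository_structure helper built on the
  # external `tree` command (installed in Setup step 4).
  import subprocess
  from pathlib import Path


  def list_repository_structure(repo_path: str, max_depth: int = 3) -> str:
      """Return a directory tree for the repository, ignoring .git."""
      repo = Path(repo_path).expanduser().resolve()
      result = subprocess.run(
          ["tree", "-L", str(max_depth), "-I", ".git", str(repo)],
          capture_output=True,
          text=True,
          check=True,
      )
      return result.stdout


  if __name__ == "__main__":
      print(list_repository_structure("."))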

Setup

  1. Clone the server code
  2. Create and activate a Python virtual environment
  3. Install dependencies from requirements.txt
  4. Install the tree command
  5. Configure the Google Gemini API key in a .env file or via command-line argument
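
For step 5, a minimal sketch of how the key lookup might work is shown below; the GEMINI_API_KEY variable name and the use of python-dotenv are assumptions, not confirmed details of the server.

  # Sketch of resolving the Gemini API key from the CLI or a local .env file.
  # The GEMINI_API_KEY name and python-dotenv dependency are assumptions.
  import os

  from dotenv import load_dotenv


  def resolve_api_key(cli_value: str | None = None) -> str:
      """Prefer the --gemini-api-key CLI value, then fall back to the .env file."""
      load_dotenv()  # loads key=value pairs from .env into os.environ
      key = cli_value or os.getenv("GEMINI_API_KEY")
      if not key:
          raise SystemExit("No Gemini API key found in CLI args or .env")
      return key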

Running as a Standalone Server (Recommended)

The server can run standalone in SSE mode (recommended), or it can be launched and managed by the client over stdio transport.
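
Once the server is running in SSE mode, any MCP client can connect to it. The sketch below uses the MCP Python SDK's SSE client; the URL, port, and /sse path are assumptions and should match the --host and --port values you pass to the server.

  # Sketch of a client connecting to the standalone server over SSE.
  import asyncio

  from mcp import ClientSession
  from mcp.client.sse import sse_client


  async def main() -> None:
      # Host, port, and the /sse path are assumptions.
      async with sse_client("http://127.0.0.1:8080/sse") as (read, write):
          async with ClientSession(read, write) as session:
              await session.initialize()
              tools = await session.list_tools()
              print([tool.name for tool in tools.tools])


  asyncio.run(main())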

Command Line Arguments

  • --repo-path PATH: Required path to code repository
  • --gemini-api-key KEY: Google Gemini API Key
  • --token-threshold NUM: Target maximum token count
  • --gemini-model NAME: Specific Gemini model to use
  • --transport {stdio,sse}: Transport protocol
  • --host HOST: Host address for SSE server
  • --port PORT: Port for SSE server
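
As a rough sketch, the options above might be wired up with argparse as follows; the defaults shown for transport, host, and port are assumptions, not the server's actual values.

  # Hypothetical argparse wiring for the options listed above.
  import argparse


  def parse_args() -> argparse.Namespace:
      parser = argparse.ArgumentParser(description="Kontxt MCP server")
      parser.add_argument("--repo-path", required=True,
                          help="Path to the code repository")
      parser.add_argument("--gemini-api-key",
                          help="Google Gemini API key (overrides the .env file)")
      parser.add_argument("--token-threshold", type=int,
                          help="Target maximum token count for generated context")
      parser.add_argument("--gemini-model",
                          help="Specific Gemini model to use")
      parser.add_argument("--transport", choices=["stdio", "sse"], default="sse",
                          help="Transport protocol")
      parser.add_argument("--host", default="127.0.0.1",
                          help="Host address for the SSE server")
      parser.add_argument("--port", type=int, default=8080,
                          help="Port for the SSE server")
      return parser.parse_args()


  if __name__ == "__main__":
      print(parse_args())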

Basic Usage

Example queries include asking what the codebase does, how the authentication system works, how data flows through the application, and more.
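
Such a query can also be sent programmatically. The sketch below launches the server over stdio and calls get_codebase_context; the server filename, the repository path, and the query parameter name are assumptions.

  # Sketch of asking a basic-usage question over stdio transport.
  # The kontxt_server.py filename and the "query" argument name are assumptions.
  import asyncio

  from mcp import ClientSession, StdioServerParameters
  from mcp.client.stdio import stdio_client

  server = StdioServerParameters(
      command="python",
      args=["kontxt_server.py", "--repo-path", "/path/to/repo", "--transport", "stdio"],
  )


  async def main() -> None:
      async with stdio_client(server) as (read, write):
          async with ClientSession(read, write) as session:
              await session.initialize()
              result = await session.call_tool(
                  "get_codebase_context",
                  {"query": "How does the authentication system work?"},
              )
              print(result.content)


  asyncio.run(main())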

Context Attachment

Files or documents referenced in a query can be attached so the server includes them in the generated context, enabling more targeted analysis.

Token Usage Tracking

Logs token usage during operations to help monitor and optimize Gemini API consumption; the --token-threshold option sets the target maximum token count for generated context.
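
A generic sketch of how such per-request accounting might look is shown below; the real server's bookkeeping and log format may differ, and the counts would come from the Gemini API's usage metadata rather than hard-coded values.

  # Generic sketch of per-request token accounting with Python logging.
  import logging

  logging.basicConfig(level=logging.INFO)
  logger = logging.getLogger("kontxt.tokens")


  class TokenTracker:
      """Accumulates prompt/response token counts reported by the Gemini API."""

      def __init__(self) -> None:
          self.prompt_tokens = 0
          self.response_tokens = 0

      def record(self, prompt_tokens: int, response_tokens: int) -> None:
          self.prompt_tokens += prompt_tokens
          self.response_tokens += response_tokens
          logger.info(
              "request used %d prompt + %d response tokens (session total: %d)",
              prompt_tokens,
              response_tokens,
              self.prompt_tokens + self.response_tokens,
          )


  tracker = TokenTracker()
  tracker.record(prompt_tokens=1200, response_tokens=350)  # illustrative values only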