Mcp-Server
MCP Server for Up-to-Date Library Documentation
This project implements a Model Context Protocol (MCP) server in Python. Its primary function is to provide Large Language Models (LLMs) like Anthropic's Claude with real-time access to the latest documentation for specified Python libraries (Langchain, LlamaIndex, OpenAI) before they generate code suggestions.
Problem Solved
LLMs often possess knowledge based on their training data cutoffs. This can lead to outdated code suggestions, especially for rapidly evolving libraries common in the AI/ML space. This MCP server addresses this challenge by acting as a tool that allows the LLM to dynamically fetch and incorporate the most current documentation snippets into its context before responding to coding queries.
Features
- MCP Standard: Implements the Model Context Protocol for seamless integration with compatible clients (e.g., Claude Desktop, Claude Code).
- `get_docs` Tool: Exposes a dedicated tool that searches official documentation sites.
- Targeted Search: Uses the Serper API to perform site-specific Google searches, ensuring results come directly from the official docs for:
  - Langchain (`python.langchain.com/docs`)
  - LlamaIndex (`docs.llamaindex.ai/en/stable`)
  - OpenAI (`platform.openai.com/docs`)
- Content Fetching: Retrieves and parses the text content from the top search results using `httpx` and `BeautifulSoup`.
- Modern Tooling: Built with Python 3.11+, `asyncio`, and `FastMCP`, and managed with the `uv` package manager.
Architecture Overview
This server functions as a specialized "toolbox" within the MCP ecosystem:
- An MCP Host (e.g., Claude Desktop, an IDE with Claude Code) initiates a request requiring coding assistance for a supported library.
- The MCP Client within the Host connects to this running MCP Server.
- The LLM, recognizing the need for potentially up-to-date information, decides to use the `get_docs` tool provided by this server (sketched below).
- The Client invokes the `get_docs` tool on this server, passing the user's query and the target library.
- This MCP Server constructs a site-specific search query (e.g., `site:python.langchain.com/docs <user_query>`).
- It queries the Serper API to get the top documentation page links.
- It fetches the content of these pages using `httpx` and extracts the relevant text using `BeautifulSoup`.
- The extracted text (context) is returned to the MCP Client/Host.
- The LLM uses this fresh context alongside the original prompt to generate a more accurate and up-to-date response or code suggestion.
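For orientation, the sketch below shows roughly what such a server can look like when built with `FastMCP`, `httpx`, and `BeautifulSoup`. It assumes the official `mcp` Python SDK and Serper's `https://google.serper.dev/search` endpoint; the helper names (`docs_urls`, `search_web`, `fetch_url`) and result limits are illustrative and may not match the actual `main.py`.

```python
# Minimal sketch of the server described above, assuming the FastMCP class from
# the official `mcp` Python SDK and Serper's search endpoint. Helper names
# (docs_urls, search_web, fetch_url) are illustrative, not taken from this repo.
import os

import httpx
from bs4 import BeautifulSoup
from dotenv import load_dotenv
from mcp.server.fastmcp import FastMCP

load_dotenv()

mcp = FastMCP("docs")

SERPER_URL = "https://google.serper.dev/search"

# Supported libraries mapped to their official documentation sites.
docs_urls = {
    "langchain": "python.langchain.com/docs",
    "llama-index": "docs.llamaindex.ai/en/stable",
    "openai": "platform.openai.com/docs",
}


async def search_web(query: str) -> dict:
    """Run a site-restricted Google search through the Serper API."""
    headers = {"X-API-KEY": os.getenv("SERPER_API_KEY", "")}
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            SERPER_URL, headers=headers, json={"q": query, "num": 2}, timeout=30.0
        )
        resp.raise_for_status()
        return resp.json()


async def fetch_url(url: str) -> str:
    """Download a documentation page and return its visible text."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(url, timeout=30.0)
        soup = BeautifulSoup(resp.text, "html.parser")
        return soup.get_text()


@mcp.tool()
async def get_docs(query: str, library: str) -> str:
    """Search the official docs for `library` and return the page text."""
    if library not in docs_urls:
        raise ValueError(f"Library not supported. Choose from {list(docs_urls)}")
    results = await search_web(f"site:{docs_urls[library]} {query}")
    pages = results.get("organic", [])
    if not pages:
        return "No results found."
    # Concatenate the text of the top documentation pages as context for the LLM.
    return "\n\n".join([await fetch_url(page["link"]) for page in pages])


if __name__ == "__main__":
    mcp.run(transport="stdio")
```

Running a file like this with `uv run main.py` (see Usage) starts the server on stdio, which is the transport the client configurations below expect.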
Prerequisites
- Python 3.11+
- `uv` Package Manager: Install from Astral.sh.
- Serper API Key: Obtain a free or paid key from serper.dev.
- Node.js/npx: Required only if you plan to use the MCP Inspector for debugging.
Installation & Setup
- Clone the Repository (if applicable):

  ```bash
  git clone <your-repository-url>
  cd <your-repository-name>
  ```

- Initialize Project (if starting fresh):

  ```bash
  # If you haven't cloned a repo with pyproject.toml
  uv init mcp-server
  cd mcp-server
  ```

- Create and Activate Virtual Environment:

  ```bash
  uv venv

  # Activate (Linux/macOS):
  source .venv/bin/activate

  # Activate (Windows PowerShell):
  .\.venv\Scripts\Activate.ps1

  # Activate (Windows Cmd):
  .\.venv\Scripts\activate.bat
  ```

- Install Dependencies (a quick import check follows this list):

  ```bash
  uv add "mcp[cli]" httpx python-dotenv bs4

  # Or, if dependencies are listed in pyproject.toml:
  # uv sync
  ```
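To confirm the dependencies landed in the virtual environment, a quick import check is enough. Note that the import names differ slightly from the package names added above (`python-dotenv` imports as `dotenv`, `bs4` provides `BeautifulSoup`):

```python
# Sanity check: run inside the activated .venv.
import bs4      # from the bs4 package (BeautifulSoup)
import dotenv   # from python-dotenv
import httpx
from mcp.server.fastmcp import FastMCP  # from mcp[cli]

print("All MCP server dependencies are importable.")
```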
Configuration
- Create a file named `.env` in the root directory of the project.
- Add your Serper API key to this file:

  ```
  SERPER_API_KEY=your_actual_serper_api_key_here
  ```

  (The `.gitignore` file is already configured to prevent committing this file.) The server reads this value at startup, as sketched below.
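A minimal sketch of how `main.py` can pick up this value at startup, assuming `python-dotenv` is used as installed above (the exact loading code in this repo may differ):

```python
# Load SERPER_API_KEY from the project's .env file at startup.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root / current working directory
SERPER_API_KEY = os.getenv("SERPER_API_KEY")
if not SERPER_API_KEY:
    raise RuntimeError("SERPER_API_KEY is not set; add it to your .env file.")
```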
Usage
- Run the MCP Server: Make sure your virtual environment is activated.

  ```bash
  uv run main.py
  ```

  The server will start and listen for connections via standard input/output (stdio), as configured in `main.py`.

- Integrate with MCP Clients:

  - Claude Desktop:

    - Go to Settings > Developer > Edit Configuration.
    - Add an entry under `mcpServers` in `claude_desktop_config.json`. You'll need to provide the full path to your `uv` executable and to the project directory.
    - Example structure (adjust the name and paths accordingly):

      ```json
      {
        "mcpServers": {
          "docs-helper": {
            "command": "/full/path/to/uv",
            "args": [
              "--directory",
              "/full/path/to/your/mcp-server/project",
              "run",
              "main.py"
            ]
          }
        }
      }
      ```

    - Restart Claude Desktop. A tool hammer icon should appear.

  - Claude Code (CLI):

    - Use the `claude mcp add` command interactively or with flags.
    - Example interactive session prompts:
      - Server Name: `documentation-fetcher` (or your choice)
      - Project Type: `local`
      - Command: Specify the full path to `uv` and arguments, similar to Claude Desktop (e.g., `/full/path/to/uv run main.py` within the project directory).
      - Working Directory: `/full/path/to/your/mcp-server/project`
    - Use `claude mcp list` to verify.
    - Run `claude` - the tool should be listed.

- Refer to the official Anthropic MCP documentation for the most up-to-date client configuration details.
Development & Debugging
The MCP Inspector is a valuable tool for testing your server's capabilities without needing a full client integration.
- Ensure Node.js and npx are installed.
- Run the Inspector, pointing it at your server's run command:

  ```bash
  # Ensure your .venv is activated first
  npx @modelcontextprotocol/inspector uv run main.py
  ```

- Open your web browser to `http://localhost:5173`.
- Connect to the server via the Inspector interface.
- Navigate to the "Tools" section, select `get_docs`, provide test values for `query` and `library`, and click "Run Tool" to see the output.
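As an alternative to the Inspector, you can exercise the tool function directly with a short script. This assumes `main.py` exposes `get_docs` as in the sketch from the Architecture section; adjust the import to match your actual module layout.

```python
# Smoke-test get_docs without any MCP client. Importing main does not start
# the server because the sketch guards mcp.run() behind __main__.
import asyncio

from main import get_docs  # assumes the module layout sketched above


async def smoke_test() -> None:
    text = await get_docs(query="how to use chat models", library="langchain")
    print(text[:500])  # show the first 500 characters of the fetched docs


asyncio.run(smoke_test())
```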
License
This project is licensed under the MIT License - see the LICENSE file for details. (You'll need to add a LICENSE file containing the MIT license text if you choose this license.)