# MCP Server with RAG and Multi-Search

A custom MCP (Model Context Protocol) server that provides RAG (Retrieval-Augmented Generation) capabilities using LlamaIndex and multiple web search options via Google's Gemini 2.0 API and Linkup.
## Features

- RAG workflow using local documents
- Multiple web search capabilities:
  - Google's Gemini 2.0 for advanced AI-powered search
  - Linkup for traditional web search
- Built with FastMCP
## Setup

### Prerequisites

- Python 3.8 or higher
- Ollama installed locally with the DeepSeek model (or modify the code to use your preferred model)
- Gemini API key (get one at https://ai.google.dev/)
- Linkup API key (optional)
### Installation

1. Clone this repository:

   ```bash
   git clone <repository-url>
   cd own-mcp-server
   ```

2. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up environment variables by creating a `.env` file (a loading sketch follows this list):

   ```bash
   # Required API keys
   GEMINI_API_KEY=your_gemini_api_key_here
   LINKUP_API_KEY=your_linkup_api_key_here

   # Optional configurations
   OLLAMA_HOST=http://localhost:11434
   ```

4. Add documents to the `data` directory (it will be created automatically if it doesn't exist).
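For reference, here is a minimal sketch of how the server can load these variables with python-dotenv. The variable names match the `.env` example above; the `OLLAMA_HOST` fallback default is an assumption:

```python
import os

from dotenv import load_dotenv

# Read key/value pairs from the .env file into the process environment
load_dotenv()

GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")  # required
LINKUP_API_KEY = os.getenv("LINKUP_API_KEY")  # optional
OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://localhost:11434")  # assumed default

if not GEMINI_API_KEY:
    raise RuntimeError("GEMINI_API_KEY is not set; add it to your .env file")
```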
## Running the Server

Start the server with:

```bash
python server.py
```
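For orientation, here is a minimal sketch of what a FastMCP entry point looks like, assuming the `FastMCP` class from the MCP Python SDK; the server name and the stub tool body are illustrative, and the real `server.py` may differ:

```python
from mcp.server.fastmcp import FastMCP

# Create the MCP server; the name is what connected clients will see
mcp = FastMCP("rag-search-server")

@mcp.tool()
def web_search(query: str) -> str:
    """Search the web using the best available backend."""
    # Stub: the real tool tries Gemini 2.0 first, then Linkup (see Usage)
    return f"Results for: {query}"

if __name__ == "__main__":
    # Default transport is stdio, so MCP clients can launch this as a subprocess
    mcp.run()
```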
## Usage

The server provides the following tools:

- `web_search`: Uses the best available search method (Gemini 2.0 preferred, with fallback to Linkup)
- `gemini_search`: Search using Google's Gemini 2.0 AI
- `linkup_search`: Search using Linkup
- `rag`: Query your local documents using RAG
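As a rough illustration of the preferred-with-fallback behavior described for `web_search`, here is a sketch using the google-generativeai and linkup-sdk clients. The model name `gemini-2.0-flash`, the broad exception handling, and the output type are assumptions, not necessarily what `server.py` does (the real tool may also enable Gemini's search grounding rather than a plain generation call):

```python
import os

import google.generativeai as genai
from linkup import LinkupClient

genai.configure(api_key=os.getenv("GEMINI_API_KEY"))

def web_search(query: str) -> str:
    """Try Gemini 2.0 first; fall back to Linkup if the call fails."""
    try:
        model = genai.GenerativeModel("gemini-2.0-flash")  # assumed model name
        return model.generate_content(query).text
    except Exception:
        client = LinkupClient(api_key=os.getenv("LINKUP_API_KEY"))
        result = client.search(
            query=query,
            depth="standard",             # "standard" or "deep"
            output_type="sourcedAnswer",  # return a synthesized, cited answer
        )
        return result.answer
```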
## Required Libraries

This project uses:

- llama-index - Core RAG functionality (see the sketch after this list)
- ollama - Local LLM integration
- Google Generative AI SDK - Gemini 2.0 integration
- Linkup SDK - Web search capabilities
- FastMCP - MCP server implementation
- python-dotenv - Environment management
- nest-asyncio - Async support
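To show how these pieces fit together, here is a minimal sketch of the kind of LlamaIndex pipeline the `rag` tool implies, assuming the `llama-index-llms-ollama` and `llama-index-embeddings-ollama` integrations; the `nomic-embed-text` embedding model is an assumption:

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

# Use local Ollama models for both generation and embeddings
Settings.llm = Ollama(model="deepseek-r1:1.5b", request_timeout=120.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")  # assumed

# Load everything under data/ and build an in-memory vector index
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Answer a question grounded in the indexed documents
query_engine = index.as_query_engine()
print(query_engine.query("What do my documents say about X?"))
```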
## Troubleshooting

If you encounter issues:

- Make sure Ollama is properly installed and running (a quick check follows this list)
- Pull the DeepSeek model: `ollama pull deepseek-r1:1.5b`
- If you hit Python 3.13 compatibility issues, consider downgrading to Python 3.11 or 3.10
- Verify that your API keys are correct and have the necessary permissions
- For Gemini 2.0 issues, make sure your API key has access to the latest models
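A quick way to confirm Ollama is reachable and the model is present (assumes Ollama's default port, 11434):

```bash
# List locally installed models; deepseek-r1:1.5b should appear
ollama list

# Or query the Ollama HTTP API directly
curl http://localhost:11434/api/tags
```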