mcp-server

A minimalistic example demonstrating how to build and deploy a Model Context Protocol (MCP) server using FastAPI and Chroma DB. It walks through setting up the server, configuring an OpenAI API key, populating the database with sample data, and running queries.
Getting Started

Installation:

Clone this repo and install the dependencies:

```shell
git clone https://github.com/YOUR_GITHUB_USERNAME/mcp-server.git
cd mcp-server
pip install -r requirements.txt
```
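The repository's requirements.txt is not reproduced here; based on the stack described in this README, it would need at least the following packages (this list is an assumption, not the actual file contents):

```
fastapi
uvicorn
chromadb
openai
```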
Execution and Testing
1. Download and Extract the Package

Download your MCP server ZIP file and extract it to a convenient directory on your computer:

```shell
unzip mcp-server.zip
cd mcp-server
```
2. Set Up the Environment

Create and activate a Python virtual environment (recommended):

```shell
python3 -m venv venv
source venv/bin/activate
```
3. Install Dependencies

Install the required Python packages:

```shell
pip install -r requirements.txt
```
4. Configure the OpenAI API Key

Open app.py in your favorite editor and replace the placeholder with your OpenAI API key:

```python
openai_ef = embedding_functions.OpenAIEmbeddingFunction(api_key="YOUR_OPENAI_API_KEY")
```

5. Run the MCP Server

Start the FastAPI server using Uvicorn:

```shell
uvicorn app:app --reload
```

The server should now be running at http://127.0.0.1:8000.

6. Test Your MCP Server

Use a curl request or any API client (e.g., Postman):
```shell
curl -X POST "http://127.0.0.1:8000/v1/context/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "your search query here", "top_k": 2}'
```
7. Populate ChromaDB (If Empty)

If your ChromaDB is empty, you'll first need to insert sample documents. Here's a minimal example of how to populate it directly from Python. Create a file named populate.py in your project folder:

```python
import chromadb
from chromadb.utils import embedding_functions

openai_ef = embedding_functions.OpenAIEmbeddingFunction(api_key="YOUR_OPENAI_API_KEY")
client = chromadb.Client()
collection = client.get_or_create_collection("mcp_collection", embedding_function=openai_ef)

# Add sample data
collection.add(
    documents=[
        "OpenAI develops powerful AI models.",
        "FastAPI is great for quick APIs.",
        "ChromaDB is a lightweight vector database.",
    ],
    metadatas=[
        {"source": "OpenAI Website"},
        {"source": "FastAPI docs"},
        {"source": "ChromaDB GitHub"},
    ],
    ids=["doc1", "doc2", "doc3"],
)

print("Data added successfully.")
```
Then execute it:

```shell
python populate.py
```

8. Query the MCP Server

Now that your ChromaDB has documents, query them through your running MCP server (uvicorn app:app --reload). Make a POST request via curl or Postman:
```shell
curl -X POST "http://127.0.0.1:8000/v1/context/query" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "tell me about vector databases",
    "top_k": 2
  }'
```
Sample response

You should get something like:

```json
{
  "documents": [
    "ChromaDB is a lightweight vector database.",
    "FastAPI is great for quick APIs."
  ],
  "metadata": [
    {"source": "ChromaDB GitHub"},
    {"source": "FastAPI docs"}
  ],
  "distances": [0.18, 0.32]
}
```
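The same query can also be made from Python instead of curl; a short sketch using the requests library (it assumes the server is running locally on port 8000 and prints a hint if it isn't):

```python
import requests

payload = {"query": "tell me about vector databases", "top_k": 2}

try:
    resp = requests.post(
        "http://127.0.0.1:8000/v1/context/query",
        json=payload,
        timeout=5,
    )
    resp.raise_for_status()
    data = resp.json()
    # Pair each returned document with its metadata source.
    for doc, meta in zip(data["documents"], data["metadata"]):
        print(f"{doc}  ({meta['source']})")
except requests.ConnectionError:
    print("Server not reachable; start it with: uvicorn app:app --reload")
```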
#AI #MCP #VectorDB #FastAPI #ChromaDB #OpenAI #RAG #MLOps