# Model Context Protocol (MCP) Server Project
This project demonstrates integration between a client application and an MCP (Model Context Protocol) server, enabling AI-powered interactions with tool execution.
## Project Overview
This application consists of two main components:
- A client that connects to Google's Gemini AI model and an MCP server
- An MCP server that registers and provides tools for the AI model to use
The system allows users to interact with the Gemini AI model through a command-line interface. The AI can respond to user queries and execute specialized tools hosted on the MCP server, such as posting tweets or performing calculations.
## Architecture

```
├── client/              # Client application
│   ├── .env             # Environment variables for client
│   ├── index.js         # Client implementation
│   └── package.json     # Client dependencies
└── server/              # MCP server
    ├── .env             # Environment variables for server
    ├── index.js         # Server implementation
    ├── mcp.tool.js      # Tool implementations
    └── package.json     # Server dependencies
```
## Features

- AI chat interface using Google's Gemini model
- Tool execution through the MCP protocol
- Available tools:
  - `addTwoNumbers`: performs addition of two numbers
  - `createPost`: creates a post on Twitter/X
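To illustrate the pattern these tools follow, here is a minimal in-memory sketch of tool registration and dispatch. It is a hypothetical stand-in, not the real `@modelcontextprotocol/sdk` API: `registerTool`, `callTool`, and the type-check loop standing in for Zod validation are all illustrative, and the `createPost` handler below is a stub that does not touch the Twitter API.

```javascript
// Hypothetical in-memory tool registry sketching the MCP tool pattern.
const tools = new Map();

// Register a tool under a name, with a simple { param: typeName } schema.
function registerTool(name, schema, handler) {
  tools.set(name, { schema, handler });
}

// Look up a tool, validate arguments against its schema, and run it.
async function callTool(name, args) {
  const tool = tools.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  // Rudimentary type check standing in for Zod schema validation
  for (const [key, type] of Object.entries(tool.schema)) {
    if (typeof args[key] !== type) {
      throw new Error(`Invalid argument "${key}": expected ${type}`);
    }
  }
  return tool.handler(args);
}

// The two tools this project exposes, as stubbed handlers
registerTool("addTwoNumbers", { a: "number", b: "number" }, async ({ a, b }) => ({
  content: [{ type: "text", text: `The sum of ${a} and ${b} is ${a + b}` }],
}));

registerTool("createPost", { status: "string" }, async ({ status }) => ({
  content: [{ type: "text", text: `Posted to Twitter/X: ${status}` }],
}));
```

Handlers return a `content` array of typed parts, mirroring the result shape MCP tools use; the real registration code lives in `server/index.js` and `server/mcp.tool.js`.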
## Setup and Installation

### Prerequisites
- Node.js (v14 or higher)
- npm or yarn
- Twitter/X API credentials
### Server Setup

1. Navigate to the server directory: `cd server`
2. Install dependencies: `npm install`
3. Configure the `.env` file with your Twitter API credentials:

   ```
   TWITTER_API_KEY=your_api_key
   TWITTER_API_KEY_SECRET=your_api_secret
   TWITTER_ACCESS_TOKEN=your_access_token
   TWITTER_ACCESS_TOKEN_SECRET=your_access_token_secret
   ```

4. Start the server: `node index.js`
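These four variables map onto the credential fields that the `twitter-api-v2` `TwitterApi` constructor expects. A sketch of how the server might assemble them (the actual loading code is in `server/index.js`; this assumes `.env` has already been loaded into `process.env`, e.g. via dotenv):

```javascript
// Sketch: assembling twitter-api-v2 credentials from environment variables.
// Assumes the .env file has already been loaded into process.env.
const twitterConfig = {
  appKey: process.env.TWITTER_API_KEY ?? "",
  appSecret: process.env.TWITTER_API_KEY_SECRET ?? "",
  accessToken: process.env.TWITTER_ACCESS_TOKEN ?? "",
  accessSecret: process.env.TWITTER_ACCESS_TOKEN_SECRET ?? "",
};
// `new TwitterApi(twitterConfig)` would then authenticate with these values.
```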
### Client Setup

1. Navigate to the client directory: `cd client`
2. Install dependencies: `npm install`
3. Configure the `.env` file with your Gemini API key:

   ```
   GEMINI_API_KEY=your_gemini_api_key
   ```

4. Start the client: `node index.js`
## Usage
- Start the server first, then the client
- When the client connects, you'll see a prompt for input
- Type your question or request
- The AI will respond directly or use one of the tools if needed
Example interactions:
- "What's 25 plus 17?" (Uses the addTwoNumbers tool)
- "Post a tweet that says 'Hello from my MCP project!'" (Uses the createPost tool)
## How It Works

1. The client connects to the MCP server via SSE (Server-Sent Events)
2. The server registers available tools with input schemas using Zod validation
3. User queries are sent to Google's Gemini AI model
4. If the AI determines a tool should be used, it makes a function call
5. The function call is routed through the MCP client to the MCP server
6. The server executes the requested tool and returns results
7. Results are presented to the user and added to the chat history
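The routing step in the middle of this flow can be sketched as follows. The `callTool({ name, arguments })` shape follows the MCP SDK client; the helper name `routeFunctionCall` and the flattening of the result into chat-history text are illustrative assumptions, not code from this repository.

```javascript
// Hypothetical sketch: forward a Gemini function call to the MCP server
// and flatten the tool result into plain text for the chat history.
async function routeFunctionCall(mcpClient, functionCall) {
  // mcpClient.callTool follows the MCP SDK client's request shape
  const result = await mcpClient.callTool({
    name: functionCall.name,
    arguments: functionCall.args,
  });
  // Keep only text parts of the tool result and join them for display
  return result.content
    .filter((part) => part.type === "text")
    .map((part) => part.text)
    .join("\n");
}
```

The text returned here is what gets appended to the conversation so the model can see its tool's output on the next turn.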
## Technologies Used

- `@modelcontextprotocol/sdk` - MCP implementation
- `@google/genai` - Gemini AI integration
- `express` - web server framework
- `twitter-api-v2` - Twitter/X API client
- `zod` - schema validation