# Smallest AI MCP Server

Production-grade Model Context Protocol (MCP) server for the Waves Text-to-Speech and Voice Cloning platform. Fast, portable, and ready for real-world AI voice workflows.

## 🚀 Overview

Smallest AI MCP Server provides a seamless bridge between the powerful Waves TTS/Voice Cloning API and any MCP-compatible LLM or agent. It is designed for speed, security, and ease of deployment.
## ✨ Features
- 🎤 List and preview voices — Instantly fetch all available voices from Waves.
- 🗣️ Synthesize speech — Convert text to high-quality WAV audio files.
- 👤 Clone voices — Create instant/professional voice clones.
- 🗂️ Manage clones — List and delete your cloned voices.
All features are implemented as MCP tools, with no placeholders or stubs.
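For a flavor of how such a tool is wired up, here is a minimal sketch using the FastMCP helper from the official MCP Python SDK. The Waves endpoint URL and response handling below are assumptions for illustration, not this server's actual code:

```python
# Sketch of one MCP tool built with the official SDK's FastMCP helper.
# The Waves endpoint below is hypothetical, not the real API path.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("smallest-ai-mcp")

@mcp.tool()
async def list_voices() -> str:
    """Fetch all available voices from the Waves API."""
    headers = {"Authorization": f"Bearer {os.environ['WAVES_API_KEY']}"}
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            "https://waves-api.example.com/v1/voices",  # hypothetical URL
            headers=headers,
        )
        resp.raise_for_status()
        return resp.text

if __name__ == "__main__":
    mcp.run()
```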
## ⚡ Quickstart

```bash
# 1. Clone the repo
$ git clone https://github.com/Akshay-Sisodia/smallest-ai-mcp.git
$ cd smallest-ai-mcp

# 2. Install dependencies
$ pip install -r requirements.txt

# 3. Configure your API key
$ cp .env.example .env
# Edit .env and add your real WAVES_API_KEY

# 4. Start the server
$ python server.py
```
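The `.env` file only needs the one variable referenced above; a minimal example (replace the placeholder with your real key):

```bash
# .env — keep this file out of git
WAVES_API_KEY=your_waves_api_key
```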
## 🐳 Docker Usage

```bash
# Build the Docker image
$ docker build -t smallest-ai-mcp .

# Run the container
$ docker run -p 8000:8000 \
    -e WAVES_API_KEY=your_waves_api_key \
    smallest-ai-mcp
```
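Since the key already lives in `.env`, you can also let Docker read it from there with the standard `--env-file` flag instead of passing it inline:

```bash
$ docker run -p 8000:8000 --env-file .env smallest-ai-mcp
```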
## 🛠️ Tech Stack
- Python 3.11+
- Starlette, requests, httpx
- modelcontextprotocol/mcp-sdk
## 🏗️ Production & Deployment

- Environment: Copy `.env.example` to `.env` and add your API key. Never commit secrets to git.
- Dependencies: Install with `pip install -r requirements.txt` (Python 3.11+).
- Docker: Use the provided Dockerfile for containerization.
- Security: API keys are required at startup and never exposed; a fail-fast sketch follows this list.
- License: MIT.
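As an illustration of what "required at startup" can look like (a minimal sketch, not the server's actual code), the server can refuse to start before binding a port:

```python
# Minimal fail-fast sketch; illustrative, not the server's actual code.
import os
import sys

def require_api_key() -> str:
    """Abort at startup if WAVES_API_KEY is missing, so the server
    never runs half-configured."""
    key = os.environ.get("WAVES_API_KEY")
    if not key:
        sys.exit("WAVES_API_KEY is not set; copy .env.example to .env first.")
    return key

WAVES_API_KEY = require_api_key()
```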
## 🤝 Contributing
Pull requests and issues are welcome! Please open an issue to discuss major changes.
## 👤 Maintainer

- Akshay Sisodia ([GitHub](https://github.com/Akshay-Sisodia))
## 📄 License
MIT
# Groq MCP Client
A Streamlit application that connects to an MCP (Model Context Protocol) server and uses Groq's LLM API for chat conversations with tool execution capabilities.
## Features
- Connect to any MCP server using the official MCP SDK via SSE (Server-Sent Events)
- Asynchronous communication with the MCP server
- Chat interface with streaming responses from Groq
- Tool execution through the MCP server
- Clean and user-friendly UI
## Requirements
- Python 3.8+
- Groq API key
- An MCP server that supports SSE (running on HTTP)
- MCP SDK (automatically installed with requirements.txt)
## Installation

- Clone this repository
- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```
## Usage

- Run the application:

  ```bash
  streamlit run groq_mcp_client.py
  ```

- In the Streamlit UI:
  - Enter your Groq API key in the sidebar
  - Enter the URL of your MCP server (default: http://localhost:8000)
  - Click "Connect to MCP Server"
  - Start chatting!
## How it works

- The application starts and connects to the MCP server using the official MCP SDK via SSE
- The MCP server provides a list of available tools
- When you send a message:
  - The message is sent to Groq's API
  - If Groq decides to use a tool, the tool call is executed through the MCP server
  - The tool results are sent back to Groq
  - Groq provides a final response
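A condensed sketch of that loop, assuming one round of tool use; the function and variable names here are illustrative, not the app's actual code:

```python
# Condensed sketch of the tool-call loop described above; illustrative only.
import json

from groq import Groq
from mcp import ClientSession

async def chat_round(groq: Groq, session: ClientSession,
                     tools: list, messages: list) -> str:
    """Ask Groq once; if it requests tools, run them via MCP and ask again."""
    response = groq.chat.completions.create(
        model="llama-3.3-70b-versatile",  # any Groq model id works here
        messages=messages,
        tools=tools,  # MCP tools converted to OpenAI-style function schemas
    )
    msg = response.choices[0].message
    if not msg.tool_calls:
        return msg.content

    messages.append(msg)  # keep the assistant turn that requested the tools
    for call in msg.tool_calls:
        # Execute the requested tool through the connected MCP session.
        result = await session.call_tool(
            call.function.name, json.loads(call.function.arguments or "{}")
        )
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": str(result.content),
        })

    # Final pass: Groq sees the tool output and writes its answer.
    final = groq.chat.completions.create(
        model="llama-3.3-70b-versatile", messages=messages
    )
    return final.choices[0].message.content
```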
## Implementation Details
- Uses the official MCP SDK for communication with MCP servers
- Connects via SSE (Server-Sent Events) for HTTP-based servers
- Implements async/await pattern for efficient server communication
- Maintains compatibility with the Streamlit UI framework
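The SSE connection itself is only a few lines with the official SDK. A minimal sketch, assuming the server exposes its SSE endpoint at `/sse` on port 8000:

```python
# Minimal SSE connection sketch using the official MCP Python SDK.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # The /sse path is an assumption; match it to your server's endpoint.
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```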
## Customization

You can modify the following aspects of the application:

- Change the Groq model by modifying the `model` parameter in the `GroqClient.generate_stream` method (a sketch follows this list)
- Customize the UI by modifying the Streamlit components
- Add additional functionality to the MCP client
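For example, swapping the model is a one-line change inside the streaming call. The body below is a hedged approximation of what a `generate_stream` wrapper contains, not the app's actual source:

```python
# Hedged approximation of a GroqClient.generate_stream wrapper;
# only the `model` argument needs to change to switch models.
from groq import Groq

class GroqClient:
    def __init__(self, api_key: str) -> None:
        self.client = Groq(api_key=api_key)

    def generate_stream(self, messages: list):
        stream = self.client.chat.completions.create(
            model="llama-3.3-70b-versatile",  # <- change this model id
            messages=messages,
            stream=True,
        )
        for chunk in stream:
            delta = chunk.choices[0].delta.content
            if delta:
                yield delta
```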
## License
MIT