

MCP Waifu Queue (Gemini Edition)

This project implements an MCP (Model Context Protocol) server for a conversational AI "waifu" character, using the Google Gemini API for text generation and a Redis queue for asynchronous request processing.

Features

  • Text generation using the Google Gemini API.
  • Request queuing with Redis.
  • MCP-compliant API with job status tracking.
  • Configurable via environment variables, with file-based API key management.

Architecture

  • Main components include main.py for application setup, respond.py for text generation logic, and worker.py for job processing.
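The real service hands jobs from main.py to worker.py through Redis and RQ. As a self-contained illustration of the same enqueue/worker/status pattern, here is a standard-library sketch; all names are illustrative, not the project's actual API:

```python
# Stdlib stand-in for the Redis/RQ pipeline: main.py enqueues jobs,
# worker.py consumes them, and each job's status is tracked by ID.
import queue
import threading
import uuid

jobs = {}                  # job_id -> {"status": ..., "result": ...}
job_queue = queue.Queue()  # stands in for the Redis queue

def generate_response(prompt: str) -> str:
    # Placeholder for the Gemini call that respond.py would make.
    return f"echo: {prompt}"

def worker():
    # Loop comparable to the RQ worker started from worker.py.
    while True:
        job_id, prompt = job_queue.get()
        jobs[job_id]["status"] = "started"
        jobs[job_id]["result"] = generate_response(prompt)
        jobs[job_id]["status"] = "finished"
        job_queue.task_done()

def enqueue(prompt: str) -> str:
    # Comparable to the enqueue call in main.py.
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "queued", "result": None}
    job_queue.put((job_id, prompt))
    return job_id

threading.Thread(target=worker, daemon=True).start()
jid = enqueue("hello")
job_queue.join()           # wait until the worker finishes the job
print(jobs[jid]["status"])  # finished
```

The key design point carried over from the real architecture: the caller gets a job ID back immediately, and the slow text generation happens on a separate worker.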

Prerequisites

  • Python 3.7+, Redis server, Google Gemini API Key.

Installation

  • Clone the repository, set up a virtual environment, and install dependencies.
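A typical sequence for the steps above might look like the following; the repository URL is a placeholder and the editable install is an assumption about the project's layout:

```shell
# Clone the repository (URL is a placeholder).
git clone https://github.com/<owner>/mcp-waifu-queue.git
cd mcp-waifu-queue

# Create and activate a virtual environment.
python -m venv .venv
source .venv/bin/activate

# Install the package and its dependencies.
pip install -e .
```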

Configuration

  • Store the Gemini API key in ~/.api-gemini. Configure additional settings in the .env file.
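Concretely, storing the key and a minimal .env might look like this; the variable names in the .env are illustrative assumptions, not the project's confirmed settings:

```shell
# Store the Gemini API key where the service reads it.
echo "YOUR_GEMINI_API_KEY" > ~/.api-gemini

# Example .env; these variable names are assumptions.
cat > .env <<'EOF'
REDIS_URL=redis://localhost:6379
MAX_NEW_TOKENS=512
EOF
```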

Running the Service

  • Start the Redis server first, then the RQ worker, and finally the MCP server.
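In three terminals (or backgrounded), the startup order might look like this; the worker invocation and the server entry point are assumptions, not the project's documented commands:

```shell
# 1. Start Redis (default port 6379).
redis-server

# 2. Start the RQ worker against the same Redis instance.
rq worker --url redis://localhost:6379

# 3. Start the MCP server (module name is a guess).
python -m mcp_waifu_queue.main
```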

MCP API

  • Provides a generate_text tool and job status retrieval via the job://{job_id} resource.
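One way to picture the job://{job_id} resource is as a URI lookup into the job store. The following is an SDK-free sketch of that lookup; the function name and store are invented for illustration and the real server speaks the MCP protocol:

```python
# Hedged sketch: resolve a job:// URI to a stored job's status.
from urllib.parse import urlparse

jobs = {"abc123": {"status": "finished", "result": "Hello!"}}  # example store

def read_resource(uri: str) -> str:
    parsed = urlparse(uri)
    if parsed.scheme != "job":
        raise ValueError(f"unsupported scheme: {parsed.scheme}")
    job_id = parsed.netloc or parsed.path.lstrip("/")
    job = jobs.get(job_id)
    return job["status"] if job else "unknown"

print(read_resource("job://abc123"))  # finished
```

A client would call the generate_text tool to obtain a job ID, then poll this resource until the status reaches a terminal state.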

Testing

  • Run tests with pytest; mocking Redis and the Gemini API calls may be required.
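For instance, the Gemini call can be replaced with a mock so tests run offline. The responder and handler names below are hypothetical stand-ins for the project's real functions:

```python
# Mocking sketch: swap the real Gemini-backed responder for a Mock
# so the job-handling logic can be tested without network access.
from unittest import mock

class GeminiResponder:
    # Stand-in for the real API wrapper in respond.py.
    def generate(self, prompt: str) -> str:
        raise RuntimeError("would call the Gemini API")

def handle_request(responder, prompt: str) -> str:
    # Hypothetical stand-in for the worker's job function.
    return responder.generate(prompt)

def test_handle_request_returns_mocked_reply():
    fake = mock.Mock()
    fake.generate.return_value = "mocked reply"
    assert handle_request(fake, "hi") == "mocked reply"
    fake.generate.assert_called_once_with("hi")

test_handle_request_returns_mocked_reply()
```

Redis can be mocked the same way, or replaced with a lightweight fake in fixtures, so the suite does not need a live server.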

Troubleshooting

  • Common fixes include verifying the API key in ~/.api-gemini, confirming that Redis and the RQ worker are running, and checking network connectivity.
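A quick connectivity check, assuming Redis on its default host and port:

```shell
redis-cli ping    # a healthy server replies PONG
```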

Contributing

  • Follow standard Git flow practices for contributing to the project.

License

  • Licensed under the MIT-0 License.