MCP AI Modem Server (local, compute-poor)

This project provides a locally executable MCP server designed for compute-poor environments. It enables protocol-based communication (MQTT, OPC UA, Modbus) and local LLM querying via Hugging Face transformers, and it runs airgapped for security, making it suitable for local applications that need robust industrial data communication.

✨ Features

  • 📡 Protocol Gateway: OPC UA, MQTT, Modbus for industrial data (see the example request after this list)
  • 🧠 Local LLM query support (no internet required)
  • 🔒 Airgapped: no external calls once models are downloaded
  • 🪟 Windows-compatible: works in Python environments with Conda or venv
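
A protocol request plausibly uses the same JSON envelope as the LLM example shown later. Only the "protocol" key is mirrored from that documented example; the field values below are illustrative assumptions, not the server's confirmed schema:

{
  "protocol": "modbus",
  "query": "read_holding_registers",
  "context": "unit 1, address 0, count 2"
}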

📦 Requirements

  • Python 3.x with pip (a Conda or venv environment is recommended)
  • The packages pinned in requirements.txt, including uvicorn (to serve the app) and Hugging Face transformers (for the local LLM)

🧪 Installation (Windows)

git clone https://github.com/YOUR_USERNAME/mcp-modem-ai-server.git
cd mcp-modem-ai-server
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
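
If you prefer Conda (listed as supported in the features above), an equivalent setup could look like the following; the environment name and Python version here are arbitrary choices, not project requirements:

conda create -n mcp-modem python=3.10
conda activate mcp-modem
pip install -r requirements.txt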

🚀 Run the Server

uvicorn main:app --reload
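
By default, uvicorn serves the app at http://127.0.0.1:8000. The --reload flag restarts the server on code changes and is intended for development, not production use.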

🧠 Example LLM Query

{
  "protocol": "llm",
  "query": "How to optimize coolant temperature?",
  "context": "Reactor 7, summer operation mode"
}
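
Assuming the server exposes a JSON POST endpoint (the /query path below is a guess, not confirmed by this README), the request above could be sent from a Windows command prompt like this once the server is running:

curl -X POST http://127.0.0.1:8000/query ^
  -H "Content-Type: application/json" ^
  -d "{\"protocol\": \"llm\", \"query\": \"How to optimize coolant temperature?\", \"context\": \"Reactor 7, summer operation mode\"}"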

๐Ÿ” Airgapped Use

After the models are downloaded, you can disconnect from the internet; inference will continue to work from the local cache. A minimal sketch of offline operation with Hugging Face transformers follows.
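
The environment variables below are real transformers/huggingface_hub offline switches, but the model name is a placeholder and this is not necessarily how this server loads its model:

# Force the Hugging Face stack to use only the local cache (no network).
# Set these before importing transformers so they take effect.
import os
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import pipeline

# Placeholder model: substitute whatever model this project actually downloads.
generator = pipeline("text-generation", model="distilgpt2")
result = generator("How to optimize coolant temperature?", max_new_tokens=50)
print(result[0]["generated_text"])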

🧾 License

MIT License - see LICENSE file.