# MCP AI Modem Server (local, compute-poor)

This project provides a locally executable MCP server for compute-poor environments. It enables protocol-based communication (MQTT, OPC UA, Modbus) and local LLM querying via Hugging Face Transformers, and it runs airgapped once models are downloaded, making it suitable for local applications that need robust, secure industrial data communication.
## ✨ Features
- 📡 Protocol Gateway: OPC UA, MQTT, Modbus (for industrial data)
- 🧠 Local LLM query support (no internet required)
- 🔒 Airgapped: no external calls once models are downloaded
- 🪟 Windows-compatible: works in Python environments with Conda or venv
## 📦 Requirements
- Python 3.8+
- transformers
- uvicorn
- Protocol libraries for OPC UA, Modbus, and MQTT (a sample requirements.txt is sketched below)
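The exact protocol libraries are not named in this README; the sketch below assumes common choices (asyncua for OPC UA, pymodbus for Modbus, paho-mqtt for MQTT) plus torch as the transformers inference backend:

```
# requirements.txt sketch -- the package choices below are assumptions,
# not pinned by this README
transformers
torch            # inference backend for transformers
fastapi          # assumed web framework (uvicorn serves any ASGI app)
uvicorn
asyncua          # OPC UA library (assumed choice)
pymodbus         # Modbus library (assumed choice)
paho-mqtt        # MQTT client library (assumed choice)
```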
## 🧪 Installation (Windows)
```
git clone https://github.com/YOUR_USERNAME/mcp-modem-ai-server.git
cd mcp-modem-ai-server
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```
## 🚀 Run the Server
```
uvicorn main:app --reload
```
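As an illustration only, here is a minimal sketch of what `main:app` could look like, wiring the query format from the next section to a local Hugging Face pipeline. The `/query` route, request schema, and `distilgpt2` model are assumptions for the sketch, not the project's documented API:

```python
# Hypothetical sketch of main.py -- endpoint, schema, and model are
# illustrative assumptions; substitute the project's real ones.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Load a small local text-generation model once at startup.
# "distilgpt2" is a placeholder for whatever model you downloaded.
generator = pipeline("text-generation", model="distilgpt2")

class QueryRequest(BaseModel):
    protocol: str       # e.g. "llm", "mqtt", "opcua", "modbus"
    query: str
    context: str = ""

@app.post("/query")     # hypothetical route name
def handle_query(req: QueryRequest):
    if req.protocol == "llm":
        prompt = f"{req.context}\n{req.query}"
        result = generator(prompt, max_new_tokens=64)
        return {"answer": result[0]["generated_text"]}
    return {"error": f"protocol '{req.protocol}' is not handled in this sketch"}
```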
## 🧠 Example LLM Query
```json
{
  "protocol": "llm",
  "query": "How to optimize coolant temperature?",
  "context": "Reactor 7, summer operation mode"
}
```
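As a usage sketch, the query above could be posted to the server (uvicorn serves on http://127.0.0.1:8000 by default); the `/query` route matches the hypothetical server sketch earlier and is not documented by this README:

```python
# Hypothetical client call -- the /query route is an assumption.
import requests

payload = {
    "protocol": "llm",
    "query": "How to optimize coolant temperature?",
    "context": "Reactor 7, summer operation mode",
}
resp = requests.post("http://127.0.0.1:8000/query", json=payload, timeout=60)
print(resp.json())
```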
## 🔒 Airgapped Use
Once a model has been downloaded, you can disconnect from the internet; inference continues to work entirely offline.
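To make the offline guarantee explicit, the standard Hugging Face environment variables can force cache-only loading (these are real transformers/huggingface_hub settings, though this README does not mention them; the model name is a placeholder):

```python
# Force Hugging Face libraries to use only locally cached files.
# Set these before importing transformers so no network calls are attempted.
import os
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import pipeline

# Loads from the local cache; fails fast instead of contacting the Hub
# if the model was not downloaded beforehand.
generator = pipeline("text-generation", model="distilgpt2")
print(generator("Coolant temperature check:", max_new_tokens=32)[0]["generated_text"])
```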
## 🧾 License
MIT License - see the LICENSE file.