my_mcp_server


Model Context Protocol (MCP) is an open-source protocol that standardizes the interaction between large language models (LLMs) and external data sources and tools.

MCP gives any LLM a standardized way to connect to external data sources and tools, acting as a universal interface for AI applications, much like a USB-C port. The protocol defines core capabilities such as resources, prompts, tools, sampling, roots, and transports, and supports two transport mechanisms: stdio (standard input/output) and SSE (server-sent events), with stdio being the more commonly used. This document focuses on developing an MCP server that serves general LLMs, with a particular emphasis on tools.

Features

  • Standardized interaction between LLMs and external tools.
  • Supports stdio and SSE transport protocols.
  • Includes functionalities like resources, prompts, and tools.
  • Facilitates seamless access and processing of information.
  • Acts as a universal interface for AI applications.

MCP Tools

  • web_search: A tool for searching internet content and returning summarized results.
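
The sketch below shows one way such a tool might be registered on the server side using FastMCP from the official mcp Python SDK. The server name, the port 9000 (chosen to match the SSE client example below), and the placeholder search backend are assumptions for illustration, not code from this repository.

python
# server.py -- a minimal sketch of an MCP server exposing a web_search tool.
from mcp.server.fastmcp import FastMCP

# Port 9000 matches the SSE client example below; adjust as needed.
mcp = FastMCP('web-search', port=9000)

@mcp.tool()
async def web_search(query: str) -> str:
    """Search the internet for `query` and return a summarized result."""
    # Placeholder: call a real search API here and summarize the hits.
    return f'Results for: {query}'

if __name__ == '__main__':
    # transport='stdio' (the default) would serve process-based clients instead.
    mcp.run(transport='sse')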

Usage with Different Platforms

mcp

python
import asyncio
from mcp.client.sse import sse_client
from mcp import ClientSession

async def main():
    # Connect to the MCP server over SSE and open a client session.
    async with sse_client('http://localhost:9000/sse') as streams:
        async with ClientSession(*streams) as session:
            await session.initialize()

            # Call the web_search tool and print the result.
            res = await session.call_tool('web_search', {'query': 'Hangzhou weather today'})
            print(res)

if __name__ == '__main__':
    asyncio.run(main())
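
Since stdio is the more commonly used transport, here is a hedged sketch of the same call over stdio, again using the mcp Python SDK. The launch command and the server.py entry point are placeholders for however the server is actually started, and the server must be running with the stdio transport.

python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the MCP server as a subprocess and talk to it over stdin/stdout.
    # 'server.py' is a placeholder for the actual server entry point.
    params = StdioServerParameters(command='python', args=['server.py'])
    async with stdio_client(params) as streams:
        async with ClientSession(*streams) as session:
            await session.initialize()

            res = await session.call_tool('web_search', {'query': 'Hangzhou weather today'})
            print(res)

if __name__ == '__main__':
    asyncio.run(main())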

Cline


{
  "mcpServers": {
    "web-search": {
      "url": "https://mcp-test-whhergsbso.cn-hangzhou.fcapp.run/sse"
    }
  }
}
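
The url above appears to point at a sample cloud deployment; replace it with the SSE endpoint of your own server (for a locally running instance this would typically be http://localhost:9000/sse, matching the client example above).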