Agent-MCP

Agent-MCP is a local server-client setup that uses the Model Context Protocol (MCP) to enable seamless interaction between local and external MCP servers. The local MCP server performs web searches through the Google SerpAPI, while the client can connect both to the local server and to external MCP services such as those provided by Smithery. The system is built to be flexible, allowing different models and APIs, such as the OpenAI API, to be integrated to extend its capabilities. The setup is particularly useful for research and planning tasks, as it can handle complex queries and provide detailed insights into trending topics and technologies.

Features

  • Local MCP Server: Utilizes Google SerpAPI for web searches.
  • MCP Client: Connects to both local and external MCP servers.
  • Flexible API Integration: Supports OpenAI API for enhanced functionality.
  • Model Compatibility: Tested with Qwen2.5 and DeepSeek-V3 models.
  • Research and Planning: Includes custom agents for deep research and planning tasks.
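The repository's actual server code is not shown here, but a search tool backed by SerpAPI boils down to two steps: build a request against SerpAPI's `search.json` endpoint (with `engine`, `q`, and `api_key` parameters) and pull titles out of the `organic_results` field of the JSON response. The function names below are hypothetical:

```python
import json
from urllib.parse import urlencode

SERPAPI_ENDPOINT = "https://serpapi.com/search.json"

def build_search_url(query: str, api_key: str, num: int = 5) -> str:
    """Build a Google search request URL for SerpAPI."""
    params = {"engine": "google", "q": query, "num": num, "api_key": api_key}
    return f"{SERPAPI_ENDPOINT}?{urlencode(params)}"

def extract_titles(response: dict) -> list[str]:
    """Pull result titles out of SerpAPI's 'organic_results' field."""
    return [r.get("title", "") for r in response.get("organic_results", [])]

# Parsing a canned response dict (no network call):
sample = {"organic_results": [{"title": "AI trends 2025"}, {"title": "MCP overview"}]}
print(extract_titles(sample))  # → ['AI trends 2025', 'MCP overview']
```

A real tool would fetch `build_search_url(...)` over HTTP and feed `extract_titles` (plus snippets and links) back to the agent as the tool result.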

Usage with Different Platforms

mcp

```python
import asyncio
import json

from agents import ItemHelpers, MessageOutputItem, Runner, ToolCallItem, ToolCallOutputItem
from agents.mcp import MCPServerStdio

# LOCAL_SERVER_PATH, model, PlanningAgent, and ResearchAgent are defined
# elsewhere in this repository.

async def main():
    async with MCPServerStdio(
        params={"command": "python", "args": [LOCAL_SERVER_PATH]}
    ) as server:
        # List the tools exposed by the local MCP server
        tools = await server.list_tools()
        print(f"Connected to server with tools: {tools}")

        # Planning agent that can hand off to a research agent backed by the server
        planning_agent = PlanningAgent(
            name="Planning Agent",
            model=model,
            handoffs=[
                ResearchAgent(
                    name="Research Agent",
                    model=model,
                    mcp_servers=[server],
                )
            ],
        )

        # Run the agent and stream events as they arrive
        result_stream = Runner.run_streamed(
            starting_agent=planning_agent,
            input=(
                "What's the main focus on the topic of AI in 2025? "
                "Give me some trending academic papers' key points and an "
                "overview of new technologies in 2025."
            ),
            max_turns=15,
        )
        async for event in result_stream.stream_events():
            if event.type == "agent_updated_stream_event":
                print(f"Agent updated: {event.new_agent.name}")
            elif event.type == "run_item_stream_event":
                if isinstance(event.item, ToolCallItem):
                    print(f"Call Tool: {event.item.raw_item.name} with args: {event.item.raw_item.arguments}")
                    await asyncio.sleep(0.05)
                elif isinstance(event.item, ToolCallOutputItem):
                    text = json.loads(event.item.output)["text"]
                    print(f"Tool call output: {text[:100]} ...")
                    await asyncio.sleep(0.05)
                elif isinstance(event.item, MessageOutputItem):
                    text = ItemHelpers.text_message_output(event.item)
                    if text:
                        print(f"Running step: {text}")
                        await asyncio.sleep(0.05)
            elif event.type == "raw_response_event":
                # Skip raw token deltas; print any other raw payloads
                if not getattr(event.data, "delta", None):
                    print(f"Event: {event.data}")

asyncio.run(main())
```