python-sequential-thinking-mcp
The Sequential Thinking MCP Server is a Python-based implementation that facilitates detailed problem analysis using the Model Context Protocol (MCP). It breaks problems down into manageable steps, supports revising earlier thoughts and branching into alternative lines of reasoning, and is designed to integrate with AI assistants.
Sequential Thinking MCP Server (Python Implementation)
A Python implementation of the Sequential Thinking MCP server using the official Model Context Protocol (MCP) Python SDK. This server facilitates a detailed, step-by-step thinking process for problem-solving and analysis.
Features
- Break down complex problems into manageable steps
- Revise and refine thoughts as understanding deepens
- Branch into alternative paths of reasoning
- Adjust the total number of thoughts dynamically
- Generate and verify solution hypotheses
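Conceptually, each step the server records can be viewed as a small structured record. The dataclass below is only an illustrative sketch of that shape (the field names mirror the tool parameters documented later, not the repository's actual internals):
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThoughtData:
    """Illustrative shape of one recorded thinking step (not the actual implementation)."""
    thought: str                                # text of this thinking step
    thought_number: int                         # 1-based position in the sequence
    total_thoughts: int                         # current estimate of steps needed
    next_thought_needed: bool                   # whether another step should follow
    is_revision: bool = False                   # True if this revises an earlier thought
    revises_thought: Optional[int] = None       # number of the thought being revised
    branch_from_thought: Optional[int] = None   # thought number this branch starts from
    branch_id: Optional[str] = None             # identifier of the branch, if any
    needs_more_thoughts: bool = False           # signals the estimate was too low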
Usage
Running Directly
uv --directory "/path/to/sequential-thinking-mcp" run main.py
Development Mode
For development and testing, you can use the MCP CLI tools:
# Install MCP CLI tools
pip install "mcp[cli]"
# Run in development mode
mcp dev "/path/to/sequential-thinking-mcp/main.py"
# Run with the MCP Inspector
npx @modelcontextprotocol/inspector uv --directory "/path/to/sequential-thinking-mcp" run main.py
Integration
Install the server in Claude Desktop using the MCP CLI:
mcp install "/path/to/sequential-thinking-mcp/server.py"
Alternatively, add an entry like the following to your MCP client configuration (for example, Claude Desktop's claude_desktop_config.json):
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/sequential-thinking-mcp",
        "run",
        "main.py"
      ]
    }
  }
}
Sequential Thinking Tool
The server provides a tool called sequential_thinking with the following parameters:
- thought (string): The current thinking step
- thoughtNumber (integer): Current thought number
- totalThoughts (integer): Estimated total thoughts needed
- nextThoughtNeeded (boolean): Whether another thought step is needed
- isRevision (boolean, optional): Whether this revises previous thinking
- revisesThought (integer, optional): Which thought is being reconsidered
- branchFromThought (integer, optional): Branching point thought number
- branchId (string, optional): Branch identifier
- needsMoreThoughts (boolean, optional): If more thoughts are needed
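As a rough illustration of how a tool with this signature can be registered using the official MCP Python SDK (FastMCP), consider the sketch below. The server name, the in-memory thought_history list, and the return string are assumptions made for the example; the repository's main.py may be organized differently.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sequential-thinking")
thought_history: list[dict] = []   # illustrative in-memory store of recorded thoughts

@mcp.tool()
def sequential_thinking(
    thought: str,
    thoughtNumber: int,
    totalThoughts: int,
    nextThoughtNeeded: bool,
    isRevision: bool = False,
    revisesThought: int | None = None,
    branchFromThought: int | None = None,
    branchId: str | None = None,
    needsMoreThoughts: bool = False,
) -> str:
    """Record one step of the sequential thinking process."""
    thought_history.append({
        "thought": thought,
        "thoughtNumber": thoughtNumber,
        "totalThoughts": totalThoughts,
        "isRevision": isRevision,
        "revisesThought": revisesThought,
        "branchFromThought": branchFromThought,
        "branchId": branchId,
    })
    return f"Recorded thought {thoughtNumber} of {totalThoughts}"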
Resources
The server provides the following resources for accessing thought data:
- thoughts://history: Get the complete thought history
- thoughts://branches/{branch_id}: Get thoughts for a specific branch
- thoughts://summary: Get a summary of all thoughts and branches
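Continuing the sketch above, FastMCP exposes resources through decorated functions, including a URI template for the per-branch resource. The function bodies below are illustrative assumptions rather than the repository's code:
@mcp.resource("thoughts://history")
def get_history() -> str:
    """Return the complete thought history."""
    return "\n".join(t["thought"] for t in thought_history)

@mcp.resource("thoughts://branches/{branch_id}")
def get_branch(branch_id: str) -> str:
    """Return the thoughts recorded for a specific branch."""
    return "\n".join(t["thought"] for t in thought_history if t.get("branchId") == branch_id)

@mcp.resource("thoughts://summary")
def get_summary() -> str:
    """Return a short summary of all thoughts and branches."""
    branches = {t["branchId"] for t in thought_history if t.get("branchId")}
    return f"{len(thought_history)} thought(s) recorded across {len(branches)} branch(es)"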
Prompts
The server provides the following prompt:
- thinking_process_guide: Guide for using the sequential thinking process
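A prompt can be registered the same way; the wording below is a placeholder, not the prompt shipped with the server. With the pieces above in place, mcp.run() starts the server over stdio:
@mcp.prompt()
def thinking_process_guide() -> str:
    """Guide for using the sequential thinking process."""
    return (
        "Break the problem into numbered thoughts, revise earlier thoughts when "
        "your understanding changes, and branch when you want to explore an "
        "alternative line of reasoning."
    )

if __name__ == "__main__":
    mcp.run()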
Example Usage
# First thought
sequential_thinking(
thought="First, we need to understand the problem requirements.",
thoughtNumber=1,
totalThoughts=5,
nextThoughtNeeded=True
)
# Second thought
sequential_thinking(
thought="Now, let's analyze the key constraints.",
thoughtNumber=2,
totalThoughts=5,
nextThoughtNeeded=True
)
# Revise a thought
sequential_thinking(
thought="Actually, we need to clarify the problem requirements first.",
thoughtNumber=1,
totalThoughts=5,
nextThoughtNeeded=True,
isRevision=True,
revisesThought=1
)
# Branch from thought 2
sequential_thinking(
thought="Let's explore an alternative approach.",
thoughtNumber=3,
totalThoughts=5,
nextThoughtNeeded=True,
branchFromThought=2,
branchId="alternative-approach"
)
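These calls are normally issued by an MCP client on the assistant's side. For local testing without an assistant, a small script like the following can drive the server directly over stdio using the SDK's client API (the path is a placeholder, and the exact result format depends on the server):
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the server the same way as the "Running Directly" command above
    server = StdioServerParameters(
        command="uv",
        args=["--directory", "/path/to/sequential-thinking-mcp", "run", "main.py"],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "sequential_thinking",
                arguments={
                    "thought": "First, we need to understand the problem requirements.",
                    "thoughtNumber": 1,
                    "totalThoughts": 5,
                    "nextThoughtNeeded": True,
                },
            )
            print(result)

asyncio.run(main())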
Integration with Claude or Other AI Assistants
To use this server with Claude or other AI assistants that support MCP:
- Install the MCP server in Claude Desktop using the MCP CLI
- The AI can then use the sequential_thinking tool to break down complex problems
About Model Context Protocol (MCP)
The Model Context Protocol (MCP) is a standardized way for applications to provide context and tools to LLMs. It allows:
- Resources: Providing contextual data to the LLM
- Tools: Exposing functionality for the LLM to take actions
- Prompts: Defining reusable templates for LLM interactions
For more information, visit modelcontextprotocol.io
License
MIT