# Social MCP: Multi-Agent Social Media Automation

## Overview

Social MCP is a multi-agent system for automating content extraction, tweet generation, posting, and engagement on platforms such as Twitter and Bluesky. It uses LLMs for content generation, Playwright for browser automation, and platform APIs for integration.
Architecture
- MCP Server: Hosts tool endpoints for:
- Content extraction
- Tweet generation using LLMs
- Browser automation for Twitter (using Playwright)
- Social media engagement
- Content scheduling
- MCP Client: Orchestrates workflow, runs agents, manages LLM, and coordinates tool calls
- Common: Shared utilities for Google Sheets integration, retry logic, and secrets management
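The server's tool endpoints are essentially named callables that the client invokes by name. A minimal sketch of that dispatch pattern (the `ToolRegistry` class and the placeholder tool body below are illustrative, not the project's actual server code):

```python
from typing import Any, Callable, Dict


class ToolRegistry:
    """Hypothetical sketch of how server.py might map tool names to callables."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str) -> Callable:
        """Decorator that registers a function under a tool name."""
        def wrapper(func: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = func
            return func
        return wrapper

    def call(self, name: str, **kwargs: Any) -> Any:
        """Dispatch a tool call by name, as the MCP client would."""
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)


registry = ToolRegistry()


@registry.register("generate_tweets")
def generate_tweets(topic: str) -> str:
    # Placeholder for the LLM call that generate_tweets.py would make.
    return f"Draft tweet about {topic}"
```

The real server would expose these callables over the MCP protocol rather than an in-process dictionary, but the registration/dispatch shape is the same.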
## Core Features

- **Twitter Automation**
  - Persistent session management
  - Robust login detection
  - Tweet posting with retry logic
  - Search and engagement automation
  - Hashtag-based content discovery
- **Content Generation**
  - LLM-powered tweet generation
  - Content scheduling
  - Multi-platform support
- **Browser Automation**
  - Persistent session handling
  - Robust page-state verification
  - Automatic recovery from navigation issues
  - URL encoding and proper page loading
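"Tweet posting with retry logic" typically means wrapping flaky browser operations in an exponential-backoff retry. A minimal sketch of what a helper like `common/retry_utils.py` could provide (the decorator name and defaults are assumptions, not the project's actual API):

```python
import time
from functools import wraps


def retry(attempts: int = 3, base_delay: float = 1.0):
    """Retry a flaky operation with exponential backoff between attempts."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator


@retry(attempts=3, base_delay=0.01)
def flaky_post(state={"calls": 0}):
    # Fails twice, then succeeds -- stands in for a transient Playwright error.
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("timeout")
    return "posted"
```

Re-raising on the final attempt keeps failures visible to the caller instead of silently swallowing them.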
## Directory Structure

```text
social_mcp/
├── mcp_server/
│   ├── tools/
│   │   ├── post_tweets.py      # Twitter automation
│   │   ├── generate_tweets.py  # LLM tweet generation
│   │   └── engage_posts.py     # Social engagement
│   ├── server.py
│   └── config.py
├── mcp_client/
│   ├── agents/
│   ├── llm_orchestrator.py
│   ├── workflow_graph.py
│   └── client.py
├── common/
│   ├── google_sheets.py
│   ├── retry_utils.py
│   └── secrets.py
├── .env
├── requirements.txt
├── README.md
└── setup.sh
```
## Implementation Details

### Twitter Automation (`post_tweets.py`)

- **Session Management**
  - Persistent browser sessions
  - Automatic login detection
  - Session recovery
- **Robust Page Handling**
  - URL encoding for search terms
  - Continuous page-state verification
  - Automatic navigation recovery
  - Retry logic for failed operations
- **Engagement Features**
  - Hashtag-based search
  - Tweet-liking automation
  - Content discovery
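Hashtag-based search comes down to building a properly encoded search URL before navigating. A sketch of the encoding step (the helper name and the `f=live` "Latest" parameter are assumptions about how `post_tweets.py` builds its URLs):

```python
from urllib.parse import quote


def build_search_url(term: str, latest: bool = True) -> str:
    """Return a percent-encoded Twitter search URL for a hashtag or phrase."""
    # quote() encodes '#' and spaces so navigation doesn't silently drop the query.
    url = f"https://twitter.com/search?q={quote(term, safe='')}"
    if latest:
        url += "&f=live"  # sort by most recent tweets
    return url
```

Without encoding, a raw `#` in the query string would be treated as a URL fragment and the search term would be lost.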
## Browser Automation Best Practices

- **Page Loading**
  - Use `domcontentloaded` for the initial load
  - Wait for specific elements
  - Verify page state
- **Error Handling**
  - Retry logic for failed operations
  - Graceful recovery from errors
  - Detailed logging
- **Session Management**
  - Persistent context
  - Login-state verification
  - Automatic session recovery
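These practices can be sketched with Playwright's persistent context, which keeps cookies and login state on disk between runs. This is an illustrative outline rather than the project's actual code; the compose-box selector used for login detection is an assumption.

```python
def launch_options(session_dir: str, headless: bool) -> dict:
    """Pure helper: the kwargs passed to launch_persistent_context."""
    return {"user_data_dir": session_dir, "headless": headless}


def open_home_page(session_dir: str = "./playwright_session",
                   headless: bool = True) -> bool:
    """Open twitter.com with a persistent session and report login state."""
    # Imported lazily so the module loads even without Playwright installed.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        context = p.chromium.launch_persistent_context(
            **launch_options(session_dir, headless))
        page = context.new_page()
        # domcontentloaded for the initial load, then wait on a concrete element
        # instead of trusting the generic 'load' event.
        page.goto("https://twitter.com/home", wait_until="domcontentloaded")
        # Assumed selector: the compose box only renders when logged in.
        logged_in = page.locator('[data-testid="tweetTextarea_0"]').count() > 0
        context.close()
        return logged_in
```

Because the context is persistent, a successful manual login in a headed run carries over to later headless runs, which is what makes robust login detection (rather than re-login on every run) possible.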
## Setup

1. Clone the repo and enter the directory:

   ```bash
   git clone <repository-url>
   cd social_mcp
   ```

2. Create a virtual environment and install dependencies:

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   playwright install
   ```

3. Configure your `.env` file with:

   ```env
   TWITTER_USERNAME=your_username
   TWITTER_PASSWORD=your_password
   PLAYWRIGHT_SESSION_DIR=./playwright_session
   HEADLESS=false  # Set to true for headless operation
   ```

4. Set up Google Sheets API and OAuth credentials.
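At startup the code needs these values out of `.env`. Real projects typically use `python-dotenv` for this; the stdlib-only sketch below just illustrates the key/value format, including the inline `#` comment on `HEADLESS`:

```python
def load_env(path: str = ".env") -> dict:
    """Parse simple KEY=value lines, skipping blanks, comments, and inline comments."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop comments
            if "=" in line:
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    return values


def headless_enabled(values: dict) -> bool:
    """HEADLESS=false keeps the browser visible for debugging and manual login."""
    return values.get("HEADLESS", "true").lower() == "true"
```

Note this toy parser does not handle quoted values or `#` inside values; `python-dotenv` does.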
## Usage

1. Start the MCP server from `social_mcp/`:

   ```bash
   python mcp_server/server.py
   ```

2. Run the MCP client from `social_mcp/`:

   ```bash
   python mcp_client/client.py
   ```
## Adding New Features

- **New tools**: add to `mcp_server/tools/` and register in `server.py`
- **New agents**: add to `mcp_client/agents/` and update `workflow_graph.py`
- **Browser automation**: follow the patterns in `post_tweets.py` for a robust implementation
## Security

- Store all secrets in `.env`
- Use OAuth scopes for Google Sheets and Bluesky
- Playwright scripts handle MFA/CAPTCHA gracefully
- Session data is stored securely in the `playwright_session` directory
## Best Practices

- **Browser Automation**
  - Always verify page state
  - Use proper URL encoding
  - Implement retry logic
  - Handle navigation issues
- **Error Handling**
  - Log all operations
  - Implement graceful recovery
  - Use appropriate timeouts
- **Session Management**
  - Verify login state
  - Handle session recovery
  - Clean up resources properly
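"Clean up resources properly" usually means tying the browser context's lifetime to a context manager so it closes even when an operation inside fails. A hypothetical sketch (the `managed` helper and `FakeContext` stand-in are not project code):

```python
from contextlib import contextmanager


@contextmanager
def managed(resource, closer=lambda r: r.close()):
    """Yield a resource and guarantee its cleanup, even on error."""
    try:
        yield resource
    finally:
        closer(resource)  # runs whether the body succeeded or raised


class FakeContext:
    """Stands in for a Playwright browser context in this sketch."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True
```

With this shape, a navigation failure mid-workflow still releases the browser instead of leaving an orphaned process holding the session directory lock.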
## To Do

- Get content from Twitter and Bluesky, use an LLM to generate a response, and post it
- Incorporate a second Twitter account for posting and liking
- Include the searches "web3", "nft", and "crypto" in search and engagement
- Randomize workflow start times using values from `.env`
- Implement Reddit posting
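The randomized-start item could read a delay window from `.env` and sleep a random amount before kicking off the workflow. A sketch (the `WORKFLOW_DELAY_MIN`/`WORKFLOW_DELAY_MAX` variable names are hypothetical):

```python
import os
import random


def random_start_delay(env=os.environ) -> float:
    """Pick a random delay in seconds inside the window configured in .env."""
    low = float(env.get("WORKFLOW_DELAY_MIN", "0"))
    high = float(env.get("WORKFLOW_DELAY_MAX", "3600"))
    return random.uniform(low, high)

# The caller would then time.sleep(random_start_delay()) before starting the run.
```

Randomizing start times makes the automation's posting schedule look less mechanical and avoids hammering the platform at a fixed time every day.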