Site Cloner MCP Server
This is an MCP (Model Context Protocol) server designed to help LLMs (like Claude) clone websites by providing tools to fetch, analyze, and download website assets.
Features
- Fetch HTML content from any URL
- Extract assets (CSS, JavaScript, images, fonts, etc.) from HTML content
- Download individual assets to a local directory
- Parse CSS files to extract linked assets (fonts, images)
- Create a sitemap of a website
- Analyze page structure and layout
Requirements
- Docker installed on your system
Usage
Running with Docker
- Build the Docker image:
docker build -t site-cloner-mcp .
- Run the container:
docker run -i --rm site-cloner-mcp
For persistent storage of downloaded files, you can mount a volume:
docker run -i --rm -v $(pwd)/downloaded_sites:/app/downloaded_site site-cloner-mcp
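Once the image builds, you can check that the server actually speaks MCP over stdio before wiring it into an editor. The sketch below is a minimal smoke test, assuming the official MCP Python SDK (`pip install mcp`) and the image tag used above; it is illustrative and not part of this repository.

```python
# smoke_test.py -- verify the dockerized server starts and lists its tools.
# Assumes the official MCP Python SDK ("pip install mcp") and the image tag built above.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="docker",
    args=["run", "-i", "--rm", "site-cloner-mcp"],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

asyncio.run(main())
```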
Connecting to Cursor
To set up this MCP server in Cursor, you have two options:
1. Project-specific configuration
Create a .cursor/mcp.json file in your project root with the following content:
{
  "mcpServers": {
    "site-cloner": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "site-cloner-mcp"
      ]
    }
  }
}
2. Global configuration
To make the MCP server available globally in Cursor, go to Cursor Settings → MCP → Add new Global MCP Server and add the following configuration:
{
  "mcpServers": {
    "site-cloner": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "site-cloner-mcp"
      ]
    }
  }
}
Available Tools
1. fetch_page
Fetches the HTML content of a webpage.
Args:
url: The URL of the webpage to fetch
2. extract_assets
Extracts links to assets from HTML content.
Args:
url: The URL of the webpage (used for resolving relative URLs)
html_content: The HTML content to parse
3. download_asset
Downloads an asset from a URL and saves it to the specified directory.
Args:
url: The URL of the asset to download
output_dir: The directory to save the asset to (default: downloaded_site)
4. parse_css_for_assets
Parses CSS content to extract URLs of referenced assets like fonts and images.
Args:
css_url: The URL of the CSS file (used for resolving relative URLs)
css_content: The CSS content to parse (if None, it will be fetched from css_url)
5. create_site_map
Creates a sitemap of the website starting from the given URL.
Args:
url: The starting URL to crawl
max_depth: Maximum depth to crawl (default: 1)
6. analyze_page_structure
Analyzes the structure of an HTML page and extracts key components.
Args:
html_content: The HTML content to analyze
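Taken together, a typical cloning pass chains the tools above: fetch the page, extract asset URLs from the HTML, then download each asset. The sketch below reuses the MCP Python SDK client from the earlier smoke test; the tool names and arguments come from the list above, but the shape of each result payload (assumed here to be text content, with extract_assets returning a JSON list of URLs) is an assumption about this server's implementation.

```python
# clone_sketch.py -- illustrative pass over the tool pipeline described above.
# Result shapes (text content blocks, JSON list from extract_assets) are assumptions.
import asyncio
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="docker", args=["run", "-i", "--rm", "site-cloner-mcp"]
)

async def clone(url: str) -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. fetch_page: get the raw HTML of the target page.
            page = await session.call_tool("fetch_page", {"url": url})
            html = page.content[0].text  # assumes a single text content block

            # 2. extract_assets: find CSS/JS/image/font URLs referenced by the page.
            assets = await session.call_tool(
                "extract_assets", {"url": url, "html_content": html}
            )

            # 3. download_asset: save each discovered asset locally.
            #    Assumes extract_assets returns a JSON list of URLs as text.
            for asset_url in json.loads(assets.content[0].text):
                await session.call_tool(
                    "download_asset", {"url": asset_url, "output_dir": "downloaded_site"}
                )

asyncio.run(clone("https://example.com"))
```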
Example Usage with Claude
- Ask Claude to clone a website: "Please clone the website at example.com"
- Claude will use the available tools to:
  - Fetch the HTML content
  - Extract assets
  - Download necessary files
  - Analyze the structure
  - Create a local copy of the site
Troubleshooting
Server not showing up in Cursor
- Restart Cursor
- Check your configuration file syntax
- Make sure Docker is installed and running correctly:
docker --version
- Look at Cursor's MCP logs for errors: open the Output panel and select Cursor MCP from the dropdown
- Try running the server manually to see any errors:
docker run -i --rm site-cloner-mcp
Module Not Found Error
If you encounter a "ModuleNotFoundError: No module named 'site_cloner'" error:
- Check that your package name in pyproject.toml is correct
- Make sure the import statements in your Python files don't include the "src." prefix (see the sketch after this list)
- Rebuild the Docker image after making changes:
docker build --no-cache -t site-cloner-mcp .
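As a purely illustrative example of the import rule above, suppose the package lives at src/site_cloner/server.py and is declared as site_cloner in pyproject.toml (these module and symbol names are hypothetical). Imports should then go through the installed package name rather than the src directory:

```python
# Hypothetical layout: src/site_cloner/server.py, packaged as "site_cloner" in pyproject.toml.

# Fails inside the container, where only the installed package is importable:
# from src.site_cloner.server import main

# Works, because it uses the package name declared in pyproject.toml:
from site_cloner.server import main  # adjust module and symbol names to your project
```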
Checking Docker Logs
To check Docker logs for any errors:
docker logs $(docker ps -q --filter ancestor=site-cloner-mcp)
Notes
- The server automatically organizes downloaded assets into subdirectories based on content type (html, css, js, images, fonts, videos, other)
- When cloning a site, be mindful of copyright and terms of service restrictions
- Some websites may block automated requests, in which case you might need to adjust the user agent string in the code
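If a site rejects the server's requests, the User-Agent header is the usual first adjustment. The snippet below is a generic sketch using the requests library and a browser-style UA string; the HTTP client this server actually uses, and where its headers are set, may differ, so treat it as a pattern rather than a drop-in patch.

```python
# Generic sketch of sending a browser-like User-Agent header.
# The server's own HTTP client and header handling may differ -- adapt as needed.
import requests

HEADERS = {
    # A mainstream browser UA string; some sites block obvious bot defaults.
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/124.0 Safari/537.36"
    )
}

response = requests.get("https://example.com", headers=HEADERS, timeout=30)
response.raise_for_status()
html = response.text
```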