mcp-server-webcrawl
mcp-server-webcrawl is an open-source tool that integrates web crawling capabilities with AI language models using the Model Context Protocol. It offers flexible filtering and searching, and is compatible with a variety of web crawlers, enabling efficient web content analysis.
Website | GitHub | Docs | PyPI
Bridge the gap between your web crawl and AI language models using Model Context Protocol (MCP). With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously. The server includes a full-text search interface with boolean support, resource filtering by type, HTTP status, and more.
mcp-server-webcrawl provides the LLM with a complete menu for searching your web content, and works with a variety of web crawlers.
mcp-server-webcrawl is free and open source, and requires Claude Desktop and Python (>=3.10). Install it from the command line via pip:
pip install mcp-server-webcrawl
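To verify the installation, pip can confirm the package is present (the metadata shown will vary by version):

```bash
pip show mcp-server-webcrawl
```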
Features
- Claude Desktop ready
- Multi-crawler compatible
- Filter by type, status, and more
- Boolean search support
- Support for Markdown and snippets
- Roll your own website knowledgebase
MCP Configuration
From the Claude Desktop menu, navigate to File > Settings > Developer. Click Edit Config to locate the configuration file, open it in the editor of your choice, and modify the example to reflect your datasrc path.
You can set up additional mcp-server-webcrawl connections under mcpServers as needed; a complete, filled-in example appears after the crawler sections below.
```json
{
  "mcpServers": {
    "webcrawl": {
      "command": [varies by OS/env, see below],
      "args": [varies by crawler, see below]
    }
  }
}
```
For step-by-step setup, refer to the Setup Guides.
Windows vs. macOS
- Windows: set command to "mcp-server-webcrawl"
- macOS: set command to the absolute path, i.e. the value returned by `which mcp-server-webcrawl`
For example:
"command": "/Users/yourusername/.local/bin/mcp-server-webcrawl",
To find the absolute path of the mcp-server-webcrawl executable on your system:

- Open Terminal
- Run `which mcp-server-webcrawl`
- Copy the full path returned and use it in your config file
wget (using --mirror)
The datasrc argument should be set to the parent directory of the mirrors.
"args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]
WARC
The datasrc argument should be set to the parent directory of the WARC files.
"args": ["--crawler", "warc", "--datasrc", "/path/to/warc/archives/"]
InterroBot
The datasrc argument should be set to the direct path to the database.
"args": ["--crawler", "interrobot", "--datasrc", "/path/to/Documents/InterroBot/interrobot.v2.db"]
Katana
The datasrc argument should be set to the directory containing the root hosts. Katana separates pages and media by host, so a layout of ./archives/example.com/example.com is expected and appropriate. More complicated sites expand the crawl data into additional origin-host directories.
"args": ["--crawler", "katana", "--datasrc", "/path/to/katana/archives/"]
SiteOne (using Generate offline website)
The datasrc argument should be set to the parent directory of the archives; archiving (Generate offline website) must be enabled.
"args": ["--crawler", "siteone", "--datasrc", "/path/to/SiteOne/archives/"]
Boolean Search Syntax
The query engine supports field-specific (field: value) searches and complex boolean expressions. Fulltext is supported as a combination of the url, content, and headers fields.

While the API is designed to be consumed by the LLM directly, it can be helpful to familiarize yourself with the search syntax. Searches generated by the LLM are inspectable, but generally collapsed in the UI. If you need to see the query, expand the MCP collapsible.
Example Queries
| Query Example | Description |
| --- | --- |
| privacy | fulltext single keyword match |
| "privacy policy" | fulltext match of exact phrase |
| boundar* | fulltext wildcard, matches results starting with boundar (boundary, boundaries) |
| id: 12345 | id field matches a specific resource by ID |
| url: example.com/* | url field matches results with URL containing example.com/ |
| type: html | type field matches HTML pages only |
| status: 200 | status field matches specific HTTP status codes (equal to 200) |
| status: >=400 | status field matches HTTP status codes greater than or equal to 400 |
| content: h1 | content field matches content (HTTP response body, often, but not always, HTML) |
| headers: text/xml | headers field matches HTTP response headers |
| privacy AND policy | fulltext matches both |
| privacy OR policy | fulltext matches either |
| policy NOT privacy | fulltext matches policies not containing privacy |
| (login OR signin) AND form | fulltext matches login or signin, together with form |
| type: html AND status: 200 | fulltext matches only HTML pages with HTTP success |
Field Search Definitions
Field search adds precision, allowing you to specify which columns of the search index to filter. Rather than searching the entire content, you can restrict your query to specific attributes such as URLs, headers, or the content body. This approach improves efficiency when looking for specific attributes or patterns within crawl data.
| Field | Description |
| --- | --- |
| id | database ID |
| url | resource URL |
| type | enumerated list of types (see types table) |
| status | HTTP response codes |
| headers | HTTP response headers |
| content | HTTP body: HTML, CSS, JS, and more |
Content Types
Crawls contain a multitude of resource types beyond HTML pages. The type: field search allows filtering by broad content type groups, particularly useful when filtering images without complex extension queries.

For example, you might search for type: html NOT content: login to find pages without "login," or type: img to analyze image resources. The table below lists all supported content types in the search system.
| Type | Description |
| --- | --- |
| html | webpages |
| iframe | iframes |
| img | web images |
| audio | web audio files |
| video | web video files |
| font | web font files |
| style | CSS stylesheets |
| script | JavaScript files |
| rss | RSS syndication feeds |
| text | plain text content |
| pdf | PDF files |
| doc | MS Word documents |
| other | uncategorized |
Extras
The extras parameter provides additional processing options, transforming result data (markdown, snippets) or connecting the LLM to external data (thumbnails). These options can be combined as needed to achieve the desired result format.
| Extra | Description |
| --- | --- |
| thumbnails | Generates base64-encoded images to be viewed and analyzed by AI models. Enables image description, content analysis, and visual understanding while keeping token output minimal. Works with images, which can be filtered using type: img in queries. SVG is not supported. |
| markdown | Provides the HTML content field as concise markdown, reducing token usage and improving readability for LLMs. Works with HTML, which can be filtered using type: html in queries. |
| snippets | Matches fulltext queries to contextual keyword usage within the content. When used without requesting the content field (or the markdown extra), it provides an efficient means of refining a search without pulling down complete page contents. Also great for rendering old-school hit-highlighted results as a list, like Google search in 1999. Works with HTML, CSS, JS, or any text-based crawled file. |