MCP server for search and retrieval of web crawler content
Project description
mcp-server-webcrawl
Bridge the gap between your web crawl and AI language models using Model Context Protocol (MCP). With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously. The server includes a full-text search interface with boolean support, as well as resource filtering by type, HTTP status, and more.
mcp-server-webcrawl provides the LLM with a complete menu of your web content to search, and works with a variety of web crawlers:
| Crawler/Format | Description | Setup Guide |
|---|---|---|
| WARC | Standard web archive format | Setup Guide |
| wget | Command-line site mirroring tool | Setup Guide |
| InterroBot | Website analysis and SEO crawler | Setup Guide |
| Katana | Security-focused reconnaissance crawler | Setup Guide |
| SiteOne | GUI crawler with offline site export | Setup Guide |
mcp-server-webcrawl is free and open source, and requires Claude Desktop and Python (>=3.10). It is installed from the command line via pip:
pip install mcp-server-webcrawl
For step-by-step MCP server setup, refer to the Setup Guides.
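For orientation, a Claude Desktop entry for the server looks roughly like the sketch below, written as a small Python script that adds the entry to claude_desktop_config.json. This is a minimal sketch, not a substitute for the setup guides: the --crawler and --datasrc arguments follow the guides, and the crawler type, archive path, and config location are assumptions to adjust for your platform and data.

```python
import json
from pathlib import Path

# Minimal sketch: register mcp-server-webcrawl in Claude Desktop's config.
# Path shown is the Windows location; on macOS the config lives at
# ~/Library/Application Support/Claude/claude_desktop_config.json
config_path = Path.home() / "AppData/Roaming/Claude/claude_desktop_config.json"

config = json.loads(config_path.read_text()) if config_path.exists() else {}
config.setdefault("mcpServers", {})["webcrawl"] = {
    "command": "mcp-server-webcrawl",
    # --crawler selects the source format (e.g. wget, warc, interrobot,
    # katana, siteone); --datasrc points at your crawl archives.
    "args": ["--crawler", "wget", "--datasrc", "/path/to/archives/"],
}
config_path.write_text(json.dumps(config, indent=2))
```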
Features
- Claude Desktop ready
- Multi-crawler compatible
- Filter by type, status, and more
- Boolean search support
- Support for Markdown and snippets
- Roll your own website knowledgebase
Boolean Search Syntax
The query engine supports field-specific (field: value) searches and complex boolean expressions. Fulltext is supported as a combination of the url, content, and headers fields.
While the API interface is designed to be consumed by the LLM directly, it can be helpful to familiarize yourself with the search syntax. Searches generated by the LLM are inspectable, but generally collapsed in the UI. If you need to see the query, expand the MCP collapsible.
Example Queries
| Query Example | Description |
|---|---|
| privacy | fulltext single keyword match |
| "privacy policy" | fulltext match exact phrase |
| boundar* | fulltext wildcard matches results starting with boundar (boundary, boundaries) |
| id: 12345 | id field matches a specific resource by ID |
| url: example.com/* | url field matches results with URL containing example.com/ |
| type: html | type field matches for HTML pages only |
| status: 200 | status field matches specific HTTP status codes (equal to 200) |
| status: >=400 | status field matches a range of HTTP status codes (greater than or equal to 400) |
| content: h1 | content field matches content (HTTP response body, often, but not always HTML) |
| headers: text/xml | headers field matches HTTP response headers |
| privacy AND policy | fulltext matches both |
| privacy OR policy | fulltext matches either |
| policy NOT privacy | fulltext matches results containing policy but not privacy |
| (login OR signin) AND form | fulltext matches login or signin, together with form |
| type: html AND status: 200 | field search matches only HTML pages with a 200 (success) HTTP status |
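The syntax ultimately resolves to SQLite FTS5, which natively supports the same boolean operators, prefix wildcards, and per-column filters. Below is a minimal sketch of that kind of index; the schema is illustrative, not the server's actual one, and it assumes your Python build ships with FTS5 enabled.

```python
import sqlite3

# Illustrative FTS5 index over the fulltext fields (url, content, headers).
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE resources USING fts5(url, content, headers)")
db.execute(
    "INSERT INTO resources VALUES "
    "('https://example.com/privacy', '<h1>Privacy Policy</h1>', 'text/html')"
)

# Boolean operators, prefix wildcards, and column filters all pass through:
for match in ('privacy AND policy', 'boundar*', 'headers: "text/html"'):
    rows = db.execute(
        "SELECT url FROM resources WHERE resources MATCH ?", (match,)
    ).fetchall()
    print(match, "->", rows)
```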
Field Search Definitions
Field search provides search precision, allowing you to specify which columns of the search index to filter. Rather than searching the entire content, you can restrict your query to specific attributes like URLs, headers, or content body. This approach improves efficiency when looking for specific attributes or patterns within crawl data.
| Field | Description |
|---|---|
| id | database ID |
| url | resource URL |
| type | enumerated list of types (see types table) |
| status | HTTP response codes |
| headers | HTTP response headers |
| content | HTTP body—HTML, CSS, JS, and more |
Content Types
Crawls contain resource types beyond HTML pages. The type: field search allows filtering by broad content type groups, particularly useful when filtering images without complex extension queries.
For example, you might search for type: html NOT content: login to find pages without "login," or type: img to analyze image resources. The table below lists all supported content types in the search system.
| Type | Description |
|---|---|
| html | webpages |
| iframe | iframes |
| img | web images |
| audio | web audio files |
| video | web video files |
| font | web font files |
| style | CSS stylesheets |
| script | JavaScript files |
| rss | RSS syndication feeds |
| text | plain text content |
| pdf | PDF files |
| doc | MS Word documents |
| other | uncategorized |
Extras
The extras parameter provides additional processing options, transforming result data (markdown, snippets) or connecting the LLM to external data (thumbnails). These options can be combined as needed to achieve the desired result format.
| Extra | Description |
|---|---|
| thumbnails | Generates base64 encoded images to be viewed and analyzed by AI models. Enables image description, content analysis, and visual understanding while keeping token output minimal. Works with images, which can be filtered using type: img in queries. SVG is not supported. |
| markdown | Provides the HTML content field as concise markdown, reducing token usage and improving readability for LLMs. Works with HTML, which can be filtered using type: html in queries. |
| snippets | Matches fulltext queries to contextual keyword usage within the content. When used without requesting the content field (or markdown extra), it can provide an efficient means of refining a search without pulling down the complete page contents. Also great for rendering old school hit-highlighted results as a list, like Google search in 1999. Works with HTML, CSS, JS, or any text-based, crawled file. |
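As a rough illustration of how the pieces fit together, the sketch below shows a search request combining a field-filtered query with an extra, and the decoding of a thumbnail from a result. The parameter and key names here are hypothetical, chosen to mirror the descriptions above rather than a verified request schema; in practice the LLM constructs these calls for you.

```python
import base64
from pathlib import Path

# Hypothetical request shape: a field-filtered query plus an extras option.
request = {
    "query": "type: img AND status: 200",
    "extras": ["thumbnails"],  # extras can be combined, e.g. with "snippets"
    "limit": 5,
}

# Hypothetical result carrying a base64-encoded thumbnail; decoding yields
# image bytes an AI model (or you) can inspect.
result = {
    "url": "https://example.com/logo.png",
    "thumbnail": base64.b64encode(b"<png bytes>").decode("ascii"),
}
Path("thumbnail.png").write_bytes(base64.b64decode(result["thumbnail"]))
```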
Specialty Prompts
A collection of prompts for site analysis using mcp-server-webcrawl. The prompts are cut and paste, used as raw markdown. If you want to shortcut the site selection (one less query), paste the prompt and add "Can you audit [site name or URL]?"
🔍 SEO Audit (auditseo.md)
Technical search engine optimization analysis. Covers the basics, with options to dive deeper.
🔗 404 Audit (audit404.md)
Systematic broken link detection and pattern analysis. Not only finds issues, but suggests fixes.
⚡ Performance Audit (auditperf.md)
Website speed and optimization analysis. Real talk.
📁 File Type Audit (auditfiles.md)
File organization and asset analysis. Discover the composition of your website.
🌐 Gopher Service (gopher.md)
An old-fashioned search interface inspired by the Gopher clients of yesteryear.
🌐 Boolean Search Self-Test (testsearch.md)
A battery of tests to check for Boolean logical inconsistencies in the search query parser and the subsequent FTS5 conversion.
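For a sense of what such a self-test asserts, one classic invariant is that results for a term A partition cleanly into A AND B and A NOT B. A tiny set-based sketch of the idea (illustrative only, not the prompt's actual test battery):

```python
# For any corpus and any two terms: |A| == |A AND B| + |A NOT B|
a = {"/privacy", "/policy", "/terms"}  # results matching term A
b = {"/policy", "/contact"}            # results matching term B

assert len(a) == len(a & b) + len(a - b)
print("AND/NOT results are consistent")
```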
Project details
Download files
File details
Details for the file mcp_server_webcrawl-0.10.6.tar.gz.
File metadata
- Download URL: mcp_server_webcrawl-0.10.6.tar.gz
- Size: 67.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8587355ddbf753c8b6659b018fc69d1a2a86be215b4158097582d56ef047cafa |
| MD5 | ee1e7ccb61da5fa4036946f07133b014 |
| BLAKE2b-256 | 6f493c2bb759918569da6c59e84f886fa9176ba41a7ca4806ce8a394002b1d9c |
File details
Details for the file mcp_server_webcrawl-0.10.6-py3-none-any.whl.
File metadata
- Download URL: mcp_server_webcrawl-0.10.6-py3-none-any.whl
- Size: 84.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9296185f0be9b405b384e291fc5372f23e152804bd2dbac72ca521213abe94af |
| MD5 | f1a2a8b8a5079836d2b96ad421807f69 |
| BLAKE2b-256 | c45868f63b39e9e8b4ccc8bcbe82d6cbcf2569530b5ce1f6d5affbf78f682687 |