mcp-server-webcrawl

MCP server for search and retrieval of web crawler content

Bridge the gap between your web crawl and AI language models using the Model Context Protocol (MCP). With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously. The server includes a full-text search interface with boolean support and resource filtering by type, HTTP status, and more.

mcp-server-webcrawl provides the LLM a complete menu with which to search your web content, and works with a variety of web crawlers: wget, WARC, InterroBot, Katana, and SiteOne.

mcp-server-webcrawl is free and open source, and requires Claude Desktop and Python (>=3.10). It is installed from the command line via pip:

pip install mcp-server-webcrawl

Features

  • Claude Desktop ready
  • Fulltext search support
  • Filter by type, status, and more
  • Multi-crawler compatible
  • Supports advanced/boolean and field searching

MCP Configuration

From the Claude Desktop menu, navigate to File > Settings > Developer. Click Edit Config to locate the configuration file, open it in the editor of your choice, and modify the example to reflect your datasrc path.

You can set up more mcp-server-webcrawl connections under mcpServers as needed.

{
  "mcpServers": {
    "webcrawl": {
      "command": [varies by OS/env, see below],
       "args": [varies by crawler, see below]
    }
  }
}

For step-by-step setup, refer to the Setup Guides.

Windows vs. macOS

Windows: command set to "mcp-server-webcrawl"

macOS: command set to the absolute path, i.e. the value returned by $ which mcp-server-webcrawl

For example:

"command": "/Users/yourusername/.local/bin/mcp-server-webcrawl",

To find the absolute path of the mcp-server-webcrawl executable on your system:

  1. Open Terminal
  2. Run which mcp-server-webcrawl
  3. Copy the full path returned and use it in your config file, as in the example below
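
Putting these steps together, a complete macOS configuration using the wget crawler might look like the following (the username and archive path are illustrative; substitute your own, and see the crawler sections below for the args appropriate to your crawler):

{
  "mcpServers": {
    "webcrawl": {
      "command": "/Users/yourusername/.local/bin/mcp-server-webcrawl",
      "args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]
    }
  }
}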

wget (using --mirror)

The datasrc argument should be set to the parent directory of the mirrors.

"args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]

WARC

The datasrc argument should be set to the parent directory of the WARC files.

"args": ["--crawler", "warc", "--datasrc", "/path/to/warc/archives/"]

InterroBot

The datasrc argument should be set to the direct path to the database.

"args": ["--crawler", "interrobot", "--datasrc", "/path/to/Documents/InterroBot/interrobot.v2.db"]

Katana

The datasrc argument should be set to the directory containing the root host directories. Katana separates pages and media by host, so a layout such as ./archives/example.com/example.com is expected and appropriate. More complex sites expand the crawl data into additional origin host directories.

"args": ["--crawler", "katana", "--datasrc", "/path/to/katana/archives/"]

SiteOne (using Generate offline website)

The datasrc argument should be set to the parent directory of the archives; archiving must be enabled.

"args": ["--crawler", "siteone", "--datasrc", "/path/to/SiteOne/archives/"]

Boolean Search Syntax

The query engine supports field-specific (field: value) searches and complex boolean expressions. Fulltext is supported as a combination of the url, content, and headers fields.

While the API is designed to be consumed by the LLM directly, it can be helpful to familiarize yourself with the search syntax. Searches generated by the LLM are inspectable, but generally collapsed in the UI. If you need to see the query, expand the MCP collapsible.

Example Queries

Query Example                   Description
privacy                         fulltext single keyword match
"privacy policy"                fulltext match of the exact phrase
boundar*                        fulltext wildcard, matches results starting with boundar (boundary, boundaries)
id: 12345                       id field matches a specific resource by ID
url: example.com/*              url field matches results with URLs containing example.com/
type: html                      type field matches HTML pages only
status: 200                     status field matches a specific HTTP status code (equal to 200)
status: >=400                   status field matches a range of HTTP status codes (greater than or equal to 400)
content: h1                     content field matches the HTTP response body (often, but not always, HTML)
headers: text/xml               headers field matches HTTP response headers
privacy AND policy              fulltext matches both terms
privacy OR policy               fulltext matches either term
policy NOT privacy              fulltext matches policies not containing privacy
(login OR signin) AND form      fulltext matches login or signin, together with form
type: html AND status: 200      matches only HTML pages with HTTP success status

Field Search Definitions

Field search provides search precision, allowing you to specify which columns of the search index to filter. Rather than searching the entire content, you can restrict your query to specific attributes like URLs, headers, or content body. This approach improves efficiency when looking for specific attributes or patterns within crawl data.

Field      Description
id         database ID
url        resource URL
type       enumerated list of types (see types table)
status     HTTP response codes
headers    HTTP response headers
content    HTTP body (HTML, CSS, JS, and more)

Content Types

Crawls contain a multitude of resource types beyond HTML pages. The type: field search allows filtering by broad content type groups, particularly useful when filtering images without complex extension queries. For example, you might search for type: html NOT content: login to find pages without "login," or type: img to analyze image resources. The table below lists all supported content types in the search system.

Type       Description
html       webpages
iframe     iframes
img        web images
audio      web audio files
video      web video files
font       web font files
style      CSS stylesheets
script     JavaScript files
rss        RSS syndication feeds
text       plain text content
pdf        PDF files
doc        MS Word documents
other      uncategorized

Extras

The extras parameter provides additional processing options for search results, enhancing the output format and capabilities while optimizing token usage. These options can be combined as needed to achieve the desired result format.

  • thumbnails: Generates base64-encoded thumbnails for image resources that can be viewed and analyzed by AI models. Enables image description, content analysis, and visual understanding while keeping token output minimal. Only works for image (img) types, which can be filtered using type: img in queries. SVG is not supported.
  • markdown: Transforms the HTML content field into concise markdown, reducing token usage and improving readability for LLMs. This does not create a separate field but replaces the HTML in the content field with its markdown equivalent. Must be used with the content field in the fields parameter.
  • snippets: Matches fulltext queries to contextual keyword usage within the content. When used without requesting the content field (or the markdown extra), it provides an efficient means of refining a search without pulling down complete page contents. Also great for rendering old-school hit-highlighted results as a list, like Google search in 1999. Works with HTML, CSS, JS, or any text-based crawled file.
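
As a rough illustration, a search that combines these options might pass arguments shaped like the following. The argument names here are illustrative; the actual schema is defined by the server and surfaced to your MCP client automatically, so treat this as a sketch rather than a reference:

{
  "query": "type: html AND privacy",
  "fields": ["content"],
  "extras": ["markdown", "snippets"]
}

In this sketch, markdown replaces the HTML in the returned content field with its markdown equivalent, while snippets adds keyword-in-context excerpts for the fulltext match.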

