MCP server for search and retrieval of web crawler content

Project description

mcp-server-webcrawl

Bridge the gap between your web crawl and AI language models using Model Context Protocol (MCP). With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously. The server includes a full-text search interface with boolean support, resource filtering by type, HTTP status, and more.

mcp-server-webcrawl provides the LLM a complete menu with which to search your web content, and works with a variety of web crawlers and archive formats, including wget, WARC, InterroBot, Katana, and SiteOne.

mcp-server-webcrawl is free and open source, and requires Claude Desktop and Python (>= 3.10). It is installed from the command line via pip:

pip install mcp-server-webcrawl
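
To verify the install, ask pip for the package metadata:

pip show mcp-server-webcrawl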

Features

  • Claude Desktop ready
  • Full-text search support
  • Filter by type, status, and more
  • Multi-crawler compatible
  • Supports advanced/boolean and field searching

MCP Configuration

From the Claude Desktop menu, navigate to File > Settings > Developer. Click Edit Config to locate the configuration file, open it in the editor of your choice, and modify the example to reflect your datasrc path.

You can set up more mcp-server-webcrawl connections under mcpServers as needed.

{
  "mcpServers": {
    "webcrawl": {
      "command": [varies by OS/env, see below],
       "args": [varies by crawler, see below]
    }
  }
}

For step-by-step setup, refer to the Setup Guides.

Windows vs. macOS

On Windows, with Python installed and on the PATH, the command is simply mcp-server-webcrawl.

On macOS, you must use the absolute path to the mcp-server-webcrawl executable in the command field, rather than just the command name.

For example:

"command": "/Users/yourusername/.local/bin/mcp-server-webcrawl",

To find the absolute path of the mcp-server-webcrawl executable on your system:

  1. Open Terminal
  2. Run which mcp-server-webcrawl
  3. Copy the full path returned and use it in your config file
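
Putting these together, a complete entry on macOS using the wget crawler (covered below) might look like the following; the executable path and datasrc path are illustrative and will differ on your system:

{
  "mcpServers": {
    "webcrawl": {
      "command": "/Users/yourusername/.local/bin/mcp-server-webcrawl",
      "args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]
    }
  }
}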

wget (using --mirror)

The datasrc argument should be set to the parent directory of the mirrors.

"args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]

WARC

The datasrc argument should be set to the parent directory of the WARC files.

"args": ["--crawler", "warc", "--datasrc", "/path/to/warc/archives/"]

InterroBot

The datasrc argument should be set to the direct path to the database.

"args": ["--crawler", "interrobot", "--datasrc", "/path/to/Documents/InterroBot/interrobot.v2.db"]

Katana

The datasrc argument should be set to the parent directory of the text cache files.

"args": ["--crawler", "katana", "--datasrc", "/path/to/katana/archives/"]

SiteOne (using archiving)

The datasrc argument should be set to the parent directory of the archives; archiving must be enabled when running the crawl.

"args": ["--crawler", "siteone", "--datasrc", "/path/to/SiteOne/archives/"]

Boolean Search

The query engine supports field-specific (field: value) searches and complex boolean expressions. Fulltext search runs against a combination of the url, content, and headers fields.

While the API is designed to be consumed by the LLM directly, it can be helpful to familiarize yourself with the search syntax. Searches generated by the LLM are inspectable, but generally collapsed in the UI; if you need to see the query, expand the MCP collapsible.

Example Queries

Query                          Description
privacy                        single fulltext search
"privacy policy"               fulltext match of the exact phrase
privacy*                       wildcard fulltext results starting with "privacy"
id: 12345                      matches a specific resource by ID
url: example.com/*             matches results with URL containing example.com
type: html                     HTML pages only
status: 200                    matches a specific HTTP status code (equal)
status: >=400                  matches HTTP status codes of 400 or greater
content: javascript            find javascript in the HTTP body (often, but not always, HTML)
headers: application/json      match against HTTP response headers
privacy AND policy             match both terms as fulltext search
privacy OR policy              match either term as fulltext search
policy NOT privacy             fulltext policies not containing privacy
(login OR signin) AND form     fulltext login or signin together with form
type: html AND status: 200     match only HTML pages with HTTP success

Field Search Definitions

Field      Description
id         database ID
url        resource URL
type       enumerated list of types (see types table)
status     HTTP response code of result
headers    HTTP response headers
content    HTTP body (HTML, CSS, JS, and more)

Content Types

Type      Description
html      web page
iframe    embedded iframe
img       web image formats
audio     audio files
video     video files
font      web font files
style     CSS stylesheets
script    JavaScript files
rss       RSS syndication feed
text      plain text content
pdf       PDF files
doc       MS Word documents
other     uncategorized
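
Types combine with the other fields using the same boolean syntax; for example, to find JavaScript files that mention fetch, or images that returned an error status:

type: script AND content: fetch
type: img AND status: >=400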

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mcp_server_webcrawl-0.9.0.tar.gz (59.5 kB)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

mcp_server_webcrawl-0.9.0-py3-none-any.whl (78.1 kB)

File details

Details for the file mcp_server_webcrawl-0.9.0.tar.gz.

File metadata

  • Download URL: mcp_server_webcrawl-0.9.0.tar.gz
  • Upload date:
  • Size: 59.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.2

File hashes

Hashes for mcp_server_webcrawl-0.9.0.tar.gz
Algorithm Hash digest
SHA256 fca0322acb6df7f70f662f694302e36d0be505de37e8fa3aadb5a434b3b30bd6
MD5 23a592c896992af5deb6ff860869f738
BLAKE2b-256 cab17fb4f23517b2895b761c5570af6546284f3807a52d9f9c966a8fc48540a2

File details

Details for the file mcp_server_webcrawl-0.9.0-py3-none-any.whl.

File metadata

File hashes

Hashes for mcp_server_webcrawl-0.9.0-py3-none-any.whl
Algorithm Hash digest
SHA256 ff16bf0b62af68a388a0b3fa0f038d780cee6e6111dccecc9c73baa01eef308f
MD5 ace226e1ebae9e9158c349513005bfd6
BLAKE2b-256 d79eb3c83894ff01e320c4053c97297a232e28b0c68d75e87f875d0c2810d32a
