Multiagent MCP server for webcrawl content

Project description

mcp-server-webcrawl

Bridge the gap between your web crawl and AI language models using the Model Context Protocol (MCP). With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously, extracting insights from your crawls.

Support for WARC, wget, InterroBot, Katana, and SiteOne crawlers is available out of the gate. The server includes a full-text search interface with boolean support, and resource filtering by type, HTTP status, and more. mcp-server-webcrawl provides the LLM with a complete menu for searching your web content.

mcp-server-webcrawl requires Claude Desktop and Python (>=3.10), and can be installed via pip:

pip install mcp-server-webcrawl
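
Once installed, the mcp-server-webcrawl command should be available on your PATH; a quick sanity check (assuming the console script exposes standard help output, which is not documented above):

mcp-server-webcrawl --help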

Features:

  • Claude Desktop ready
  • Full-text search support
  • Filter by type, status, and more
  • Multi-crawler compatible
  • Quick MCP configuration
  • ChatGPT support coming soon

MCP Configuration

From the Claude Desktop menu, navigate to File > Settings > Developer. Click Edit Config to locate the configuration file, open it in the editor of your choice, and modify the example to reflect your datasrc path.

You can set up more mcp-server-webcrawl connections under mcpServers as needed.

{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": [varies by crawler, see below]
    }
  }
}
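
For example, a complete entry for a directory of wget mirrors looks like this (the datasrc path is a placeholder; the per-crawler args are covered in the sections below):

{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]
    }
  }
}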

wget (using --mirror)

The datasrc argument should be set to the parent directory of the mirrors.

"args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]

WARC

The datasrc argument should be set to the parent directory of the WARC files.

"args": ["--crawler", "warc", "--datasrc", "/path/to/warc/archives/"]

InterroBot

The datasrc argument should be set to the direct path to the database.

"args": ["--crawler", "interrobot", "--datasrc", "/path/to/Documents/InterroBot/interrobot.v2.db"]

Katana

The datasrc argument should be set to the parent directory of the text cache files.

"args": ["--crawler", "katana", "--datasrc", "/path/to/katana/archives/"]

SiteOne (using archiving)

The datasrc argument should be set to the parent directory of the archives; archiving must be enabled.

"args": ["--crawler", "katana", "--datasrc", "/path/to/SiteOne/archives/"]

Download files

Download the file for your platform.

Source Distribution

mcp_server_webcrawl-0.7.3.tar.gz (46.0 kB)

Built Distribution

mcp_server_webcrawl-0.7.3-py3-none-any.whl (59.7 kB)

File details

Details for the file mcp_server_webcrawl-0.7.3.tar.gz.

File metadata

  • Download URL: mcp_server_webcrawl-0.7.3.tar.gz
  • Size: 46.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.2

File hashes

Hashes for mcp_server_webcrawl-0.7.3.tar.gz:

  • SHA256: fdb1cc291a457d7df396ebb47182e49f4e35e2c0fbec06ad5eb3ad8f8fab30ca
  • MD5: 4cb60a50e509f3a546124fcddf94b619
  • BLAKE2b-256: 0fea7106d1de7a73bfc983d01bc9b0f92d3af896999f6db7617306fdd3f99529

File details

Details for the file mcp_server_webcrawl-0.7.3-py3-none-any.whl.

File metadata

  • Download URL: mcp_server_webcrawl-0.7.3-py3-none-any.whl
  • Size: 59.7 kB
  • Tags: Python 3

File hashes

Hashes for mcp_server_webcrawl-0.7.3-py3-none-any.whl:

  • SHA256: 8948befa198d36e9eb05374f9691d29490c4bc6314738249c9a2835e32e527b7
  • MD5: 5c616112e743ed62338eb564236d3cf1
  • BLAKE2b-256: 36a916e0ae1eafe1d7b6aaacfa5d1a5c9815cbb10cf4adf44306db7b088c6c73
