
Multiagent MCP server for webcrawl content

Project description

mcp-server-webcrawl

Bridge the gap between your web crawl and AI language models using the Model Context Protocol (MCP). With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously, extracting insights from your crawled sites.

Support for WARC, wget, InterroBot, Katana, and SiteOne crawlers is available out of the gate. The server includes a full-text search interface with boolean support, as well as resource filtering by type, HTTP status, and more. mcp-server-webcrawl provides the LLM with a complete menu for searching your web content.

mcp-server-webcrawl requires Claude Desktop and Python (>=3.10). It can be installed via pip:

pip install mcp-server-webcrawl
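
Once installed, you can confirm the console script is on your PATH (a quick sanity check; this assumes the standard argparse-style --help flag):

mcp-server-webcrawl --help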

Features:

  • Claude Desktop ready
  • Full-text search support
  • Filter by type, status, and more
  • Multi-crawler compatible
  • Quick MCP configuration
  • ChatGPT support coming soon

MCP Configuration

From the Claude Desktop menu, navigate to File > Settings > Developer. Click Edit Config to locate the configuration file, open it in the editor of your choice, and modify the example to reflect your datasrc path.

You can set up more mcp-server-webcrawl connections under mcpServers as needed.

{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": [varies by crawler, see below]
    }
  }
}
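
For example, a complete configuration pointing at a directory of wget mirrors would look like this (the path is illustrative; substitute your own, and see the per-crawler sections below for the matching args):

{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]
    }
  }
}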

wget (using --mirror)

The datasrc argument should be set to the parent directory of the mirrors.

"args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]

WARC

The datasrc argument should be set to the parent directory of the WARC files.

"args": ["--crawler", "warc", "--datasrc", "/path/to/warc/archives/"]

InterroBot

The datasrc argument should be set to the direct path to the database.

"args": ["--crawler", "interrobot", "--datasrc", "/path/to/Documents/InterroBot/interrobot.v2.db"]

Katana

The datasrc argument should be set to the parent directory of the text cache files.

"args": ["--crawler", "katana", "--datasrc", "/path/to/katana/archives/"]

SiteOne (using archiving)

The datasrc argument should be set to the parent directory of the archives; archiving must be enabled in SiteOne.

"args": ["--crawler", "katana", "--datasrc", "/path/to/SiteOne/archives/"]

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mcp_server_webcrawl-0.7.4.tar.gz (46.2 kB, source)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

mcp_server_webcrawl-0.7.4-py3-none-any.whl (59.9 kB, Python 3)

File details

Details for the file mcp_server_webcrawl-0.7.4.tar.gz.

File metadata

  • Download URL: mcp_server_webcrawl-0.7.4.tar.gz
  • Upload date:
  • Size: 46.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.2

File hashes

Hashes for mcp_server_webcrawl-0.7.4.tar.gz:

  • SHA256: 4fa0cc2c2bbc77fd78dcce2614c5cefd8a516c1062a7c1ea6951598e9c615588
  • MD5: c7617adab62f3266152b9f970ff6c87e
  • BLAKE2b-256: 7c64f82cdfcc9f43972febf7a68e97da09bf2f9300fa1f343775b94a4a4f09f9


File details

Details for the file mcp_server_webcrawl-0.7.4-py3-none-any.whl.

File metadata

  • Download URL: mcp_server_webcrawl-0.7.4-py3-none-any.whl
  • Size: 59.9 kB
  • Tags: Python 3

File hashes

Hashes for mcp_server_webcrawl-0.7.4-py3-none-any.whl:

  • SHA256: cb62ce1408328643e9f87b0264713dde4266f46853653847b6edd5fb9d402537
  • MD5: dcc5e7d746f76743d47acbce0346f49b
  • BLAKE2b-256: d86596ba454f245fd5b1c9d8be31999315d502dd821dc5762292b1c7ea45bd43

