Multiagent MCP server for webcrawl content

Project description

mcp-server-webcrawl

Bridge the gap between your web crawl and AI language models using the Model Context Protocol (MCP). With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously, extracting insights from your crawl data.

Support for WARC, wget, InterroBot, Katana, and SiteOne crawlers is available out of the gate. The server includes a full-text search interface with boolean support, as well as resource filtering by type, HTTP status, and more. Together, these give the LLM a complete menu of options for searching your web content.

mcp-server-webcrawl requires Claude Desktop and Python (>=3.10), and can be installed via pip:

pip install mcp-server-webcrawl

Features:

  • Claude Desktop ready
  • Full-text search support
  • Filter by type, status, and more
  • Multi-crawler compatible
  • Quick MCP configuration
  • ChatGPT support coming soon

MCP Configuration

From the Claude Desktop menu, navigate to File > Settings > Developer. Click Edit Config to locate the configuration file, open it in the editor of your choice, and modify the example to reflect your datasrc path.

You can set up more mcp-server-webcrawl connections under mcpServers as needed.

{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": [varies by crawler, see below]
    }
  }
}
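
As a concrete example, a complete entry for a wget crawl might look like the following (the datasrc path is illustrative; substitute your own):

{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]
    }
  }
}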

wget (using --mirror)

The datasrc argument should be set to the parent directory of the mirrors.

"args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]

WARC

The datasrc argument should be set to the parent directory of the WARC files.

"args": ["--crawler", "warc", "--datasrc", "/path/to/warc/archives/"]

InterroBot

The datasrc argument should be set to the full path of the database file itself.

"args": ["--crawler", "interrobot", "--datasrc", "/path/to/Documents/InterroBot/interrobot.v2.db"]

Katana

The datasrc argument should be set to the parent directory of the text cache files.

"args": ["--crawler", "katana", "--datasrc", "/path/to/katana/archives/"]

SiteOne (using archiving)

The datasrc argument should be set to the parent directory of the archives; archiving must be enabled in SiteOne.

"args": ["--crawler", "siteone", "--datasrc", "/path/to/SiteOne/archives/"]

Download files

Download the file for your platform.

Source Distribution

mcp_server_webcrawl-0.7.2.tar.gz (45.8 kB)

Built Distribution

mcp_server_webcrawl-0.7.2-py3-none-any.whl (59.5 kB)

File details

Details for the file mcp_server_webcrawl-0.7.2.tar.gz.

File metadata

  • Download URL: mcp_server_webcrawl-0.7.2.tar.gz
  • Size: 45.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.2

File hashes

  • SHA256: 94b983e6b6da9409fa1393d38fc55610afdde727ca3f4ae705c84772dcc0de61
  • MD5: 554bb2b945f095600ba467c9464e8228
  • BLAKE2b-256: 7c4f5a540d3fad1d2350ad8a870c1e21c4a110c10448f37b8c97a5ac0c050f55

File details

Details for the file mcp_server_webcrawl-0.7.2-py3-none-any.whl.

File hashes

  • SHA256: 0212e5069a6c1188a21f9a9420491b2ffcbde692743f4ed3c2786fcd3056cc76
  • MD5: f21832f729b84e60baafd1b37d366451
  • BLAKE2b-256: 73ca5ecc76746d05e4db8183126c181f77e19ef0ea7025d04801764d0b078bdd
