
scrapling-fetch-mcp

Helps AI assistants fetch content from bot-protected websites. Uses Scrapling (patchright + curl-cffi) to bypass anti-automation measures, returning clean HTML or Markdown.

Optimized for low-volume retrieval of documentation and reference materials. Not designed for high-volume scraping or data harvesting.

Requirements: Python 3.10+, uv

Claude Code Skill

The easiest way to use this is as a Claude Code skill. Once installed, Claude will automatically fetch bot-protected URLs when you ask — no manual commands needed.

Install into your project (recommended — only loads in this project's context):

git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch .claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup .claude/skills/
rm -rf /tmp/scrapling-fetch-mcp

Or install for all projects (loads into context everywhere):

git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch ~/.claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup ~/.claude/skills/
rm -rf /tmp/scrapling-fetch-mcp

Then ask Claude to run /s-fetch-setup — it will install the tool and browser binaries (large download), then remove itself. After that, just ask naturally:

"Fetch the docs at https://example.com/api"
"Find all mentions of 'authentication' on that page"
"Get me the installation instructions from their homepage"

Claude Desktop (MCP Server)

If you've already run /s-fetch-setup, the tool is installed — skip to the config below.

Otherwise install first:

uv tool install git+https://github.com/cyberchitta/scrapling-fetch-mcp
uvx --from git+https://github.com/cyberchitta/scrapling-fetch-mcp scrapling install

Note: Browser installation downloads hundreds of MB and must complete before first use. If the server times out initially, wait a few minutes and try again.

Add this to your Claude Desktop MCP settings and restart:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "scrapling-fetch": {
      "command": "uvx",
      "args": ["scrapling-fetch-mcp"]
    }
  }
}
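
If you installed the server with uv tool install and uv's bin directory is visible to Claude Desktop's environment, you can point the config at the installed command instead of uvx (a variant sketch, not from the project's docs — verify the binary name with `uv tool list`):

```
{
  "mcpServers": {
    "scrapling-fetch": {
      "command": "scrapling-fetch-mcp",
      "args": []
    }
  }
}
```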

How It Works

Two tools, used automatically by Claude:

  • Page fetching — retrieves complete pages with pagination support
  • Pattern extraction — finds content matching a regex
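
As a rough illustration of what pattern extraction means — the tool's actual parameters and output format may differ — think of applying a caller-supplied regex to the fetched text and keeping only the matching lines:

```python
import re

# Hypothetical sketch: filter a fetched page down to lines matching a regex.
page_text = "Intro\nAuthentication: use an API key\nUnrelated footer"
pattern = re.compile(r"authentication", re.IGNORECASE)
matches = [line for line in page_text.splitlines() if pattern.search(line)]
# matches == ["Authentication: use an API key"]
```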

Three protection levels, escalated automatically:

  • basic — fast (1-2s), works for most sites
  • stealth — moderate (3-8s), headless Chromium
  • max-stealth — thorough (10s+), full browser fingerprint
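
The escalation idea can be sketched as follows (illustrative only, not the library's actual API): try the cheapest level first and fall back to a slower, stealthier one when a fetch fails.

```python
# Illustrative escalation logic: `fetch(url, level)` is a hypothetical
# callable that returns page text or raises on failure.
LEVELS = ["basic", "stealth", "max-stealth"]

def fetch_with_escalation(fetch, url):
    """Try each protection level in order; return (level, text) on success."""
    last_err = None
    for level in LEVELS:
        try:
            return level, fetch(url, level)
        except Exception as err:
            last_err = err
    raise last_err
```

This mirrors the documented behavior (fast path for most sites, heavier browser-based modes only when needed) without claiming to be the server's implementation.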

Limitations

  • Text content only (documentation, articles, references)
  • Not for high-volume scraping or sites requiring authentication
  • Performance varies by site complexity and protection level

License

Apache 2.0
