A powerful, standalone web scraping toolkit using Playwright and various parsers.

🕷️ Web Scraper Toolkit & MCP Server

Version: 0.1.4
Status: Production Ready
Expertly Crafted by: Roy Dawson IV

A production-grade, multimodal scraping engine designed for AI Agents. Converts the web into LLM-ready assets (Markdown, JSON, PDF) with robust anti-bot evasion.


🚀 The "Why": AI-First Scraping

In the era of Agentic AI, tools need to be more than just Python scripts. They need to be Token-Efficient, Self-Rectifying, and Structured.

✨ Core Design Goals

  • 🤖 Hyper Model-Friendly: All tools return standardized JSON Envelopes, separating metadata from content to prevent "context pollution."
  • 🔍 Intelligent Sitemap Discovery: A summary-first approach prevents context flooding. Detects sitemap indices, provides counts, and offers keyword deep-search to find specific pages (e.g. "about", "contact") without reading the whole site.
  • 🛡️ Robust Failover: Smart detection of anti-bot challenges (Cloudflare/403s) automatically triggers a switch from Headless to Visible browser mode to pass checks.
  • 🎯 Precision Control: Use CSS Selectors (selector) and token limits (max_length) to extract exactly what you need, saving tokens and money.
  • 🔄 Batch Efficiency: The explicit batch_scrape tool handles the parallel processing required by high-performance agent workflows.
  • ⚡ MCP Native: Exposes a full Model Context Protocol (MCP) server for instant integration with Claude Desktop, Cursor, and other agentic IDEs.
  • 🔒 Privacy & Stealth: Uses playwright-stealth and randomized user agents to mimic human behavior.

📦 Installation

Option A: PyPI (Recommended)

Install directly into your environment or agent container.

pip install web-scraper-toolkit
playwright install

Option B: From Source (Developers)

git clone https://github.com/imyourboyroy/WebScraperToolkit.git
cd WebScraperToolkit
pip install -e .
playwright install
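
To sanity-check either install, a quick smoke test can help (the web-scraper CLI is documented in the CLI Usage section below; example.com is just a placeholder target):

# Confirm the package imports and the CLI entry point responds
python -c "import web_scraper_toolkit"
web-scraper --url https://example.com --format markdown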

🏗️ Architecture & Best Practices

We support two distinct integration patterns depending on your needs:

Pattern 1: The "Agentic" Way (MCP Server)

Best for: Claude Desktop, Cursor, Custom Agent Swarms.

  • Mechanism: Runs as a standalone process (stdio transport).
  • Benefit: True Sandbox. If the browser crashes, your Agent survives.
  • Usage: Use web-scraper-server.

Pattern 2: The "Pythonic" Way (Library)

Best for: Data pipelines, cron jobs, and tight integration.

  • Mechanism: Direct import of WebCrawler.
  • Benefit: Simplicity. No subprocess management.
  • Safety: Internal scraping logic still uses ProcessPoolExecutor for isolation!

🔌 MCP Server Integration

This is the primary way to use the toolkit with AI models. The server runs locally and exposes tools via the Model Context Protocol.

Running the Server

Once installed, simply run:

web-scraper-server --verbose

Connecting to Claude Desktop / Cursor

Add the following to your agent configuration:

{
  "mcpServers": {
    "web-scraper": {
      "command": "web-scraper-server",
      "args": ["--verbose"],
      "env": {
        "SCRAPER_WORKERS": "4"
      }
    }
  }
}

🧠 The "JSON Envelope" Standard

To ensure high reliability for Language Models, all tools return data in this strict JSON format:

{
  "status": "success",  // or "error"
  "meta": {
    "url": "https://example.com",
    "timestamp": "2023-10-27T10:00:00",
    "format": "markdown"
  },
  "data": "# Markdown Content of the Website..."  // The actual payload
}

Why? The model can instantly check .status and handle errors gracefully, instead of hallucinating over error text mixed into the content.
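
For example, a minimal Python consumer of the envelope (a sketch; the keys follow the example above) could look like this:

import json

def unwrap(raw: str) -> str:
    """Return the LLM-ready payload, or fail loudly on a non-success status."""
    envelope = json.loads(raw)
    if envelope.get("status") != "success":
        # Keep error handling out of the content stream the model reasons over
        raise RuntimeError(f"Scrape failed for {envelope.get('meta', {}).get('url')}")
    return envelope["data"]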

🛠️ Available MCP Tools

| Tool | Description | Key Args |
| --- | --- | --- |
| scrape_url | The Workhorse. Scrapes a single page. | url, selector (CSS), max_length |
| batch_scrape | The Time Saver. Parallel processing. | urls (list), format |
| deep_research | The Agent. Search + Crawl + Report. | query |
| search_web | Standard search (DDG/Google). | query |
| get_sitemap | Sitemap analysis. Deep-search capable. | url, keywords, limit |
| crawl_site | Alias for sitemap discovery. | url |
| save_pdf | High-fidelity PDF renderer. | url, path |
| configure_scraper | Dynamic configuration. | headless (bool), user_agent |
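
As a concrete illustration, a precision scrape_url call from an agent might pass arguments like these (only the key args listed above; the selector and limit values are illustrative, not defaults):

{
  "url": "https://example.com/blog/post-1",
  "selector": "article.main-content",
  "max_length": 4000
}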

🔍 Intelligent Sitemap Discovery (Agent Friendly)

Unlike standard tools that dump thousands of URLs, this toolkit is designed for Agent Context Windows.

1. Summary First (Default)

When a Sitemap Index is found, it returns a structural summary with estimated counts, allowing the agent to "peek" before committing tokens.

Example Output:

Found Sitemap Index at https://example.com/sitemap.xml.
contains 20 sub-sitemaps with ~2012 total URLs.

=== Sub-Sitemaps ===
- https://example.com/post-sitemap.xml (~176 URLs)
- https://example.com/page-sitemap.xml (~63 URLs)
- https://example.com/products-sitemap.xml (~1020 URLs)
...

2. Keyword Deep Search

Need to find "About" pages or "Contact" info? Don't crawl the whole site. Use the keywords parameter.

Tool Call: get_sitemap(url="...", keywords="about")

Example Output:

Sitemap Search Results for 'about' in https://example.com/sitemap.xml:
Found 3 matching URLs.

https://example.com/about-us/
https://example.com/company/about-the-team/
https://example.com/about/careers/

💻 CLI Usage (Standalone)

For manual scraping or testing without the MCP server:

# Basic Markdown Extraction (Best for RAG)
web-scraper --url https://example.com --format markdown

# High-Fidelity PDF with Auto-Scroll
web-scraper --url https://example.com --format pdf

# Batch process a list of URLs from a file
web-scraper --input urls.txt --format json --workers 4

# Sitemap to JSON (Site Mapping)
web-scraper --input https://example.com/sitemap.xml --site-tree --format json

🛠️ CLI Reference

| Option | Shorthand | Description | Default |
| --- | --- | --- | --- |
| --url | -u | Single target URL to scrape. | None |
| --input | -i | Input file (.txt, .csv, .json, sitemap .xml) or URL. | None |
| --format | -f | Output: markdown, pdf, screenshot, json, html. | markdown |
| --headless | | Run browser in headless mode (off/visible by default for stability). | False |
| --workers | -w | Number of concurrent workers. Pass max for CPU - 1. | 1 |
| --merge | -m | Merge all outputs into a single file. | False |
| --site-tree | | Extract URLs from a sitemap input without crawling. | False |
| --verbose | -v | Enable verbose logging. | False |
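
Combining several of these, a typical batch run might look like the following (flag values per the table above; urls.txt is a placeholder file):

# Scrape every URL listed in urls.txt using one worker per CPU core minus one,
# then merge all Markdown outputs into a single file
web-scraper --input urls.txt --format markdown --workers max --merge --verbose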

🤖 Python API

Integrate the WebCrawler directly into your Python applications.

import asyncio
from web_scraper_toolkit import WebCrawler, ScraperConfig

async def agent_task():
    # 1. Configure
    config = ScraperConfig.load({
        "scraper_settings": {"headless": True}, 
        "workers": 2
    })
    
    # 2. Instantiate
    crawler = WebCrawler(config=config)
    
    # 3. Run
    results = await crawler.run(
        urls=["https://example.com"],
        output_format="markdown",
        output_dir="./memory"
    )
    print(results)

if __name__ == "__main__":
    asyncio.run(agent_task())

⚙️ Server Configuration

You can configure the MCP server via Environment Variables:

| Variable | Description | Default |
| --- | --- | --- |
| SCRAPER_WORKERS | Number of concurrent browser processes. | 1 |
| SCRAPER_VERBOSE | Enable debug logs (true/false). | false |
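
For example, to launch the server with both variables set inline (standard shell syntax; the values are illustrative):

SCRAPER_WORKERS=4 SCRAPER_VERBOSE=true web-scraper-server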

📜 License

MIT License.


Created with ❤️ by the Intelligence of Roy Dawson IV.
