🕷️ Web Scraper Toolkit & MCP Server
Version: 0.1.3
Status: Production Ready
Expertly Crafted by: Roy Dawson IV
A production-grade, multimodal scraping engine designed for AI Agents. Converts the web into LLM-ready assets (Markdown, JSON, PDF) with robust anti-bot evasion.
🚀 The "Why": AI-First Scraping
In the era of Agentic AI, tools need to be more than just Python scripts. They need to be Token-Efficient, Self-Rectifying, and Structured.
✨ Core Design Goals
- 🤖 Hyper Model-Friendly: All tools return standardized JSON Envelopes, separating metadata from content to prevent "context pollution."
- 🕷️ Smart Sitemap Discovery: Automatically finds sitemaps via `robots.txt`, common paths (e.g. `/wp-sitemap.xml`), and sitemap links on the homepage (see the sketch after this list).
- 🛡️ Robust Failover: Smart detection of anti-bot challenges (Cloudflare challenges, 403s) automatically triggers a switch from headless to visible browser mode to pass checks.
- 🎯 Precision Control: Use CSS selectors (`selector`) and token limits (`max_length`) to extract exactly what you need, saving tokens and money.
- 🔄 Batch Efficiency: The explicit `batch_scrape` tool handles the parallel processing found in high-performance agent workflows.
- ⚡ MCP Native: Exposes a full Model Context Protocol (MCP) server for instant integration with Claude Desktop, Cursor, and other agentic IDEs.
- 🔒 Privacy & Stealth: Uses `playwright-stealth` and randomized user agents to mimic human behavior.
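A minimal sketch of that discovery order, assuming a generic HTTP client (`httpx`); `find_sitemaps` and `COMMON_PATHS` are illustrative names, not the toolkit's actual API, and the homepage-link step is omitted for brevity:

```python
# Hypothetical sketch of sitemap discovery: robots.txt first, then common paths.
import urllib.robotparser
from urllib.parse import urljoin

import httpx  # assumption: any HTTP client would do

COMMON_PATHS = ["/sitemap.xml", "/sitemap_index.xml", "/wp-sitemap.xml"]

def find_sitemaps(base_url: str) -> list[str]:
    found: list[str] = []
    # 1. robots.txt often declares sitemaps explicitly ("Sitemap: ..." lines).
    robots = urllib.robotparser.RobotFileParser(urljoin(base_url, "/robots.txt"))
    robots.read()
    found.extend(robots.site_maps() or [])
    # 2. Probe well-known locations used by WordPress and other CMSs.
    for path in COMMON_PATHS:
        candidate = urljoin(base_url, path)
        if httpx.head(candidate, follow_redirects=True).status_code == 200:
            found.append(candidate)
    return list(dict.fromkeys(found))  # de-duplicate while preserving order

print(find_sitemaps("https://example.com"))
```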
📦 Installation
Option A: PyPI (Recommended)
Install directly into your environment or agent container.
```bash
pip install web-scraper-toolkit
playwright install
```
Option B: From Source (Developers)
```bash
git clone https://github.com/imyourboyroy/WebScraperToolkit.git
cd WebScraperToolkit
pip install -e .
playwright install
```
🏗️ Architecture & Best Practices
We support two distinct integration patterns depending on your needs:
Pattern 1: The "Agentic" Way (MCP Server)
Best for: Claude Desktop, Cursor, Custom Agent Swarms.
- Mechanism: Runs as a standalone process (stdio transport).
- Benefit: True Sandbox. If the browser crashes, your Agent survives.
- Usage: Use `web-scraper-server`.
Pattern 2: The "Pythonic" Way (Library)
Best for: data pipelines, cron jobs, tight integration.
- Mechanism: Direct import of `WebCrawler`.
- Benefit: Simplicity. No subprocess management.
- Safety: Internal scraping logic still uses `ProcessPoolExecutor` for isolation!
🔌 MCP Server Integration
This is the primary way to use the toolkit with AI models. The server runs locally and exposes tools via the Model Context Protocol.
Running the Server
Once installed, simply run:
```bash
web-scraper-server --verbose
```
Connecting to Claude Desktop / Cursor
Add the following to your agent configuration:
```json
{
  "mcpServers": {
    "web-scraper": {
      "command": "web-scraper-server",
      "args": ["--verbose"],
      "env": {
        "SCRAPER_WORKERS": "4"
      }
    }
  }
}
```
🧠 The "JSON Envelope" Standard
To ensure high reliability for Language Models, all tools return data in this strict JSON format:
```json
{
  "status": "success",          // or "error"
  "meta": {
    "url": "https://example.com",
    "timestamp": "2023-10-27T10:00:00",
    "format": "markdown"
  },
  "data": "# Markdown Content of the Website..."  // the actual payload
}
```
Why? This allows the model to instantly check `.status` and handle errors gracefully, without hallucinating based on error text mixed into the content.
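For illustration, an agent-side consumer might unwrap the envelope like this (a minimal sketch; `unwrap` is a hypothetical helper, not part of the toolkit):

```python
import json

def unwrap(envelope_text: str) -> str:
    """Return the payload, or raise if the tool reported an error."""
    envelope = json.loads(envelope_text)
    if envelope["status"] != "success":
        # Error details stay in the envelope, never mixed into the content.
        raise RuntimeError(f"Scrape failed for {envelope['meta']['url']}")
    return envelope["data"]
```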
🛠️ Available MCP Tools
| Tool | Description | Key Args |
|---|---|---|
| `scrape_url` | The Workhorse. Scrapes a single page. | `url`, `selector` (CSS), `max_length` |
| `batch_scrape` | The Time Saver. Parallel processing. | `urls` (list), `format` |
| `deep_research` | The Agent. Search + Crawl + Report. | `query` |
| `search_web` | Standard search (DDG/Google). | `query` |
| `crawl_site` | Discovery tool for sitemaps. | `url` |
| `save_pdf` | High-fidelity PDF renderer. | `url`, `path` |
| `configure_scraper` | Dynamic configuration. | `headless` (bool), `user_agent` |
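As a usage sketch, an agent built on the official `mcp` Python SDK (`pip install mcp`) could spawn the server over stdio and call `scrape_url`; the tool name and argument shapes follow the table above, everything else is generic SDK plumbing:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the installed server as a subprocess over stdio.
params = StdioServerParameters(command="web-scraper-server", args=["--verbose"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "scrape_url",
                {"url": "https://example.com", "max_length": 4000},
            )
            print(result.content)  # the JSON envelope, as tool output blocks

asyncio.run(main())
```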
💻 CLI Usage (Standalone)
For manual scraping or testing without the MCP server:
```bash
# Basic Markdown extraction (best for RAG)
web-scraper --url https://example.com --format markdown

# High-fidelity PDF with auto-scroll
web-scraper --url https://example.com --format pdf

# Batch process a list of URLs from a file
web-scraper --input urls.txt --format json --workers 4

# Sitemap to JSON (site mapping)
web-scraper --input https://example.com/sitemap.xml --site-tree --format json
```
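The flags compose; for example, a batch run that merges every page into one file, using only the options documented in the reference below:

```bash
# Scrape a URL list and merge all outputs into a single markdown file
web-scraper --input urls.txt --format markdown --merge --verbose
```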
🛠️ CLI Reference
| Option | Shorthand | Description | Default |
|---|---|---|---|
| `--url` | `-u` | Single target URL to scrape. | None |
| `--input` | `-i` | Input file (`.txt`, `.csv`, `.json`, sitemap `.xml`) or URL. | None |
| `--format` | `-f` | Output: `markdown`, `pdf`, `screenshot`, `json`, `html`. | `markdown` |
| `--headless` | | Run browser in headless mode (off/visible by default for stability). | False |
| `--workers` | `-w` | Number of concurrent workers. Pass `max` for CPU count - 1. | 1 |
| `--merge` | `-m` | Merge all outputs into a single file. | False |
| `--site-tree` | | Extract URLs from sitemap input without crawling. | False |
| `--verbose` | `-v` | Enable verbose logging. | False |
🤖 Python API
Integrate `WebCrawler` directly into your Python applications.
```python
import asyncio
from web_scraper_toolkit import WebCrawler, ScraperConfig

async def agent_task():
    # 1. Configure
    config = ScraperConfig.load({
        "scraper_settings": {"headless": True},
        "workers": 2
    })

    # 2. Instantiate
    crawler = WebCrawler(config=config)

    # 3. Run
    results = await crawler.run(
        urls=["https://example.com"],
        output_format="markdown",
        output_dir="./memory"
    )
    print(results)

if __name__ == "__main__":
    asyncio.run(agent_task())
```
⚙️ Server Configuration
You can configure the MCP server via Environment Variables:
| Variable | Description | Default |
|---|---|---|
| `SCRAPER_WORKERS` | Number of concurrent browser processes. | `1` |
| `SCRAPER_VERBOSE` | Enable debug logs (`true`/`false`). | `false` |
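For a one-off run, the same variables can be set inline with standard shell syntax (nothing toolkit-specific involved):

```bash
SCRAPER_WORKERS=4 SCRAPER_VERBOSE=true web-scraper-server
```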
📜 License
MIT License.
Created with ❤️ by the Intelligence of Roy Dawson IV.