# scrape

Fast async web scraper — CLI tool, Python library, and AI agent skill.

`scrape` is a production-ready Python package that scrapes websites, optionally crawls links recursively, and returns structured data suitable for AI consumption.
## Installation

```bash
pip install scrape
```

For development (quote the extras so shells like zsh don't expand the brackets):

```bash
pip install "scrape[dev]"
```
## CLI Usage

```bash
# Scrape a single page
scrape https://example.com

# Deep crawl with depth limit
scrape https://example.com --deep --max-depth 2

# Save results to a file
scrape https://example.com --deep --save --output data.json

# CSV output
scrape https://example.com --format csv --save --output data.csv

# JSON Lines output (streaming-friendly)
scrape https://example.com --format jsonl --save --output data.jsonl

# Custom rate limiting and concurrency
scrape https://example.com --deep --rate 500 --concurrency 3

# Limit max pages
scrape https://example.com --deep --max-pages 50

# Allow external domain crawling
scrape https://example.com --deep --allow-external

# Use a proxy
scrape https://example.com --proxy http://proxy:8080

# Quiet mode (errors only)
scrape https://example.com --quiet
```
## CLI Options

| Option | Description | Default |
|---|---|---|
| `<url>` | Target URL to scrape | (required) |
| `--deep` | Recursively crawl same-domain links | `False` |
| `--save` | Save output to a file | `False` |
| `--output <file>` | Output file path | `output.json` |
| `--format <json\|csv\|jsonl>` | Output format | `json` |
| `--max-depth <n>` | Maximum recursion depth | `1` |
| `--max-pages <n>` | Maximum pages to scrape | `100` |
| `--concurrency <n>` | Concurrent requests | `1` |
| `--rate <ms>` | Delay between requests in ms | `200` |
| `--allow-external` | Allow crawling external domains | `False` |
| `--proxy <url>` | Proxy URL | `None` |
| `--verbose` / `-v` | Enable debug logging | `False` |
| `--quiet` / `-q` | Suppress output (errors only) | `False` |
## Python Library Usage

```python
from scrape import run_scraper

# Synchronous — works everywhere
results = run_scraper("https://example.com", deep=True, max_depth=2)

for page in results:
    print(page["url"], page["title"])
```
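Each result is a plain dict, so standard Python tooling applies directly. As a small illustration (sample records stand in here for a real scrape), you can collect the unique outbound links across all scraped pages:

```python
# Sample records shaped like run_scraper's output (stand-ins for a real scrape).
results = [
    {"url": "https://example.com", "title": "Home",
     "links": ["https://example.com/a", "https://example.org"]},
    {"url": "https://example.com/a", "title": "A",
     "links": ["https://example.com", "https://example.org"]},
]

# Deduplicate outbound links across all pages with a set comprehension.
unique_links = sorted({link for page in results for link in page["links"]})
print(unique_links)
```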
## Async Usage

```python
import asyncio
from scrape import async_run_scraper

results = asyncio.run(
    async_run_scraper("https://example.com", deep=True, max_depth=2)
)
```
## Batch Scraping

```python
from scrape import scrape_urls

# Scrape multiple URLs concurrently
results = scrape_urls(["https://example.com", "https://example.org"])
```
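Concurrent batch fetching of this kind is commonly built on `asyncio.gather` with a semaphore bounding in-flight requests. The following is a minimal, self-contained sketch of that pattern (a stub coroutine stands in for real HTTP), not the package's actual implementation:

```python
import asyncio

async def fetch(url: str) -> dict:
    # Stub: a real fetcher would issue an HTTP request here.
    await asyncio.sleep(0)
    return {"url": url, "status": 200}

async def scrape_all(urls: list[str], concurrency: int = 3) -> list[dict]:
    # The semaphore caps how many fetches run at once.
    sem = asyncio.Semaphore(concurrency)

    async def bounded(url: str) -> dict:
        async with sem:
            return await fetch(url)

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(bounded(u) for u in urls))

results = asyncio.run(scrape_all(["https://example.com", "https://example.org"]))
print([r["url"] for r in results])
```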
## AI Agent Skill

`scrape` is designed to be called programmatically by AI agents (Claude, OpenAI, etc.):

```python
from scrape import run_scraper

# Returns list[dict] with keys: url, title, text, links, images,
# meta_description, meta_author, depth
data = run_scraper("https://example.com")
```
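To expose this to a function-calling LLM, you would typically describe `run_scraper` with a JSON tool schema. The schema below is a hypothetical example based on the parameters documented above; it is not shipped by the package:

```python
import json

# Hypothetical tool definition for an OpenAI-style function-calling API.
scrape_tool = {
    "name": "run_scraper",
    "description": "Scrape a URL, optionally crawling links, and return structured page data.",
    "parameters": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "Target URL to scrape."},
            "deep": {"type": "boolean", "description": "Recursively crawl same-domain links."},
            "max_depth": {"type": "integer", "description": "Maximum recursion depth."},
        },
        "required": ["url"],
    },
}

# The schema must round-trip through JSON for any HTTP-based agent API.
payload = json.dumps(scrape_tool)
```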
## Output Format

Each scraped page produces:

```json
{
  "url": "https://example.com",
  "title": "Example Domain",
  "text": "Example Domain This domain is for use in illustrative examples ...",
  "links": ["https://www.iana.org/domains/example"],
  "images": [],
  "meta_description": "",
  "meta_author": "",
  "depth": 0
}
```
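Because each page is a flat JSON object, the `jsonl` format writes one record per line, which streams well and appends cheaply. A sketch of that serialisation with sample records mirroring the keys above:

```python
import json

# Sample records shaped like the output documented above.
pages = [
    {"url": "https://example.com", "title": "Example Domain", "depth": 0},
    {"url": "https://www.iana.org/domains/example", "title": "IANA", "depth": 1},
]

# JSON Lines: one compact JSON object per line, newline-separated.
jsonl = "\n".join(json.dumps(p, ensure_ascii=False) for p in pages)
print(jsonl)
```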
## Architecture

```text
scrape/
    __init__.py         # Public API & AI skill interface (run_scraper, async_run_scraper, scrape_urls)
    __main__.py         # python -m scrape support
    cli.py              # CLI entry point (argparse)
    config.py           # ScrapeConfig dataclass
    exceptions.py       # Custom exception hierarchy
    core/
        fetcher.py      # Async HTTP client with retries, backoff, jitter, proxy support
        parser.py       # HTML parsing (lxml) — title, text, links, images, metadata, CSS, XPath
        crawler.py      # BFS crawler with concurrency, rate limiting & domain policy
        output.py       # JSON / CSV / JSONL serialisation & file output
    tests/
        test_fetcher.py # Fetcher unit tests (mocked HTTP)
        test_parser.py  # Parser unit tests
        test_crawler.py # Crawler unit tests (mocked HTTP)
        test_cli.py     # CLI integration tests
```
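The crawler described above is a breadth-first traversal with a depth cap and a same-domain policy. A self-contained sketch of that logic over an in-memory link map (no HTTP, and not the package's actual code):

```python
from collections import deque
from urllib.parse import urlparse

# In-memory "web": url -> outbound links (stands in for fetch + parse).
LINKS = {
    "https://example.com/": ["https://example.com/a", "https://other.org/"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": [],
}

def crawl(start: str, max_depth: int = 1, allow_external: bool = False) -> list[tuple[str, int]]:
    root = urlparse(start).netloc
    seen = {start}
    queue = deque([(start, 0)])  # BFS frontier of (url, depth) pairs
    visited = []
    while queue:
        url, depth = queue.popleft()
        visited.append((url, depth))
        if depth >= max_depth:
            continue  # depth cap: don't expand this page's links
        for link in LINKS.get(url, []):
            if link in seen:
                continue
            if not allow_external and urlparse(link).netloc != root:
                continue  # same-domain policy
            seen.add(link)
            queue.append((link, depth + 1))
    return visited

print(crawl("https://example.com/", max_depth=2))
```

With `allow_external=True` the external link would be visited at depth 1 as well; by default it is filtered out by the netloc comparison.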
## Development

```bash
git clone https://github.com/wscrape/scrape.git
cd scrape
pip install -e ".[dev]"
pytest
```
## License

MIT