
Python SDK for the CRW web scraper — scrape, crawl, and map any website from Python

Project description

crw

Python SDK for CRW — the open-source web scraper built for AI agents.

Install

# npm (zero install):
npx crw-mcp

# Python:
pip install crw

# Direct binary (no package manager):
curl -fsSL https://github.com/us/crw/releases/latest/download/crw-mcp-darwin-arm64.tar.gz | tar xz
# Replace darwin-arm64 with your platform: darwin-x64, linux-x64, linux-arm64, win32-x64, win32-arm64

# Cargo:
cargo install crw-mcp

# Docker:
docker run -i ghcr.io/us/crw crw-mcp
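
Whichever install route you choose, a quick smoke test confirms the SDK is importable and can drive the scraper. This is a minimal sketch that uses only the documented zero-config client and scrape() call from the SDK Usage section below:

# Smoke test after `pip install crw` (sketch; uses only the documented API)
from crw import CrwClient

client = CrwClient()  # zero-config: downloads the crw-mcp binary on first use
result = client.scrape("https://example.com")
print(result["markdown"][:200])  # first 200 characters of the scraped page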

CLI Usage

After installing, you can use crw-mcp as an MCP server for any AI coding agent:

# Start the MCP stdio server
crw-mcp

# Add to Claude Code
claude mcp add crw -- npx crw-mcp

MCP client config (works with Cursor, Windsurf, Cline, Claude Desktop, etc.):

{
  "mcpServers": {
    "crw": {
      "command": "npx",
      "args": ["crw-mcp"]
    }
  }
}

SDK Usage

from crw import CrwClient

# Zero-config (downloads crw-mcp binary automatically):
client = CrwClient()
result = client.scrape("https://example.com")
print(result["markdown"])

# Or connect to a remote server:
client = CrwClient(api_url="https://fastcrw.com/api", api_key="fc-...")

# Scrape with options:
result = client.scrape("https://example.com", formats=["markdown", "links"])
print(result["markdown"])
print(result["links"])

# Crawl a site:
job = client.crawl("https://example.com", max_depth=2, max_pages=10)
print(job["id"])

# Map all URLs on a site:
urls = client.map("https://example.com")
print(urls)
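
Putting the calls together, here is a sketch that maps a site and scrapes every discovered page to a local markdown file. It assumes the return shapes shown above: map() yields a list of URL strings and each scrape result carries a "markdown" key.

# Sketch: map a site, then scrape each discovered URL to markdown on disk
from pathlib import Path
from urllib.parse import urlparse

from crw import CrwClient

client = CrwClient()
out_dir = Path("scraped")
out_dir.mkdir(exist_ok=True)

for url in client.map("https://example.com"):
    page = client.scrape(url, formats=["markdown"])
    name = urlparse(url).path.strip("/").replace("/", "_") or "index"
    (out_dir / f"{name}.md").write_text(page["markdown"])
    print(f"saved {url} -> {name}.md")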

Search (Cloud Only)

Search requires a cloud API connection — it's not available in subprocess mode.

from crw import CrwClient

client = CrwClient(api_url="https://fastcrw.com/api", api_key="YOUR_KEY")

# Basic search
results = client.search("web scraping tools 2026")

# Search with options
results = client.search(
    "AI news",
    limit=10,
    sources=["web", "news"],
    tbs="qdr:w",
)

# Search + scrape content
results = client.search(
    "python tutorials",
    scrape_options={"formats": ["markdown"]},
)

Note: Search is a cloud-only feature. Calling search() without api_url raises CrwError.
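
Because subprocess mode cannot serve search results, it is worth guarding search() calls in code that may run without cloud credentials. The sketch below assumes CrwError is importable from the top-level crw package, which the note above implies but does not show:

# Guard a search() call when no cloud API is configured
# (assumes CrwError is exported from the top-level package)
from crw import CrwClient, CrwError

client = CrwClient()  # subprocess mode: no api_url, so search is unavailable
try:
    results = client.search("web scraping tools")
except CrwError as exc:
    print(f"search needs a cloud API connection: {exc}")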

Download files

Download the file for your platform.

Source Distribution

crw-0.3.0.tar.gz (6.8 kB)


Built Distribution


crw-0.3.0-py3-none-any.whl (8.7 kB)


File details

Details for the file crw-0.3.0.tar.gz.

File metadata

  • Download URL: crw-0.3.0.tar.gz
  • Size: 6.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.13

File hashes

Hashes for crw-0.3.0.tar.gz:

  • SHA256: b229cbc44479f15177fb1c294823b2a1c995b8fbd2b6157c060ec0c888ab5f86
  • MD5: bbb0fd327f6133a152934aca1f0f9e2c
  • BLAKE2b-256: 3bff9f19b21ac9d0189dcbb498a78f782e0b182123b702af9e12679a381cb5a5

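The published SHA256 digest can be checked locally before installing from the sdist. This is a generic verification sketch using only the Python standard library; the filename and expected digest come from the listing above.

# Verify a downloaded sdist against the SHA256 digest published above
import hashlib
from pathlib import Path

expected = "b229cbc44479f15177fb1c294823b2a1c995b8fbd2b6157c060ec0c888ab5f86"
digest = hashlib.sha256(Path("crw-0.3.0.tar.gz").read_bytes()).hexdigest()
print("OK" if digest == expected else "MISMATCH")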

File details

Details for the file crw-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: crw-0.3.0-py3-none-any.whl
  • Size: 8.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.13

File hashes

Hashes for crw-0.3.0-py3-none-any.whl:

  • SHA256: 15e3bcd56891782b5e5da6c78a144a497cd98fd606630971f1e83482c9fce41a
  • MD5: b5e88f5fceac5f4c5f81232d787cfd1d
  • BLAKE2b-256: d8412d58e22ca64ff213f17c1d6f6ec2c2f28840e48f26accae22cece453afea

