# crawldown

Crawl any website and save every page as organized Markdown files.
crawldown mirrors a website's URL structure into a local directory of .md files — perfect for archiving documentation, feeding content into RAG pipelines, or reading offline.
```bash
crawldown https://docs.example.com --output ./docs-mirror
```

```
docs-mirror/
├── index.md
├── getting-started/
│   ├── index.md
│   └── installation.md
└── api/
    ├── reference.md
    └── authentication.md
```
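Because the output is plain Markdown on disk, downstream use (archiving, RAG ingestion, offline reading) needs nothing beyond the standard library. A minimal sketch of reading the mirror back; the directory name matches the example above, and the rest is illustrative rather than part of crawldown:

```python
from pathlib import Path

# Walk the mirrored tree and collect (relative path, text) pairs,
# e.g. to hand to a search index or a RAG ingestion step.
mirror = Path("./docs-mirror")  # directory produced by the crawl above
pages = {
    str(path.relative_to(mirror)): path.read_text(encoding="utf-8")
    for path in sorted(mirror.rglob("*.md"))
}

for name, text in pages.items():
    print(f"{name}: {len(text)} characters")
```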
## Installation

```bash
pip install crawldown
```
Or with uv:
```bash
uv tool install crawldown
```
**First-time setup:** crawldown uses crawl4ai under the hood, which needs a one-time browser install to handle JavaScript-rendered pages:

```bash
crawl4ai-setup
```
## Quickstart

### CLI
```bash
# Crawl an entire site
crawldown https://docs.example.com --output ./output

# Limit crawl depth
crawldown https://docs.example.com --output ./output --depth 2

# Add a delay between requests (seconds)
crawldown https://docs.example.com --output ./output --delay 0.5

# Only crawl URLs matching a pattern
crawldown https://docs.example.com --output ./output --include '/docs/*'

# Skip URLs matching a pattern
crawldown https://docs.example.com --output ./output --exclude '/api/*'

# Skip robots.txt enforcement
crawldown https://docs.example.com --output ./output --no-robots
```
### Python API

```python
import asyncio

from crawldown import crawl

asyncio.run(crawl("https://docs.example.com", output_dir="./output"))
```
With options:
```python
import asyncio

from crawldown import crawl
from crawldown.models import CrawlConfig

config = CrawlConfig(
    url="https://docs.example.com",
    output_dir="./output",
    max_depth=3,
    delay=0.5,
    respect_robots=True,
    include=["/docs/*"],  # only crawl these paths (glob)
    exclude=["/api/*"],   # skip these paths (glob)
)

asyncio.run(crawl(config))
```
To crawl specific subpages directly, use `max_depth=0`. Output paths are always anchored to the site root, so files land in the right place regardless of where the crawl starts:

```python
config = CrawlConfig(
    url="https://example.com/privacy-policy",
    output_dir="./output",
    max_depth=0,
    # Writes to ./output/privacy-policy/index.md ✓
)
```
## How it works

- Starts at the given URL and fetches the page using crawl4ai (handles JavaScript-rendered pages).
- Extracts all links that stay within the same domain.
- Converts each page to Markdown and saves it at a path matching the URL structure.
- Repeats for every discovered link up to `max_depth` (default: unlimited) — see the sketch below.
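For intuition, here is a minimal synchronous sketch of that loop using only the standard library. It shows the same-domain filter and the depth limit but skips JavaScript rendering and the Markdown conversion, so it illustrates the idea rather than crawldown's actual implementation:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)


def crawl_same_domain(start_url: str, max_depth: int | None = None):
    """Breadth-first walk of same-domain links up to max_depth (None = unlimited)."""
    domain = urlparse(start_url).netloc
    seen, queue = {start_url}, deque([(start_url, 0)])
    while queue:
        url, depth = queue.popleft()
        html = urlopen(url).read().decode("utf-8", errors="replace")
        yield url, html  # crawldown converts this HTML to Markdown and saves it
        if max_depth is not None and depth >= max_depth:
            continue  # don't follow links past the depth limit
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href).split("#")[0]
            if urlparse(absolute).netloc == domain and absolute not in seen:
                seen.add(absolute)
                queue.append((absolute, depth + 1))
```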
## Options

| Flag | Default | Description |
|---|---|---|
| `--output`, `-o` | `./crawldown-output` | Directory to save Markdown files |
| `--depth`, `-d` | unlimited | Max link-follow depth |
| `--delay` | `0.0` | Seconds to wait between requests |
| `--include` | — | Only crawl paths matching this glob (repeatable) |
| `--exclude` | — | Skip paths matching this glob (repeatable) |
| `--no-robots` | off | Ignore robots.txt |
| `--version` | — | Show version and exit |
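Flags can be combined, and `--include` / `--exclude` may be passed more than once. A typical invocation (the URL and globs here are only examples):

```bash
crawldown https://docs.example.com \
  --output ./docs-mirror \
  --depth 3 \
  --delay 1.0 \
  --include '/docs/*' \
  --include '/blog/*' \
  --exclude '/docs/changelog/*'
```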
## Contributing
We welcome contributions of all kinds. See CONTRIBUTING.md for how to get started.
## License
MIT — see LICENSE.