

crawldown

CI PyPI version License: MIT Python 3.10+

Crawl any website and save every page as organized Markdown files.

crawldown mirrors a website's URL structure into a local directory of .md files — perfect for archiving documentation, feeding content into RAG pipelines, or reading offline.

crawldown https://docs.example.com --output ./docs-mirror
docs-mirror/
├── index.md
├── getting-started/
│   ├── index.md
│   └── installation.md
└── api/
    ├── reference.md
    └── authentication.md

Installation

pip install crawldown

Or with uv:

uv tool install crawldown

First-time setup — crawldown uses crawl4ai under the hood, which needs a browser installed once to handle JavaScript-rendered pages:

crawl4ai-setup

Quickstart

CLI

# Crawl an entire site
crawldown https://docs.example.com --output ./output

# Limit crawl depth
crawldown https://docs.example.com --output ./output --depth 2

# Add a delay between requests (seconds)
crawldown https://docs.example.com --output ./output --delay 0.5

# Only crawl URLs matching a pattern
crawldown https://docs.example.com --output ./output --include '/docs/*'

# Skip URLs matching a pattern
crawldown https://docs.example.com --output ./output --exclude '/api/*'

# Skip robots.txt enforcement
crawldown https://docs.example.com --output ./output --no-robots

Python API

import asyncio
from crawldown import crawl

asyncio.run(crawl("https://docs.example.com", output_dir="./output"))
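Note that crawl is a coroutine: code that is already running inside an event loop (for example, another async application) can await crawl(...) directly instead of wrapping it in asyncio.run.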

With options:

import asyncio
from crawldown import crawl
from crawldown.models import CrawlConfig

config = CrawlConfig(
    url="https://docs.example.com",
    output_dir="./output",
    max_depth=3,
    delay=0.5,
    respect_robots=True,
    include=["/docs/*"],   # only crawl these paths (glob)
    exclude=["/api/*"],    # skip these paths (glob)
)

asyncio.run(crawl(config))

To crawl a specific subpage on its own, set max_depth=0 so that only that page is fetched. Output paths are always anchored to the site root, so files land in the right place regardless of where the crawl starts:

config = CrawlConfig(
    url="https://example.com/privacy-policy",
    output_dir="./output",
    max_depth=0,
    # Writes to ./output/privacy-policy/index.md  ✓
)

How it works

  1. Starts at the given URL and fetches the page using crawl4ai (handles JavaScript-rendered pages).
  2. Extracts all links that stay within the same domain.
  3. Converts each page to Markdown and saves it at a path matching the URL structure.
  4. Repeats for every discovered link up to max_depth (default: unlimited).
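The steps above amount to a breadth-first crawl with a URL-to-file-path mapping. The snippet below is a rough, self-contained sketch of that idea using only the standard library; it is not crawldown's implementation (which is async and uses crawl4ai for fetching and Markdown conversion), and the helper names (LinkParser, url_to_path, crawl_sketch) and the exact path-mapping rules are illustrative only.

import urllib.request
from collections import deque
from html.parser import HTMLParser
from pathlib import Path
from urllib.parse import urldefrag, urljoin, urlparse


class LinkParser(HTMLParser):
    """Collect href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def url_to_path(url: str, output_dir: str) -> Path:
    """Mirror a URL path as <output_dir>/.../page.md (directory-style URLs get index.md)."""
    path = urlparse(url).path
    parts = [p for p in path.split("/") if p]
    if not parts or path.endswith("/"):
        parts.append("index")
    parts[-1] = parts[-1].removesuffix(".html") + ".md"
    return Path(output_dir).joinpath(*parts)


def crawl_sketch(start_url: str, output_dir: str, max_depth: int | None = None) -> None:
    domain = urlparse(start_url).netloc
    queue = deque([(start_url, 0)])
    seen = {start_url}

    while queue:
        url, depth = queue.popleft()

        # 1. Fetch the page (crawldown itself uses a headless browser via crawl4ai,
        #    so JavaScript-rendered pages work; plain urlopen does not run JS).
        with urllib.request.urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")

        # 3. Save the page at a path mirroring the URL structure.
        #    (crawldown converts the page to Markdown first; here we write raw HTML.)
        out_path = url_to_path(url, output_dir)
        out_path.parent.mkdir(parents=True, exist_ok=True)
        out_path.write_text(html, encoding="utf-8")

        # 4. Stop following links once max_depth is reached (None = unlimited).
        if max_depth is not None and depth >= max_depth:
            continue

        # 2. Extract links and keep only those that stay on the same domain.
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            link, _ = urldefrag(urljoin(url, href))
            if urlparse(link).netloc == domain and link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))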

Options

Flag          Default             Description
--output, -o  ./crawldown-output  Directory to save Markdown files
--depth, -d   unlimited           Max link-follow depth
--delay       0.0                 Seconds to wait between requests
--include                         Only crawl paths matching this glob (repeatable)
--exclude                         Skip paths matching this glob (repeatable)
--no-robots   off                 Ignore robots.txt
--version                         Show version and exit
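Filters can be combined, and since --include and --exclude are repeatable, each may be passed more than once. The invocation below is illustrative (the URL and paths are placeholders):

# Crawl up to three levels deep, keeping /docs/ and /guides/ but skipping /api/
crawldown https://docs.example.com -o ./docs-mirror -d 3 --delay 0.5 \
  --include '/docs/*' --include '/guides/*' --exclude '/api/*'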

Contributing

We welcome contributions of all kinds. See CONTRIBUTING.md for how to get started.


License

MIT — see LICENSE.



Download files

Download the file for your platform.

Source Distribution

crawldown-0.1.3.tar.gz (272.1 kB)

Uploaded Source

Built Distribution


crawldown-0.1.3-py3-none-any.whl (10.3 kB)

Uploaded Python 3

File details

Details for the file crawldown-0.1.3.tar.gz.

File metadata

  • Download URL: crawldown-0.1.3.tar.gz
  • Upload date:
  • Size: 272.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for crawldown-0.1.3.tar.gz
Algorithm Hash digest
SHA256 5a5da0c9d35858c0b765951e0df6594f2cc79cf66fb98ae23b68f69319d27a45
MD5 f8070d60901ecd4fec18876ba6ebe133
BLAKE2b-256 02f790dce6ce364928acea45e7881bd1f85520447f30639798d6de80227637fe

See more details on using hashes here.
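To check a downloaded copy of the sdist against the SHA256 value above, standard tooling is enough; the commands below assume the file is in the current directory:

python -m pip hash crawldown-0.1.3.tar.gz    # prints a --hash=sha256:... line to compare
# or, on most Unix-like systems:
sha256sum crawldown-0.1.3.tar.gz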

Provenance

The following attestation bundles were made for crawldown-0.1.3.tar.gz:

Publisher: release.yml on danilotpnta/crawldown

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file crawldown-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: crawldown-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 10.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for crawldown-0.1.3-py3-none-any.whl
Algorithm Hash digest
SHA256 9c168fab8a8a27a8fce85abffcf4c05f25b84f54ef614bcbc8633deea3987414
MD5 9d5b5889d84eaa6024447f27cf3b4e1e
BLAKE2b-256 fc1eeb51a0ad490fb6367f129c95b64f08eae2d0f1c7609760ec601249bbf2e6

See more details on using hashes here.

Provenance

The following attestation bundles were made for crawldown-0.1.3-py3-none-any.whl:

Publisher: release.yml on danilotpnta/crawldown

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
