
crawldown

CI · PyPI version · License: MIT · Python 3.10+

Crawl any website and save every page as organized Markdown files.

crawldown mirrors a website's URL structure into a local directory of .md files — perfect for archiving documentation, feeding content into RAG pipelines, or reading offline.

crawldown https://docs.example.com --output ./docs-mirror

This produces a directory tree that mirrors the site's URLs:
docs-mirror/
├── index.md
├── getting-started/
│   ├── index.md
│   └── installation.md
└── api/
    ├── reference.md
    └── authentication.md

Installation

pip install crawldown

Or with uv:

uv tool install crawldown

First-time setup: crawldown uses crawl4ai under the hood, which requires a one-time browser install to handle JavaScript-rendered pages:

crawl4ai-setup

Quickstart

CLI

# Crawl an entire site
crawldown https://docs.example.com --output ./output

# Limit crawl depth
crawldown https://docs.example.com --output ./output --depth 2

# Add a delay between requests (seconds)
crawldown https://docs.example.com --output ./output --delay 0.5

# Only crawl URLs matching a pattern
crawldown https://docs.example.com --output ./output --include '/docs/*'

# Skip URLs matching a pattern
crawldown https://docs.example.com --output ./output --exclude '/api/*'

# Skip robots.txt enforcement
crawldown https://docs.example.com --output ./output --no-robots

Python API

import asyncio
from crawldown import crawl

asyncio.run(crawl("https://docs.example.com", output_dir="./output"))

With options:

import asyncio
from crawldown import crawl
from crawldown.models import CrawlConfig

config = CrawlConfig(
    url="https://docs.example.com",
    output_dir="./output",
    max_depth=3,
    delay=0.5,
    respect_robots=True,
    include=["/docs/*"],   # only crawl these paths (glob)
    exclude=["/api/*"],    # skip these paths (glob)
)

asyncio.run(crawl(config))

How it works

  1. Starts at the given URL and fetches the page using crawl4ai (handles JavaScript-rendered pages).
  2. Extracts all links that stay within the same domain and URL prefix.
  3. Converts each page to Markdown and saves it at a path matching the URL structure (see the sketch after this list).
  4. Repeats for every discovered link up to max_depth (default: unlimited).
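
For illustration, the URL-to-path mapping in step 3 behaves roughly like the sketch below. The function url_to_path is a hypothetical name, not part of crawldown's API; the exact rules (index.md for directory-style URLs, .md appended to leaf pages) are inferred from the example tree above, not from crawldown's source.

import os
from urllib.parse import urlparse

def url_to_path(url: str, output_dir: str) -> str:
    """Hypothetical sketch: map a crawled URL to a local .md file path."""
    path = urlparse(url).path
    if path in ("", "/"):
        return os.path.join(output_dir, "index.md")  # site root -> index.md
    if path.endswith("/"):
        path += "index"  # directory-style URL -> <dir>/index.md
    return os.path.join(output_dir, path.lstrip("/") + ".md")

# e.g. https://docs.example.com/getting-started/ -> getting-started/index.md
#      https://docs.example.com/api/reference    -> api/reference.md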

Options

| Flag | Default | Description |
|------|---------|-------------|
| --output, -o | ./crawldown-output | Directory to save Markdown files |
| --depth, -d | unlimited | Max link-follow depth |
| --delay | 0.0 | Seconds to wait between requests |
| --include | (none) | Only crawl paths matching this glob (repeatable) |
| --exclude | (none) | Skip paths matching this glob (repeatable) |
| --no-robots | off | Ignore robots.txt |
| --version | | Show version and exit |
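
How --include and --exclude interact is not spelled out above; a plausible reading, using fnmatch-style glob matching (an assumption; crawldown's actual matching and precedence rules may differ):

from fnmatch import fnmatch

include = ["/docs/*"]   # --include (repeatable)
exclude = ["/api/*"]    # --exclude (repeatable)

def should_crawl(path: str) -> bool:
    # Assumed precedence: an exclude match always wins; otherwise the
    # path must match at least one include pattern (if any were given).
    if any(fnmatch(path, pat) for pat in exclude):
        return False
    return not include or any(fnmatch(path, pat) for pat in include)

print(should_crawl("/docs/intro"))    # True
print(should_crawl("/api/v1/users"))  # False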

Contributing

We welcome contributions of all kinds. See CONTRIBUTING.md for how to get started.


License

MIT — see LICENSE.

Download files


Source Distribution

crawldown-0.1.1.tar.gz (270.8 kB)

Built Distribution

crawldown-0.1.1-py3-none-any.whl (10.0 kB)

File details

Details for the file crawldown-0.1.1.tar.gz.

File metadata

  • Download URL: crawldown-0.1.1.tar.gz
  • Size: 270.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for crawldown-0.1.1.tar.gz

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 54e4b6d4be642af1aa0e1cb8c93ce609509d12ffcaa66eedd524209bcdf624f5 |
| MD5 | 7bc5054aa12d4ead0826422f19b3b319 |
| BLAKE2b-256 | a67226438460c69b6248b1c39ed119366c691174d7c48cc1e95a66bcd048deb2 |
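
To verify a downloaded sdist against the SHA256 digest above, plain hashlib suffices (the filename assumes the archive sits in the current directory):

import hashlib

EXPECTED = "54e4b6d4be642af1aa0e1cb8c93ce609509d12ffcaa66eedd524209bcdf624f5"

with open("crawldown-0.1.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED else f"hash mismatch: {digest}")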


Provenance

The following attestation bundles were made for crawldown-0.1.1.tar.gz:

Publisher: release.yml on danilotpnta/crawldown


File details

Details for the file crawldown-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: crawldown-0.1.1-py3-none-any.whl
  • Size: 10.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for crawldown-0.1.1-py3-none-any.whl

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 033eecce5c4a64dd52dc2600550691df97d25000c8b993b93ca7758fde869cc8 |
| MD5 | 17f6247ceb43ee851e301e59b9236829 |
| BLAKE2b-256 | cb573d8822f0dcb12635fe0b2d20f847490ccdf947cd7d1c87040ba7b3c9ccbf |


Provenance

The following attestation bundles were made for crawldown-0.1.1-py3-none-any.whl:

Publisher: release.yml on danilotpnta/crawldown

