
crawldown

CI | PyPI version | License: MIT | Python 3.10+

Crawl any website and save every page as organized Markdown files.

crawldown mirrors a website's URL structure into a local directory of .md files — perfect for archiving documentation, feeding content into RAG pipelines, or reading offline.

crawldown https://docs.example.com --output ./docs-mirror
docs-mirror/
├── index.md
├── getting-started/
│   ├── index.md
│   └── installation.md
└── api/
    ├── reference.md
    └── authentication.md

Installation

pip install crawldown

Or with uv:

uv tool install crawldown

First-time setup: crawldown uses crawl4ai under the hood, which requires a one-time browser install to handle JavaScript-rendered pages:

crawl4ai-setup

Quickstart

CLI

# Crawl an entire site
crawldown https://docs.example.com --output ./output

# Limit crawl depth
crawldown https://docs.example.com --output ./output --depth 2

# Add a delay between requests (seconds)
crawldown https://docs.example.com --output ./output --delay 0.5

# Only crawl URLs matching a pattern
crawldown https://docs.example.com --output ./output --include '/docs/*'

# Skip URLs matching a pattern
crawldown https://docs.example.com --output ./output --exclude '/api/*'

# Skip robots.txt enforcement
crawldown https://docs.example.com --output ./output --no-robots

Python API

import asyncio
from crawldown import crawl

asyncio.run(crawl("https://docs.example.com", output_dir="./output"))

With options:

import asyncio
from crawldown import crawl
from crawldown.models import CrawlConfig

config = CrawlConfig(
    url="https://docs.example.com",
    output_dir="./output",
    max_depth=3,
    delay=0.5,
    respect_robots=True,
    include=["/docs/*"],   # only crawl these paths (glob)
    exclude=["/api/*"],    # skip these paths (glob)
)

asyncio.run(crawl(config))

How it works

  1. Starts at the given URL and fetches the page using crawl4ai (handles JavaScript-rendered pages).
  2. Extracts all links that stay within the same domain.
  3. Converts each page to Markdown and saves it at a path matching the URL structure.
  4. Repeats for every discovered link up to max_depth (default: unlimited).
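The URL-to-path mapping in step 3 can be sketched as a small helper. This is an illustrative approximation, not crawldown's actual internals: `url_to_markdown_path` is a hypothetical function, and the assumption is that directory-style URLs (ending in `/` or empty) become `index.md` while other paths get a `.md` extension, matching the example tree above.

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

def url_to_markdown_path(url: str, output_dir: str = "./output") -> str:
    """Map a page URL to a local .md path mirroring the URL structure.

    Hypothetical sketch (not crawldown's real implementation):
    - "" or trailing "/" paths map to <dir>/index.md
    - other paths get their extension replaced with .md
    """
    path = urlparse(url).path
    if path in ("", "/") or path.endswith("/"):
        rel = PurePosixPath(path.lstrip("/")) / "index.md"
    else:
        rel = PurePosixPath(path.lstrip("/")).with_suffix(".md")
    return str(PurePosixPath(output_dir) / rel)
```

For example, under these assumptions `https://docs.example.com/getting-started/` would land at `output/getting-started/index.md`, matching the directory layout shown in the intro.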

Options

Flag           Default             Description
--output, -o   ./crawldown-output  Directory to save Markdown files
--depth, -d    unlimited           Max link-follow depth
--delay        0.0                 Seconds to wait between requests
--include      (none)              Only crawl paths matching this glob (repeatable)
--exclude      (none)              Skip paths matching this glob (repeatable)
--no-robots    off                 Ignore robots.txt
--version      (n/a)               Show version and exit
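How --include and --exclude might combine can be sketched with standard glob matching. This is a hedged illustration, not crawldown's documented behavior: `should_crawl` is a hypothetical function, and the assumed semantics are that a URL's path must match at least one include pattern (when any are given) and no exclude pattern.

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

def should_crawl(url: str, include=None, exclude=None) -> bool:
    """Hypothetical include/exclude filter using shell-style globs.

    Assumed semantics: include patterns whitelist paths (if provided),
    exclude patterns blacklist paths; exclude wins over include.
    """
    path = urlparse(url).path
    if include and not any(fnmatch(path, pat) for pat in include):
        return False
    if exclude and any(fnmatch(path, pat) for pat in exclude):
        return False
    return True
```

Note that fnmatch's `*` also matches `/`, so a pattern like `/docs/*` would match nested paths such as `/docs/getting-started/installation` under this sketch.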

Contributing

We welcome contributions of all kinds. See CONTRIBUTING.md for how to get started.


License

MIT — see LICENSE.
