docpull
Pull documentation from any website and convert it into clean, AI-ready Markdown. Fast, type-safe, secure, and optimized for building knowledge bases or training datasets.
NEW in v1.3.0: Rich structured metadata extraction (Open Graph, JSON-LD) for enhanced AI/RAG integration.
v1.2.0: 15 major features including language filtering, deduplication, auto-indexing, multi-source configuration, and more. Real-world testing shows 58% size reduction with automatic optimization.
Why docpull?
Unlike tools like wget or httrack, docpull extracts only the main content, removing ads, navbars, and clutter. Output is clean Markdown with optional YAML frontmatter—ideal for RAG systems, offline docs, or ML pipelines.
Key Features
Core Features (v1.0+)
- Works on any documentation site
- Smart extraction of main content
- Async + parallel fetching (up to 10× faster)
- Optional JavaScript rendering via Playwright
- Sitemap + link crawling
- Rate limiting, timeouts, content-type checks
- Saves docs in structured Markdown with YAML metadata
- Built-in Stripe profile as reference implementation (custom profiles easily added)
NEW in v1.3.0: Rich Metadata Extraction
- Structured Metadata: Extract Open Graph, JSON-LD, and microdata during fetch
- Enhanced Frontmatter: Adds author, description, keywords, images, publish dates, and more
- AI/RAG Ready: Richer context for embeddings and retrieval systems
- Opt-in Feature: Enabled with the --rich-metadata flag
v1.2.0: Advanced Optimization
- Language Filtering: Auto-detect and filter by language (skip 352+ translation files)
- Deduplication: Remove duplicates with SHA-256 hashing (save 10+ MB on duplicate content)
- Auto-Index Generation: Create navigable INDEX.md with tree/TOC/categories/stats
- Size Limits: Control file and total download size (skip/truncate oversized files)
- Multi-Source Configuration: Configure multiple docs in one YAML file
- Selective Crawling: Include/exclude URL patterns for targeted fetching
- Content Filtering: Remove verbose sections (Examples, Changelog, etc.)
- Format Conversion: Output to Markdown, TOON (compact), JSON, or SQLite
- Smart Naming: 4 naming strategies (full, short, flat, hierarchical)
- Metadata Extraction: Extract titles, URLs, stats to metadata.json
- Update Detection: Only download changed files (checksums, ETags)
- Incremental Mode: Resume interrupted downloads with checkpointing
- Hooks & Plugins: Python plugin system for custom processing
- Git Integration: Auto-commit changes with customizable messages
- Archive Mode: Create tar.gz/zip archives for distribution
Real-world impact: Testing with 1,914 files (31 MB) → 13 MB (58% reduction) with all optimizations enabled.
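Size limits such as 200kb and 1mb are given as human-readable strings. A minimal sketch of how such a parser could work (parse_size is a hypothetical helper for illustration, not part of docpull's public API):

```python
import re

def parse_size(text: str) -> int:
    """Convert a human-readable size like '200kb' or '1mb' to bytes."""
    units = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3}
    match = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([kmg]?b)", text.strip().lower())
    if not match:
        raise ValueError(f"unrecognized size: {text!r}")
    number, unit = match.groups()
    return int(float(number) * units[unit])
```

For example, parse_size("200kb") returns 204800 bytes.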
Quick Start
pip install docpull
docpull --doctor # verify installation
# Basic usage
docpull https://aptos.dev
docpull stripe # use a built-in profile
# NEW: Simple optimization (v1.2.0)
docpull https://code.claude.com/docs --language en --create-index
# NEW: Rich metadata extraction (v1.3.0)
docpull https://docs.anthropic.com --rich-metadata --create-index
# NEW: Advanced optimization (v1.2.0)
docpull https://aptos.dev \
--deduplicate \
--keep-variant mainnet \
--max-file-size 200kb \
--create-index
# NEW: Multi-source configuration (v1.2.0)
docpull --sources-file examples/multi-source-optimized.yaml
JavaScript-heavy sites
pip install docpull[js]
python -m playwright install chromium
docpull https://site.com --js
Python API
from docpull import GenericAsyncFetcher
fetcher = GenericAsyncFetcher(
    url_or_profile="https://aptos.dev",
    output_dir="./docs",
    max_pages=100,
    max_concurrent=20,
)
fetcher.fetch()
Common Options
Core Options
- --doctor – verify installation and dependencies
- --max-pages N – limit crawl size
- --max-depth N – restrict link depth
- --max-concurrent N – control parallel fetches
- --js – enable Playwright rendering
- --output-dir DIR – output directory
- --rate-limit X – seconds between requests
- --no-skip-existing – re-download existing files
- --dry-run – test without downloading
NEW in v1.2.0: Optimization Options
- --language LANG – filter by language (e.g., en)
- --exclude-languages LANG [LANG ...] – exclude languages
- --deduplicate – remove duplicate files
- --keep-variant PATTERN – keep files matching pattern when deduplicating
- --max-file-size SIZE – max file size (e.g., 200kb, 1mb)
- --max-total-size SIZE – max total download size
- --include-paths PATTERN [PATTERN ...] – only crawl matching URLs
- --exclude-paths PATTERN [PATTERN ...] – skip matching URLs
- --exclude-sections NAME [NAME ...] – remove sections by header name
- --format {markdown,toon,json,sqlite} – output format
- --naming-strategy {full,short,flat,hierarchical} – file naming strategy
- --create-index – generate INDEX.md with navigation
- --extract-metadata – extract metadata to metadata.json
- --rich-metadata – extract rich structured metadata (Open Graph, JSON-LD) during fetch
- --update-only-changed – only download changed files
- --incremental – enable incremental mode with resume
- --git-commit – auto-commit changes
- --git-message MSG – commit message template
- --archive – create compressed archive
- --archive-format {tar.gz,tar.bz2,tar.xz,zip} – archive format
- --sources-file PATH – multi-source configuration file
See docpull --help for the complete list of options.
Performance
Async fetching drastically reduces runtime:
| Pages | Sync | Async | Speedup |
|---|---|---|---|
| 50 | ~50s | ~6s | 8× faster |
Higher concurrency yields even better results.
Output Format
Each downloaded page becomes a Markdown file:
---
url: https://stripe.com/docs/payments
fetched: 2025-11-13
---
# Payment Intents
...
With --rich-metadata, the frontmatter includes Open Graph, JSON-LD, and other structured metadata:
---
url: https://stripe.com/docs/payments
fetched: 2025-11-13
title: Accept a payment
description: Learn how to accept payments with the Payment Intents API
author: Stripe
keywords: [payments, api, stripe, checkout]
image: https://stripe.com/img/docs-preview.png
type: article
site_name: Stripe Documentation
---
# Payment Intents
...
Directory layout mirrors the target site's structure.
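A downstream consumer can split the frontmatter from the body with a few lines of stdlib Python. This is a simplified sketch (split_frontmatter is a hypothetical helper); a real pipeline would use PyYAML or python-frontmatter for full YAML support:

```python
def split_frontmatter(text: str):
    """Split a saved Markdown file into (metadata dict, body).

    Handles only simple 'key: value' frontmatter lines; nested YAML
    (e.g. lists) would need a real YAML parser.
    """
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        key, _, value = line.partition(": ")
        if key:
            meta[key] = value
    return meta, body.lstrip("\n")
```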
Configuration File
Simple Configuration (v1.0+)
output_dir: ./docs
rate_limit: 0.5
sources:
- stripe # Built-in profile
- https://docs.example.com # Or any URL
Run with:
docpull --config config.yaml
NEW: Multi-Source Configuration (v1.2.0)
sources:
  anthropic:
    url: https://docs.anthropic.com
    language: en
    max_file_size: 200kb
    create_index: true
    rich_metadata: true  # Extract Open Graph, JSON-LD metadata
  claude-code:
    url: https://code.claude.com/docs
    language: en  # Skips 352 translation files!
    create_index: true
  aptos:
    url: https://aptos.dev
    deduplicate: true
    keep_variant: mainnet  # Skips 304 duplicates!
    max_file_size: 200kb
    include_paths:
      - "build/guides/*"

output_dir: ./docs
rate_limit: 0.5
git_commit: true
git_message: "Update docs - {date}"
extract_metadata: true
archive: true
Run with:
docpull --sources-file config.yaml
See examples/ directory for more configuration examples.
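Conceptually, each source's keys override the top-level defaults (output_dir, rate_limit, and so on). A hypothetical sketch of that merge, using the key names from the example above (resolve_source is illustrative, not part of docpull's API):

```python
# Top-level keys that act as defaults for every source (assumed set,
# taken from the example configuration above).
GLOBAL_KEYS = {"output_dir", "rate_limit", "git_commit", "git_message",
               "extract_metadata", "archive"}

def resolve_source(config: dict, name: str) -> dict:
    """Merge top-level defaults into one source's settings.

    The source's own keys win on conflict, mirroring how per-source
    overrides would be expected to behave.
    """
    defaults = {key: value for key, value in config.items() if key in GLOBAL_KEYS}
    return {**defaults, **config["sources"][name]}
```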
Custom Profiles
docpull includes a Stripe profile as reference. Create custom profiles for other sites:
from docpull.profiles.base import SiteProfile
MY_PROFILE = SiteProfile(
    name="mysite",
    domains={"docs.mysite.com"},
    include_patterns=["/docs/", "/api/"],
    sitemap_url="https://docs.mysite.com/sitemap.xml",
    rate_limit=0.5,
)
Want to contribute profiles? Submit a PR with your custom profile! Popular ones may be added to the core or a community profiles repository.
Security
- HTTPS-only
- Blocks private network IPs
- 50MB page size limit
- Timeout controls
- Validates content-type
- Playwright sandboxing
Troubleshooting
- Installation issues: run docpull --doctor to diagnose problems
- Missing dependencies: see TROUBLESHOOTING.md for common fixes
- Site requires JS: install Playwright and pass --js
- Slow or rate limited: lower concurrency or raise --rate-limit
- Large sites: set --max-pages
For detailed troubleshooting, see TROUBLESHOOTING.md.
v1.2.0 Feature Examples
Language Filtering
Automatically detect and filter documentation by language:
# English only (auto-detects /en/, _en_, docs_en_, etc.)
docpull https://code.claude.com/docs --language en --create-index
Impact: Claude Code docs ship in 9 languages, which means 352 unnecessary files for English-only users.
Deduplication
Remove duplicate files based on content hash:
# Keep mainnet version, skip testnet/devnet duplicates
docpull https://aptos.dev --deduplicate --keep-variant mainnet --create-index
Impact: Aptos Move reference docs are duplicated across 3 environments, yielding 304 duplicate files (~10 MB saved).
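A minimal sketch of content-hash deduplication with a keep-variant preference (illustrative, not docpull's actual implementation):

```python
import hashlib

def deduplicate(files: dict[str, bytes], keep_variant: str = "") -> dict[str, bytes]:
    """Keep one file per unique SHA-256 digest, preferring paths that
    contain keep_variant (e.g. 'mainnet') when contents collide."""
    chosen: dict[str, str] = {}  # content digest -> winning path
    # Sort so preferred-variant paths are seen (and recorded) first.
    for path in sorted(files, key=lambda p: (keep_variant not in p, p)):
        digest = hashlib.sha256(files[path]).hexdigest()
        chosen.setdefault(digest, path)
    return {path: files[path] for path in chosen.values()}
```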
Format Conversion
Convert to different formats for various use cases:
# TOON format (40-60% size reduction, optimized for LLMs)
docpull https://docs.anthropic.com --format toon --language en
# SQLite database with full-text search
docpull https://docs.anthropic.com --format sqlite --language en
# Structured JSON
docpull https://docs.anthropic.com --format json --language en
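The SQLite target with full-text search can be approximated with the stdlib sqlite3 module and an FTS5 virtual table. The schema below is illustrative only; docpull's actual database layout may differ:

```python
import sqlite3

def build_docs_db(pages: list[tuple[str, str]], db_path: str = ":memory:") -> sqlite3.Connection:
    """Load (url, markdown) pairs into an FTS5 table for full-text search."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE VIRTUAL TABLE docs USING fts5(url, content)")
    conn.executemany("INSERT INTO docs VALUES (?, ?)", pages)
    conn.commit()
    return conn

# Query example:
# conn.execute("SELECT url FROM docs WHERE docs MATCH 'payment'").fetchall()
```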
Incremental Updates
Only download changed files:
docpull https://docs.anthropic.com \
--incremental \
--update-only-changed \
--git-commit \
--git-message "Update docs - {date}"
Use case: Regular documentation updates with minimal bandwidth usage.
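Checksum-based change detection boils down to comparing digests against the previous run's recorded state. A simplified sketch (docpull also supports ETags, which this example omits):

```python
import hashlib

def changed_urls(pages: dict[str, bytes], state: dict[str, str]) -> list[str]:
    """Return URLs whose SHA-256 digest differs from the recorded state.

    `state` maps url -> last-seen digest and is updated in place; a real
    tool would persist it between runs.
    """
    changed = []
    for url, body in pages.items():
        digest = hashlib.sha256(body).hexdigest()
        if state.get(url) != digest:
            changed.append(url)
            state[url] = digest
    return changed
```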
Complete Optimization Pipeline
Combine all optimizations:
docpull --sources-file examples/multi-source-optimized.yaml
See examples/ directory for comprehensive configuration examples.
Real-world results: Testing with 4 documentation sources (Anthropic, Claude Code, Aptos, Shelby):
- Before: 1,914 files, 31 MB, no navigation
- After: 1,250 files, 13 MB (58% reduction), full indexes generated
- One command instead of 4+ separate commands with manual optimization
What's New in v1.3.0
This release adds rich structured metadata extraction for better AI/RAG integration.
New Feature:
- Rich Metadata Extraction: Extract Open Graph, JSON-LD, microdata, and other structured metadata during fetch
- Adds author, description, keywords, images, publish dates, and more to frontmatter
- Enhances AI/RAG systems with richer context
- Enabled with the --rich-metadata flag or rich_metadata: true in config
- Powered by the extruct library
Example enhanced frontmatter:
---
url: https://docs.example.com/guide
fetched: 2025-11-20
title: Getting Started Guide
description: Learn the basics of our platform
author: John Doe
keywords: [tutorial, guide, api]
image: https://docs.example.com/og-image.png
type: article
published_time: 2024-01-15T10:00:00Z
---
Backward Compatible: All existing workflows continue to work unchanged. Rich metadata is opt-in.
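docpull relies on the extruct library for this extraction. As a rough illustration of the idea, the Open Graph slice can be collected with nothing but the stdlib HTML parser:

```python
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    """Collect <meta property="og:*"> tags: a simplified stand-in for
    the Open Graph portion of what extruct extracts."""

    def __init__(self):
        super().__init__()
        self.metadata: dict[str, str] = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:") and "content" in attrs:
            # og:title -> title, og:image -> image, etc.
            self.metadata[prop[3:]] = attrs["content"]
```

After parser.feed(html), parser.metadata holds keys like title, image, and type, ready to merge into the frontmatter.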
What's New in v1.2.0
This release adds 15 major features across 4 phases. See CHANGELOG.md for complete release notes.
Highlights:
- Multi-source YAML configuration
- Language filtering with auto-detection
- SHA-256 based deduplication
- Auto-index generation (tree, TOC, categories, stats)
- 4 output formats (Markdown, TOON, JSON, SQLite)
- Incremental mode with resume capability
- Git integration and archive creation
- Python plugin/hook system
Backward Compatible: All v1.0+ workflows continue to work unchanged.
License
MIT License - see LICENSE file for details
Download files
File details
Details for the file docpull-1.3.0.tar.gz.
File metadata
- Download URL: docpull-1.3.0.tar.gz
- Size: 87.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9b748ff3a54e14d6e528b735c166b32c0c6474d3e871f74671bb19e6deb5a63b |
| MD5 | bd01798ae71c5ef71e3b7c2bfefb0ca2 |
| BLAKE2b-256 | 32a7fe5d3e40fe1d07a5e0714d9f306491654931e87d8600965007c92e40feee |
Provenance
The following attestation bundles were made for docpull-1.3.0.tar.gz:
Publisher: publish.yml on raintree-technology/docpull

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: docpull-1.3.0.tar.gz
- Subject digest: 9b748ff3a54e14d6e528b735c166b32c0c6474d3e871f74671bb19e6deb5a63b
- Sigstore transparency entry: 710719254
- Permalink: raintree-technology/docpull@2e3fcc13c4761b22f3a91f66c882aa2e0d15f1c8
- Branch / Tag: refs/tags/v1.3.0
- Owner: https://github.com/raintree-technology
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@2e3fcc13c4761b22f3a91f66c882aa2e0d15f1c8
- Trigger Event: release
File details
Details for the file docpull-1.3.0-py3-none-any.whl.
File metadata
- Download URL: docpull-1.3.0-py3-none-any.whl
- Size: 83.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3b09b19be818314e3d6b56ea172535526e5eaf2bb594ef5edc543080e54d61c8 |
| MD5 | 6c919725879fd74ff8ca660357c061d9 |
| BLAKE2b-256 | 74f698781e002863021b91c0387d6a127bf2bfc324437b8b3d7807f2b7a7d957 |
Provenance
The following attestation bundles were made for docpull-1.3.0-py3-none-any.whl:
Publisher: publish.yml on raintree-technology/docpull

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: docpull-1.3.0-py3-none-any.whl
- Subject digest: 3b09b19be818314e3d6b56ea172535526e5eaf2bb594ef5edc543080e54d61c8
- Sigstore transparency entry: 710719329
- Permalink: raintree-technology/docpull@2e3fcc13c4761b22f3a91f66c882aa2e0d15f1c8
- Branch / Tag: refs/tags/v1.3.0
- Owner: https://github.com/raintree-technology
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@2e3fcc13c4761b22f3a91f66c882aa2e0d15f1c8
- Trigger Event: release