
markdown_webscraper

Scrape websites with Botasaurus, save the raw .html, then convert it to .md with markdownify.

API Reference

Core Classes

markdown_webscraper.WebsiteScraper

The main class for running the scraping process.

Constructor: WebsiteScraper(config: ScraperConfig, fetcher: PageFetcher | None = None, sleeper: Callable[[float], None] = time.sleep)

  • config: A ScraperConfig object containing scraping parameters.
  • fetcher: An optional implementation of PageFetcher. Defaults to BotasaurusFetcher.
  • sleeper: A function to handle time delays. Defaults to time.sleep.
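Because the delay function is injectable, tests can observe the politeness delays without actually sleeping. A hedged sketch of the pattern, using a stand-in crawl loop (the real `WebsiteScraper` needs Botasaurus, so the loop below only illustrates how a custom `sleeper` would be exercised):

```python
from typing import Callable

# Record requested delays instead of actually sleeping.
recorded: list[float] = []

def fake_sleep(seconds: float) -> None:
    recorded.append(seconds)

# Stand-in for the per-page loop WebsiteScraper presumably runs:
# fetch a page, then call sleeper(time_delay) before the next request.
def crawl(urls: list[str], time_delay: float,
          sleeper: Callable[[float], None]) -> int:
    fetched = 0
    for url in urls:
        fetched += 1          # the real fetch would happen here
        sleeper(time_delay)   # polite delay between requests
    return fetched

pages = crawl(["https://example.com/a", "https://example.com/b"], 2.0, fake_sleep)
```

Passing `fake_sleep` as `sleeper` to the real `WebsiteScraper` constructor should work the same way, making unit tests run instantly.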

Methods:

  • run() -> CrawlStats: Starts the scraping process based on the provided configuration. Returns CrawlStats containing the results.

markdown_webscraper.ScraperConfig

A dataclass representing the scraper configuration.

Attributes:

  • raw_html_dir (Path): Directory to save raw HTML files.
  • markdown_dir (Path): Directory to save converted Markdown files.
  • wildcard_websites (list[str]): List of root URLs for recursive scraping.
  • individual_websites (list[str]): List of specific URLs to scrape.
  • remove_header_footer (bool): Whether to prune <header> and <footer> tags.
  • markdown_convert (bool): Whether to convert HTML to Markdown.
  • time_delay (float): Delay between requests in seconds.
  • total_timeout (float): Maximum time in seconds for the entire scraping process.
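A config can also be built directly in Python instead of being loaded from JSON. The sketch below uses a stand-in dataclass mirroring the documented attributes (the real class lives in `markdown_webscraper`; the defaults shown here are illustrative assumptions, not the package's actual defaults):

```python
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class ScraperConfig:  # stand-in mirroring the documented fields
    raw_html_dir: Path
    markdown_dir: Path
    wildcard_websites: list[str] = field(default_factory=list)
    individual_websites: list[str] = field(default_factory=list)
    remove_header_footer: bool = True
    markdown_convert: bool = True
    time_delay: float = 2.0
    total_timeout: float = 180.0

config = ScraperConfig(
    raw_html_dir=Path("raw_html"),
    markdown_dir=Path("markdown"),
    individual_websites=["https://example.com/"],
)
```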

markdown_webscraper.CrawlStats

A dataclass containing statistics from a completed crawl.

Attributes:

  • pages_fetched (int): Total number of pages requested.
  • html_files_saved (int): Total number of HTML files written to disk.
  • markdown_files_saved (int): Total number of Markdown files written to disk.

Utilities

markdown_webscraper.load_config(config_path: str | Path) -> ScraperConfig

Loads a ScraperConfig from a JSON file.


Usage Example

from pathlib import Path
from markdown_webscraper import WebsiteScraper, load_config

# Load configuration from a JSON file
config = load_config("config.json")

# Initialize and run the scraper
scraper = WebsiteScraper(config=config)
stats = scraper.run()

print(f"Scraped {stats.pages_fetched} pages.")
print(f"Saved {stats.markdown_files_saved} markdown files.")

Local Development

python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt

Run with the local script:

python scrape.py --config config.json

Run as installed package CLI:

markdown-webscraper --config config.json

Configuration

The CLI expects a JSON config file:

{
  "raw_html_dir": "/home/brosnan/markdown_webscraper/raw_html/",
  "markdown_dir": "/home/brosnan/markdown_webscraper/markdown/",
  "wildcard_websites": ["https://www.allaboutcircuits.com/textbook"],
  "individual_websites": ["https://example.com/", "https://www.ti.com/lit/ds/sprs590g/sprs590g.pdf"],
  "remove_header_footer": true,
  "markdown_convert": true,
  "time_delay": 2,
  "total_timeout": 180
}
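Before a long crawl, it can help to sanity-check that the config file contains every expected key. A small hedged sketch (the package may well validate this internally already; `check_config` is a hypothetical helper, not part of the package):

```python
import json
from pathlib import Path

# Keys documented for ScraperConfig / the JSON config file.
REQUIRED_KEYS = {
    "raw_html_dir", "markdown_dir", "wildcard_websites",
    "individual_websites", "remove_header_footer",
    "markdown_convert", "time_delay", "total_timeout",
}

def check_config(path: str) -> list[str]:
    """Return the sorted list of missing keys (empty if complete)."""
    data = json.loads(Path(path).read_text())
    return sorted(REQUIRED_KEYS - data.keys())
```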

Tests

pytest tests/unit -q

Integration test against example.com:

RUN_INTEGRATION=1 pytest tests/integration/test_live_scrape.py::test_integration_example_com -m integration -q

Integration test against the allaboutcircuits textbook:

RUN_INTEGRATION=1 RUN_FULL_TEXTBOOK_INTEGRATION=1 pytest tests/integration/test_live_scrape.py::test_integration_allaboutcircuits_textbook_recursive -m integration -q

Build and Publish to PyPI

  1. Update version in pyproject.toml.
  2. Build distributions:
     python -m pip install --upgrade build twine
     python -m build
  3. Check artifacts:
     python -m twine check dist/*
  4. Upload:
     python -m twine upload dist/*
