
markdown_webscraper

Scrape websites with Botasaurus, save the raw .html, then convert it to .md with markdownify.

API Reference

Core Classes

markdown_webscraper.WebsiteScraper

The main class for running the scraping process.

Constructor: WebsiteScraper(config: ScraperConfig, fetcher: PageFetcher | None = None, sleeper: Callable[[float], None] = time.sleep)

  • config: A ScraperConfig object containing scraping parameters.
  • fetcher: An optional implementation of PageFetcher. Defaults to BotasaurusFetcher.
  • sleeper: A function to handle time delays. Defaults to time.sleep.

Methods:

  • run() -> CrawlStats: Starts the scraping process based on the provided configuration. Returns CrawlStats containing the results.
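The sleeper parameter exists so callers (and especially tests) can replace real delays with something instant. The loop below is an illustrative sketch of the dependency-injection pattern, not the library's actual crawl implementation: a test passes a recording function instead of time.sleep and can verify the delay policy without waiting.

```python
from typing import Callable, Iterable

# Illustrative crawl loop (NOT the library source) showing why `sleeper`
# is injectable: substituting a recording function lets tests run
# instantly while still checking that the delay policy was applied.
def crawl(urls: Iterable[str], time_delay: float,
          sleeper: Callable[[float], None]) -> int:
    fetched = 0
    for url in urls:
        # ... fetch `url` here ...
        fetched += 1
        sleeper(time_delay)  # pause between requests
    return fetched

recorded: list[float] = []
count = crawl(["https://example.com/a", "https://example.com/b"],
              time_delay=2.0, sleeper=recorded.append)
```

In a real test, the same idea applies to WebsiteScraper: pass `sleeper=recorded.append` (or a no-op `lambda _: None`) to the constructor instead of letting it default to time.sleep.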

markdown_webscraper.ScraperConfig

A dataclass representing the scraper configuration.

Attributes:

  • raw_html_dir (Path): Directory to save raw HTML files.
  • markdown_dir (Path): Directory to save converted Markdown files.
  • wildcard_websites (list[str]): List of root URLs for recursive scraping.
  • individual_websites (list[str]): List of specific URLs to scrape.
  • remove_header_footer (bool): Whether to prune <header> and <footer> tags.
  • markdown_convert (bool): Whether to convert HTML to Markdown.
  • time_delay (float): Delay between requests in seconds.
  • total_timeout (float): Maximum time in seconds for the entire scraping process.
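Taken together, the attribute list implies a dataclass of the following shape. This is a sketch reconstructed from the documentation above — the real class may define defaults, extra fields, or a different order — but it shows how a config can be built directly in Python instead of via load_config:

```python
from dataclasses import dataclass
from pathlib import Path

# Sketch of ScraperConfig based on the attribute list above; the actual
# dataclass in the package may differ in defaults and field order.
@dataclass
class ScraperConfig:
    raw_html_dir: Path
    markdown_dir: Path
    wildcard_websites: list[str]
    individual_websites: list[str]
    remove_header_footer: bool
    markdown_convert: bool
    time_delay: float
    total_timeout: float

config = ScraperConfig(
    raw_html_dir=Path("raw_html"),
    markdown_dir=Path("markdown"),
    wildcard_websites=["https://www.allaboutcircuits.com/textbook"],
    individual_websites=["https://example.com/"],
    remove_header_footer=True,
    markdown_convert=True,
    time_delay=2.0,
    total_timeout=180.0,
)
```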

markdown_webscraper.CrawlStats

A dataclass containing statistics from a completed crawl.

Attributes:

  • pages_fetched (int): Total number of pages requested.
  • html_files_saved (int): Total number of HTML files written to disk.
  • markdown_files_saved (int): Total number of Markdown files written to disk.

Utilities

markdown_webscraper.load_config(config_path: str | Path) -> ScraperConfig

Loads a ScraperConfig from a JSON file.


Usage Example

from pathlib import Path
from markdown_webscraper import WebsiteScraper, load_config

# Load configuration from a JSON file
config = load_config("config.json")

# Initialize and run the scraper
scraper = WebsiteScraper(config=config)
stats = scraper.run()

print(f"Scraped {stats.pages_fetched} pages.")
print(f"Saved {stats.markdown_files_saved} markdown files.")

Local Development

python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt

Run with local script:

python scrape.py --config config.json

Run as installed package CLI:

markdown-webscraper --config config.json

Configuration

The CLI expects a JSON config file:

{
  "raw_html_dir": "/home/brosnan/markdown_webscraper/raw_html/",
  "markdown_dir": "/home/brosnan/markdown_webscraper/markdown/",
  "wildcard_websites": ["https://www.allaboutcircuits.com/textbook"],
  "individual_websites": ["https://example.com/", "https://www.ti.com/lit/ds/sprs590g/sprs590g.pdf"],
  "remove_header_footer": true,
  "markdown_convert": true,
  "time_delay": 2,
  "total_timeout": 180
}
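Before starting a long crawl, it can pay to sanity-check the config file against the expected keys and types. The helper below is a convenience sketch (check_config is not part of the package) that reports missing keys and type mismatches:

```python
import json

# Expected keys and types, mirroring the ScraperConfig attributes
# documented above. This checker is a convenience sketch, not part of
# the markdown_webscraper package.
REQUIRED_KEYS = {
    "raw_html_dir": str,
    "markdown_dir": str,
    "wildcard_websites": list,
    "individual_websites": list,
    "remove_header_footer": bool,
    "markdown_convert": bool,
    "time_delay": (int, float),
    "total_timeout": (int, float),
}

def check_config(raw: dict) -> list[str]:
    """Return a list of problems; an empty list means the config looks sane."""
    problems = []
    for key, expected in REQUIRED_KEYS.items():
        if key not in raw:
            problems.append(f"missing key: {key}")
        elif not isinstance(raw[key], expected):
            problems.append(f"{key} has unexpected type {type(raw[key]).__name__}")
    return problems

# Example: a config missing almost everything.
problems = check_config(json.loads('{"raw_html_dir": "raw_html"}'))
```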

Tests

pytest tests/unit -q

Integration test against example.com:

RUN_INTEGRATION=1 pytest tests/integration/test_live_scrape.py::test_integration_example_com -m integration -q

Integration test against the allaboutcircuits textbook:

RUN_INTEGRATION=1 RUN_FULL_TEXTBOOK_INTEGRATION=1 pytest tests/integration/test_live_scrape.py::test_integration_allaboutcircuits_textbook_recursive -m integration -q

Build and Publish to PyPI

  1. Update version in pyproject.toml.
  2. Build distributions:
python -m pip install --upgrade build twine
python -m build
  3. Check artifacts:
python -m twine check dist/*
  4. Upload:
python -m twine upload dist/*
