
markdown_webscraper

Scrape websites with Botasaurus, save the raw .html, then convert .html and .pdf files to .md with markitdown.

API Reference

Core Classes

markdown_webscraper.WebsiteScraper

The main class for running the scraping process.

Constructor: WebsiteScraper(config: ScraperConfig, fetcher: PageFetcher | None = None, sleeper: Callable[[float], None] = time.sleep)

  • config: A ScraperConfig object containing scraping parameters.
  • fetcher: An optional implementation of PageFetcher. Defaults to BotasaurusFetcher.
  • sleeper: A function to handle time delays. Defaults to time.sleep.
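The `sleeper` hook accepts any callable with the shape `Callable[[float], None]`, which makes delay behavior easy to test. A minimal sketch, assuming only the documented signature (the `RecordingSleeper` name is illustrative, not part of the package):

```python
# A sleeper that records requested delays instead of blocking,
# matching the Callable[[float], None] shape the constructor expects.
class RecordingSleeper:
    def __init__(self):
        self.delays = []

    def __call__(self, seconds: float) -> None:
        self.delays.append(seconds)

sleeper = RecordingSleeper()
sleeper(2.0)  # what the scraper would do between two requests
sleeper(2.0)
print(sum(sleeper.delays))
```

In a unit test this could be passed as `WebsiteScraper(config=config, sleeper=sleeper)` so runs finish instantly while still letting you assert on the delays that were requested.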

Methods:

  • run() -> CrawlStats: Starts the scraping process based on the provided configuration. Returns CrawlStats containing the results.

markdown_webscraper.ScraperConfig

A dataclass representing the scraper configuration.

Attributes:

  • raw_html_dir (Path): Directory to save raw HTML files.
  • markdown_dir (Path): Directory to save converted Markdown files.
  • wildcard_websites (list[str]): List of root URLs for recursive scraping.
  • individual_websites (list[str]): List of specific URLs to scrape.
  • remove_header_footer (bool): Whether to prune <header> and <footer> tags.
  • markdown_convert (bool): Whether to convert HTML to Markdown.
  • time_delay (float): Delay between requests in seconds.
  • total_timeout (float): Maximum time in seconds for the entire scraping process.
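Building the configuration programmatically looks roughly like the sketch below. This uses a stand-in dataclass that mirrors the documented fields (the defaults shown are assumptions, not the package's own definition):

```python
from dataclasses import dataclass
from pathlib import Path

# Stand-in mirroring the documented ScraperConfig fields; the real
# dataclass lives in markdown_webscraper. Defaults here are illustrative.
@dataclass
class ScraperConfig:
    raw_html_dir: Path
    markdown_dir: Path
    wildcard_websites: list
    individual_websites: list
    remove_header_footer: bool = True
    markdown_convert: bool = True
    time_delay: float = 2.0
    total_timeout: float = 180.0

config = ScraperConfig(
    raw_html_dir=Path("raw_html"),
    markdown_dir=Path("markdown"),
    wildcard_websites=["https://www.allaboutcircuits.com/textbook"],
    individual_websites=["https://example.com/"],
)
print(config.time_delay)
```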

markdown_webscraper.CrawlStats

A dataclass containing statistics from a completed crawl.

Attributes:

  • pages_fetched (int): Total number of pages requested.
  • html_files_saved (int): Total number of HTML files written to disk.
  • markdown_files_saved (int): Total number of Markdown files written to disk.

Utilities

markdown_webscraper.load_config(config_path: str | Path) -> ScraperConfig

Loads a ScraperConfig from a JSON file.
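Assuming the JSON keys mirror the dataclass fields (as the Configuration section below suggests), loading behaves roughly like this stand-in sketch: read the JSON, coerce the directory entries to `Path`, and build the config. `load_config_sketch` is illustrative; the real function is `markdown_webscraper.load_config` and returns a `ScraperConfig`:

```python
import json
import os
import tempfile
from pathlib import Path

# Stand-in sketch of load_config's behavior, not the package's code.
def load_config_sketch(config_path):
    data = json.loads(Path(config_path).read_text())
    data["raw_html_dir"] = Path(data["raw_html_dir"])
    data["markdown_dir"] = Path(data["markdown_dir"])
    return data  # the real function returns a ScraperConfig

cfg = {"raw_html_dir": "raw_html", "markdown_dir": "markdown",
       "wildcard_websites": [], "individual_websites": [],
       "remove_header_footer": True, "markdown_convert": True,
       "time_delay": 2, "total_timeout": 180}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(cfg, f)
loaded = load_config_sketch(f.name)
print(loaded["raw_html_dir"])
os.unlink(f.name)
```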


Usage Example

from pathlib import Path
from markdown_webscraper import WebsiteScraper, load_config

# Load configuration from a JSON file
config = load_config("config.json")

# Initialize and run the scraper
scraper = WebsiteScraper(config=config)
stats = scraper.run()

print(f"Scraped {stats.pages_fetched} pages.")
print(f"Saved {stats.markdown_files_saved} markdown files.")

Local Development

python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt

Run with local script:

python scrape.py --config config.json

Run as installed package CLI:

markdown-webscraper --config config.json

Configuration

The CLI expects a JSON config file:

{
  "raw_html_dir": "/home/brosnan/markdown_webscraper/raw_html/",
  "markdown_dir": "/home/brosnan/markdown_webscraper/markdown/",
  "wildcard_websites": ["https://www.allaboutcircuits.com/textbook"],
  "individual_websites": ["https://example.com/", "https://www.ti.com/lit/ds/sprs590g/sprs590g.pdf"],
  "remove_header_footer": true,
  "markdown_convert": true,
  "time_delay": 2,
  "total_timeout": 180
}
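Since a malformed config file only fails at load time, it can be worth checking the required keys up front. A small illustrative helper (not part of the package), assuming the eight keys shown above are all required:

```python
import json

REQUIRED_KEYS = {
    "raw_html_dir", "markdown_dir", "wildcard_websites",
    "individual_websites", "remove_header_footer",
    "markdown_convert", "time_delay", "total_timeout",
}

# Illustrative helper: report keys missing from a config file's text
# before handing it to load_config.
def missing_keys(config_text: str) -> set:
    return REQUIRED_KEYS - json.loads(config_text).keys()

print(missing_keys('{"time_delay": 2}'))
```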

Tests

pytest tests/unit -q

Integration test against example.com:

RUN_INTEGRATION=1 pytest tests/integration/test_live_scrape.py::test_integration_example_com -m integration -q

Integration test against the allaboutcircuits.com textbook:

RUN_INTEGRATION=1 RUN_FULL_TEXTBOOK_INTEGRATION=1 pytest tests/integration/test_live_scrape.py::test_integration_allaboutcircuits_textbook_recursive -m integration -q

Build and Publish to PyPI

  1. Update version in pyproject.toml.
  2. Build distributions:
python -m pip install --upgrade build twine
python -m build
  3. Check artifacts:
python -m twine check dist/*
  4. Upload:
python -m twine upload dist/*
