markdown_webscraper
Scrape websites with botasaurus, save raw .html, then convert .html and .pdf to .md with markitdown.
API Reference
Core Classes
markdown_webscraper.WebsiteScraper
The main class for running the scraping process.
Constructor:
WebsiteScraper(config: ScraperConfig, fetcher: PageFetcher | None = None, sleeper: Callable[[float], None] = time.sleep)
- config: A ScraperConfig object containing scraping parameters.
- fetcher: An optional implementation of PageFetcher. Defaults to BotasaurusFetcher.
- sleeper: A function to handle time delays. Defaults to time.sleep.
Methods:
run() -> CrawlStats: Starts the scraping process based on the provided configuration. Returns CrawlStats containing the results.
markdown_webscraper.ScraperConfig
A dataclass representing the scraper configuration.
Attributes:
- raw_html_dir (Path): Directory to save raw HTML files.
- markdown_dir (Path): Directory to save converted Markdown files.
- wildcard_websites (list[str]): List of root URLs for recursive scraping.
- individual_websites (list[str]): List of specific URLs to scrape.
- remove_header_footer (bool): Whether to prune <header> and <footer> tags.
- markdown_convert (bool): Whether to convert HTML to Markdown.
- time_delay (float): Delay between requests in seconds.
- total_timeout (float): Maximum time in seconds for the entire scraping process.
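The time_delay and total_timeout attributes suggest a per-request pause plus an overall time budget for the crawl. A minimal stdlib sketch of how such a loop might behave (illustrative only; the crawl and fetch names here are hypothetical, not the library's implementation):

```python
import time


def crawl(urls, fetch, sleeper=time.sleep, time_delay=2.0, total_timeout=180.0):
    """Illustrative loop: pause time_delay seconds between requests,
    and stop fetching once total_timeout seconds have elapsed overall.
    Returns the number of URLs actually fetched."""
    start = time.monotonic()
    fetched = 0
    for url in urls:
        if time.monotonic() - start > total_timeout:
            break  # overall budget spent; abandon remaining URLs
        fetch(url)
        fetched += 1
        sleeper(time_delay)  # injectable, like WebsiteScraper's sleeper
    return fetched
```

Passing a no-op sleeper (as the WebsiteScraper constructor allows) makes such a loop run without real delays, which is useful in tests.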
markdown_webscraper.CrawlStats
A dataclass containing statistics from a completed crawl.
Attributes:
- pages_fetched (int): Total number of pages requested.
- html_files_saved (int): Total number of HTML files written to disk.
- markdown_files_saved (int): Total number of Markdown files written to disk.
Utilities
markdown_webscraper.load_config(config_path: str | Path) -> ScraperConfig
Loads a ScraperConfig from a JSON file.
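Conceptually, this reads the JSON file and coerces the directory fields into Path objects. A self-contained sketch of that idea using only the stdlib (ScraperConfigSketch and load_config_sketch are stand-in names; the real ScraperConfig and load_config live in markdown_webscraper and may differ):

```python
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass
class ScraperConfigSketch:
    """Stand-in mirroring the documented ScraperConfig fields."""
    raw_html_dir: Path
    markdown_dir: Path
    wildcard_websites: list
    individual_websites: list
    remove_header_footer: bool
    markdown_convert: bool
    time_delay: float
    total_timeout: float


def load_config_sketch(config_path):
    """Parse the JSON file and coerce directory strings into Paths."""
    data = json.loads(Path(config_path).read_text())
    data["raw_html_dir"] = Path(data["raw_html_dir"])
    data["markdown_dir"] = Path(data["markdown_dir"])
    return ScraperConfigSketch(**data)
```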
Usage Example
from pathlib import Path
from markdown_webscraper import WebsiteScraper, load_config
# Load configuration from a JSON file
config = load_config("config.json")
# Initialize and run the scraper
scraper = WebsiteScraper(config=config)
stats = scraper.run()
print(f"Scraped {stats.pages_fetched} pages.")
print(f"Saved {stats.markdown_files_saved} markdown files.")
Local Development
python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt
Run with local script:
python scrape.py --config config.json
Run as installed package CLI:
markdown-webscraper --config config.json
Configuration
The CLI expects a JSON config file:
{
"raw_html_dir": "/home/brosnan/markdown_webscraper/raw_html/",
"markdown_dir": "/home/brosnan/markdown_webscraper/markdown/",
"wildcard_websites": ["https://www.allaboutcircuits.com/textbook", ""],
"individual_websites": ["https://example.com/", "https://www.ti.com/lit/ds/sprs590g/sprs590g.pdf"],
"remove_header_footer": true,
"markdown_convert": true,
"time_delay": 2,
"total_timeout": 180
}
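Before running the CLI, a config file can be sanity-checked with a short stdlib snippet. This helper is not part of the package; the key names and expected types are taken from the example above:

```python
import json

# Expected keys and types, per the example config above.
REQUIRED_KEYS = {
    "raw_html_dir": str,
    "markdown_dir": str,
    "wildcard_websites": list,
    "individual_websites": list,
    "remove_header_footer": bool,
    "markdown_convert": bool,
    "time_delay": (int, float),
    "total_timeout": (int, float),
}


def check_config(data: dict) -> list:
    """Return a list of problems found in a parsed config dict
    (empty list means the config looks structurally valid)."""
    problems = []
    for key, expected in REQUIRED_KEYS.items():
        if key not in data:
            problems.append(f"missing key: {key}")
        elif not isinstance(data[key], expected):
            problems.append(f"wrong type for {key}")
    return problems
```

For example, `check_config(json.loads(Path("config.json").read_text()))` returning a non-empty list flags the config before the scraper ever starts.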
Tests
pytest tests/unit -q
Run the integration test against example.com:
RUN_INTEGRATION=1 pytest tests/integration/test_live_scrape.py::test_integration_example_com -m integration -q
Run the integration test against the allaboutcircuits textbook:
RUN_INTEGRATION=1 RUN_FULL_TEXTBOOK_INTEGRATION=1 pytest tests/integration/test_live_scrape.py::test_integration_allaboutcircuits_textbook_recursive -m integration -q
Build and Publish to PyPI
- Update version in pyproject.toml.
- Build distributions:
python -m pip install --upgrade build twine
python -m build
- Check artifacts:
python -m twine check dist/*
- Upload:
python -m twine upload dist/*