
philiprehberger-web-scraper


Lightweight web scraper with rate limiting and CSS selectors.

Installation

pip install philiprehberger-web-scraper

Usage

from philiprehberger_web_scraper import Scraper

scraper = Scraper(rate_limit=2.0, retry_attempts=3)

# Fetch a single page
page = scraper.get("https://example.com")
titles = page.select_all("h2.title")
link = page.select_one("a.next")
all_links = page.links()

# Extract data into rows for export
data = []
for el in page.select_all(".product"):
    data.append({
        "name": el.select_one(".name").text,
        "url": el.select_one("a").attr("href"),
    })

# Crawl multiple pages
for page in scraper.crawl("https://example.com/blog", max_pages=20):
    for article in page.select_all("article"):
        print(article.select_one("h2").text)

# Export
Scraper.export_csv(data, "output.csv")
Scraper.export_json(data, "output.json")
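
Crawl mode stays on the starting site by resolving each link and keeping only same-host URLs (see the same-domain filtering feature below). A minimal stdlib-only sketch of that check, as an illustration of the idea rather than the package's actual internals:

```python
from urllib.parse import urljoin, urlparse

def same_domain_links(base_url, hrefs):
    """Resolve each href against base_url; yield unique same-host absolute URLs."""
    base_host = urlparse(base_url).netloc
    seen = set()
    for href in hrefs:
        absolute = urljoin(base_url, href)
        if urlparse(absolute).netloc == base_host and absolute not in seen:
            seen.add(absolute)
            yield absolute
```

Relative links like `/post/1` resolve against the base URL, while off-site links and duplicates are dropped.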

Features

  • Built-in rate limiting (token bucket)
  • Retry with backoff on 429/5xx errors
  • CSS selector API wrapping BeautifulSoup
  • Crawl mode with same-domain filtering
  • Link and image extraction
  • CSV and JSON export helpers
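
The rate limiter above is a token bucket: requests spend tokens that refill at a fixed rate, so short bursts are allowed while the long-run rate is capped. A minimal sketch of the algorithm (an illustration, not this library's actual implementation):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` tokens, refilling at `rate` tokens/second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self) -> None:
        """Block until one token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            # Sleep just long enough for the missing fraction of a token.
            time.sleep((1 - self.tokens) / self.rate)
```

With `rate_limit=2.0`, a bucket like this would let two queued requests go out immediately, then space further requests about 0.5 s apart.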

Options

Scraper(
    rate_limit=2.0,        # max requests/second
    retry_attempts=3,      # retries on failure
    retry_delay=1.0,       # base delay between retries
    timeout=30.0,          # request timeout
    headers={...},         # custom headers
)
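
The interaction of `retry_attempts` and `retry_delay` is described only as "retry with backoff" above, so the exact schedule is an assumption. A common reading is exponential backoff seeded by the base delay; a hedged sketch of that semantics (the names `with_retries` and `fetch` are hypothetical, not part of this package's API):

```python
import time

def with_retries(fetch, retry_attempts=3, retry_delay=1.0, sleep=time.sleep):
    """Call fetch(); on failure, wait retry_delay * 2**attempt and try again."""
    last_error = None
    for attempt in range(retry_attempts + 1):
        try:
            return fetch()
        except OSError as err:  # network-style failures
            last_error = err
            if attempt < retry_attempts:
                sleep(retry_delay * (2 ** attempt))
    raise last_error
```

Under these assumed semantics, the defaults (`retry_attempts=3`, `retry_delay=1.0`) would wait 1 s, 2 s, then 4 s before giving up.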

API

Scraper(rate_limit, retry_attempts, retry_delay, timeout, headers)
    Web scraper with rate limiting, retry, and CSS-selector extraction.

Page
    A fetched web page with select_one(), select_all(), links(), images(), and title/text properties.

Element
    Wrapper around a parsed element with text, html, attr(), select_one(), and select_all().

Development

pip install -e .
python -m pytest tests/ -v

Support

If you find this project useful:

  • ⭐ Star the repo
  • 🐛 Report issues
  • 💡 Suggest features
  • ❤️ Sponsor development
  • 🌐 All Open Source Projects
  • 💻 GitHub Profile
  • 🔗 LinkedIn Profile

License

MIT
