
philiprehberger-web-scraper


Lightweight web scraper with rate limiting and CSS selectors.

Installation

pip install philiprehberger-web-scraper

Usage

from philiprehberger_web_scraper import Scraper

scraper = Scraper(rate_limit=2.0, retry_attempts=3)

# Fetch a single page
page = scraper.get("https://example.com")
titles = page.select_all("h2.title")
link = page.select_one("a.next")
all_links = page.links()

# Extract data
for el in page.select_all(".product"):
    print(el.select_one(".name").text)
    print(el.select_one("a").attr("href"))

# Crawl multiple pages
for page in scraper.crawl("https://example.com/blog", max_pages=20):
    for article in page.select_all("article"):
        print(article.select_one("h2").text)

# Export (collect the extracted rows first; `data` is a list of dicts)
data = [
    {"name": el.select_one(".name").text, "link": el.select_one("a").attr("href")}
    for el in page.select_all(".product")
]
Scraper.export_csv(data, "output.csv")
Scraper.export_json(data, "output.json")

Features

  • Built-in rate limiting (token bucket)
  • Retry with backoff on 429/5xx errors
  • CSS selector API wrapping BeautifulSoup
  • Crawl mode with same-domain filtering
  • Link and image extraction
  • CSV and JSON export helpers

Options

Scraper(
    rate_limit=2.0,        # max requests/second
    retry_attempts=3,      # retries on failure
    retry_delay=1.0,       # base delay between retries
    timeout=30.0,          # request timeout
    headers={...},         # custom headers
)
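
Since `retry_delay` is described as a "base delay", retries presumably back off exponentially from it. The schedule below is an assumption for illustration (delay doubling per attempt); the library's actual backoff curve may differ:

```python
def backoff_delays(retry_delay: float = 1.0, retry_attempts: int = 3) -> list[float]:
    """Hypothetical exponential backoff: retry_delay, 2x, 4x, ... per attempt."""
    return [retry_delay * (2 ** i) for i in range(retry_attempts)]
```

For the defaults above (`retry_delay=1.0`, `retry_attempts=3`), this schedule would wait 1, 2, then 4 seconds between attempts on a 429 or 5xx response.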

API

Scraper(rate_limit, retry_attempts, retry_delay, timeout, headers)
    Web scraper with rate limiting, retry, and CSS selector extraction.

Page
    A fetched web page with select_one(), select_all(), links(), images(), and title/text properties.

Element
    Wrapper around a parsed element with text, html, attr(), select_one(), select_all().

Development

pip install -e .
python -m pytest tests/ -v

License

MIT
