PyScrappy: robust, all-in-one Python web scraping toolkit

Python 3.9+ · MIT License

PyScrappy is a Python toolkit for web scraping that works out of the box. Point it at any URL and get structured data back — or use built-in scrapers for Wikipedia, IMDB, Yahoo Finance, news feeds, and more.

Key features

  • Generic scraper — give it any URL, get back structured text, links, images, tables, and metadata
  • Auto-pagination — automatically follows "next page" links
  • JS rendering — optional Playwright backend for JavaScript-heavy sites
  • Custom selectors — pass CSS selectors to extract exactly what you need
  • Built-in scrapers — Wikipedia, IMDB, Yahoo Finance, news (RSS), image search, Amazon, LinkedIn
  • Clean API — every scraper returns a ScrapeResult with .to_dataframe() and .to_json()
  • Retry & rate-limiting — built-in exponential backoff and per-domain rate limiting
  • Type-safe — full type hints, py.typed marker
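The auto-pagination feature can be pictured as a simple loop: fetch a page, collect its items, then follow the "next page" link until none remains. The sketch below is an illustration of that idea using only the standard library, not PyScrappy's internal code; `fetch_page` and the page dicts are hypothetical stand-ins for an HTTP fetch.

```python
from urllib.parse import urljoin

def scrape_all_pages(start_url, fetch_page, max_pages=10):
    """Follow 'next page' links, collecting items from every page.

    fetch_page(url) is a hypothetical callable returning a dict with
    'items' (a list) and 'next' (the next page's href, or None).
    """
    items, url, seen = [], start_url, set()
    while url and url not in seen and len(seen) < max_pages:
        seen.add(url)
        page = fetch_page(url)
        items.extend(page["items"])
        # Resolve a relative "next" href against the current page URL
        url = urljoin(url, page["next"]) if page.get("next") else None
    return items

# Demo with a canned two-page "site" instead of real HTTP requests
site = {
    "https://example.com/p1": {"items": [1, 2], "next": "/p2"},
    "https://example.com/p2": {"items": [3], "next": None},
}
print(scrape_all_pages("https://example.com/p1", site.__getitem__))  # [1, 2, 3]
```

The `seen` set and `max_pages` cap guard against pagination loops, which any next-link follower needs.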

Installation

pip install pyscrappy

Optional extras:

# Browser support (for JS-rendered pages)
pip install 'pyscrappy[browser]'
playwright install chromium

# DataFrame support
pip install 'pyscrappy[dataframe]'

# Everything
pip install 'pyscrappy[all]'

Quick start

Scrape any URL (one-liner)

from pyscrappy import scrape

result = scrape("https://en.wikipedia.org/wiki/Web_scraping")
print(result.data[0]["metadata"]["title"])
print(result.data[0]["text"]["word_count"])
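As a rough mental model of the `ScrapeResult` returned above: it wraps a list of record dicts in `.data` and can serialize them. This is a simplified, hypothetical stand-in (the real class also offers `.to_dataframe()`); the record values below are demo data, not real scrape output.

```python
import json
from dataclasses import dataclass, field

@dataclass
class ScrapeResult:
    """Simplified stand-in for PyScrappy's result container."""
    data: list = field(default_factory=list)

    def to_json(self, **kwargs):
        # Serialize the scraped records as a JSON array
        return json.dumps(self.data, **kwargs)

result = ScrapeResult(data=[{"metadata": {"title": "Web scraping"},
                             "text": {"word_count": 4213}}])
print(result.data[0]["metadata"]["title"])  # Web scraping
print(result.to_json())
```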

Custom CSS selectors

from pyscrappy import GenericScraper

with GenericScraper() as gs:
    result = gs.scrape(
        url="https://news.ycombinator.com",
        selectors={"title": ".titleline a", "score": ".score"},
    )
    for item in result.data:
        print(item["title"], item.get("score", ""))

Wikipedia

from pyscrappy import WikipediaScraper

with WikipediaScraper() as ws:
    result = ws.scrape(query="Python (programming language)", mode="summary")
    print(result.data[0]["text"])

Stock data

from pyscrappy import StockScraper

with StockScraper() as ss:
    result = ss.scrape(symbol="AAPL", mode="history", period="1mo")
    df = result.to_dataframe()
    print(df.head())

IMDB

from pyscrappy import IMDBScraper

with IMDBScraper() as scraper:
    result = scraper.scrape(genre="sci-fi", max_pages=2)
    df = result.to_dataframe()
    print(df[["title", "year", "rating"]])

News (RSS feeds)

from pyscrappy import NewsScraper

with NewsScraper() as ns:
    result = ns.scrape(feed_url="https://rss.nytimes.com/services/xml/rss/nyt/World.xml")
    for article in result.data[:5]:
        print(article["title"])
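Because RSS is plain XML, the essence of what an RSS scraper does can be shown with the standard library alone. This sketch parses item titles from an inline feed string; it is an illustration, not NewsScraper's implementation (which also handles Atom feeds and article extraction).

```python
import xml.etree.ElementTree as ET

feed = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example World News</title>
  <item><title>Headline one</title><link>https://example.com/1</link></item>
  <item><title>Headline two</title><link>https://example.com/2</link></item>
</channel></rss>"""

root = ET.fromstring(feed)
# Each <item> under <channel> is one article entry
articles = [
    {"title": item.findtext("title"), "link": item.findtext("link")}
    for item in root.iter("item")
]
for article in articles[:5]:
    print(article["title"])
```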

Image search

from pyscrappy import ImageSearchScraper

with ImageSearchScraper() as iss:
    result = iss.scrape(query="golden retriever", max_images=10, download_to="./dogs")

Configuration

from pyscrappy import ScraperConfig, GenericScraper

config = ScraperConfig(
    timeout=20.0,            # request timeout in seconds
    max_retries=3,           # retry failed requests
    rate_limit=2.0,          # seconds between requests per domain
    proxy="http://...",      # HTTP/SOCKS proxy
    headless=True,           # browser runs headless
    render_js="auto",        # auto-detect if JS rendering is needed
)

with GenericScraper(config) as gs:
    result = gs.scrape(url="https://example.com")
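The `max_retries` and `rate_limit` settings above can be pictured as follows: exponential backoff doubles a base delay on each retry, and the rate limiter keys on the URL's host. This is a minimal sketch under those assumptions, not PyScrappy's actual internals; in real use the caller would sleep for the returned delay before sending the request.

```python
from urllib.parse import urlsplit

def backoff_delays(max_retries, base=1.0):
    # Exponential backoff schedule: base, 2*base, 4*base, ...
    return [base * (2 ** attempt) for attempt in range(max_retries)]

class DomainRateLimiter:
    """Track the last request time per host and report how long to wait."""
    def __init__(self, rate_limit):
        self.rate_limit = rate_limit   # minimum seconds between requests per domain
        self.last_seen = {}            # host -> timestamp of last request

    def delay_for(self, url, now):
        host = urlsplit(url).netloc
        elapsed = now - self.last_seen.get(host, float("-inf"))
        self.last_seen[host] = now
        return max(0.0, self.rate_limit - elapsed)

limiter = DomainRateLimiter(rate_limit=2.0)
print(backoff_delays(3))                                    # [1.0, 2.0, 4.0]
print(limiter.delay_for("https://example.com/a", now=0.0))  # 0.0
print(limiter.delay_for("https://example.com/b", now=0.5))  # 1.5 (same host, too soon)
print(limiter.delay_for("https://other.org/x", now=0.5))    # 0.0 (different host)
```

Keying on `urlsplit(url).netloc` is what makes the limit per-domain: requests to different hosts never block each other.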

YouTube

from pyscrappy import YouTubeScraper

with YouTubeScraper() as scraper:
    result = scraper.scrape(query="python tutorial", max_results=10)
    for video in result.data:
        print(video["title"], video.get("views", ""))

SoundCloud

from pyscrappy import SoundCloudScraper

with SoundCloudScraper() as scraper:
    result = scraper.scrape(query="lo-fi beats", max_results=10)

E-Commerce (Alibaba, Flipkart, Snapdeal)

from pyscrappy import AlibabaScraper, FlipkartScraper, SnapdealScraper

with FlipkartScraper() as scraper:
    result = scraper.scrape(query="laptop", max_pages=2)
    df = result.to_dataframe()

Food Delivery (Swiggy, Zomato)

from pyscrappy import SwiggyScraper, ZomatoScraper

# These are JS-heavy — use render_js=True for best results
with SwiggyScraper() as scraper:
    result = scraper.scrape(city="bangalore", render_js=True)

Built-in scrapers

Scraper               What it does                                Needs browser?
--------------------  ------------------------------------------  --------------
GenericScraper        Scrape any URL with auto-extraction         Optional

Data / Research
WikipediaScraper      Articles, sections, infoboxes               No
IMDBScraper           Movies by genre, search, charts             No
StockScraper          Quotes, history, profiles (Yahoo Finance)   No
NewsScraper           RSS/Atom feeds, article extraction          No
ImageSearchScraper    Image search + download                     No
LinkedInJobsScraper   Public job listings                         No

E-Commerce
AmazonScraper         Product search                              No
AlibabaScraper        Product search                              No
FlipkartScraper       Product search                              No
SnapdealScraper       Product search                              No

Social Media
YouTubeScraper        Video search, channel scraping              Optional
InstagramScraper      Profiles, hashtag posts                     Recommended
TwitterScraper        Tweet search                                Recommended

Music
SpotifyScraper        Track/playlist search                       Recommended
SoundCloudScraper     Track search                                Optional

Food Delivery
SwiggyScraper         Restaurant listings                         Recommended
ZomatoScraper         Restaurant listings                         Recommended

Dependencies

Required: httpx, beautifulsoup4, lxml

Optional: playwright (JS rendering), pandas (DataFrames)

License

MIT

Contributing

All contributions welcome. See Issues.

This package is for educational and research purposes.
