PyScrappy: robust, all-in-one Python web scraping toolkit

Python 3.9+ · MIT License

PyScrappy is a Python toolkit for web scraping that works out of the box. Point it at any URL and get structured data back — or use built-in scrapers for Wikipedia, IMDB, Yahoo Finance, news feeds, and more.

Key features

  • Generic scraper — give it any URL, get back structured text, links, images, tables, and metadata
  • Auto-pagination — follows "next page" links across result pages
  • JS rendering — optional Playwright backend for JavaScript-heavy sites
  • Custom selectors — pass CSS selectors to extract exactly what you need
  • Built-in scrapers — Wikipedia, IMDB, Yahoo Finance, news (RSS), image search, Amazon, LinkedIn
  • Clean API — every scraper returns a ScrapeResult with .to_dataframe() and .to_json()
  • Retry & rate-limiting — built-in exponential backoff and per-domain rate limiting
  • Type-safe — full type hints, py.typed marker
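The bullets above say every scraper returns a `ScrapeResult` with `.to_dataframe()` and `.to_json()`. As a rough sketch of what such a result container could look like (a hypothetical reimplementation for illustration, not PyScrappy's actual class):

```python
import json
from dataclasses import dataclass, field


@dataclass
class ScrapeResult:
    """Illustrative result container: a list of record dicts
    plus the serialization helpers the feature list describes."""
    data: list = field(default_factory=list)
    url: str = ""

    def to_json(self, indent: int = 2) -> str:
        # Serialize the scraped records to a JSON string.
        return json.dumps(self.data, indent=indent)

    def to_dataframe(self):
        # pandas is optional, mirroring the 'dataframe' extra.
        import pandas as pd
        return pd.DataFrame(self.data)


result = ScrapeResult(
    data=[{"title": "Web scraping", "word_count": 3542}],
    url="https://en.wikipedia.org/wiki/Web_scraping",
)
print(result.to_json())
```

Keeping `data` as plain dicts means both helpers are thin wrappers, which is why the same result object can feed either JSON pipelines or pandas.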

Installation

pip install pyscrappy

Optional extras:

# Browser support (for JS-rendered pages)
pip install 'pyscrappy[browser]'
playwright install chromium

# DataFrame support
pip install 'pyscrappy[dataframe]'

# Everything
pip install 'pyscrappy[all]'

Quick start

Scrape any URL (one-liner)

from pyscrappy import scrape

result = scrape("https://en.wikipedia.org/wiki/Web_scraping")
print(result.data[0]["metadata"]["title"])
print(result.data[0]["text"]["word_count"])

Custom CSS selectors

from pyscrappy import GenericScraper

with GenericScraper() as gs:
    result = gs.scrape(
        url="https://news.ycombinator.com",
        selectors={"title": ".titleline a", "score": ".score"},
    )
    for item in result.data:
        print(item["title"], item.get("score", ""))
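The `item.get("score", "")` above hints that rows can be missing a key when one selector matches fewer elements than another. Conceptually, the per-selector match lists get zipped into row dicts; a toy illustration of that step (not the library's code, and `rows_from_matches` is a made-up name):

```python
def rows_from_matches(matches: dict[str, list[str]]) -> list[dict[str, str]]:
    """Zip per-selector match lists into row dicts.

    Shorter lists simply stop contributing keys, so callers use
    dict.get() for fields that may be absent on some rows.
    """
    n = max((len(v) for v in matches.values()), default=0)
    return [
        {key: vals[i] for key, vals in matches.items() if i < len(vals)}
        for i in range(n)
    ]


rows = rows_from_matches({
    "title": ["Post A", "Post B"],
    "score": ["120 points"],  # fewer matches than titles
})
print(rows)  # → [{'title': 'Post A', 'score': '120 points'}, {'title': 'Post B'}]
```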

Wikipedia

from pyscrappy import WikipediaScraper

with WikipediaScraper() as ws:
    result = ws.scrape(query="Python (programming language)", mode="summary")
    print(result.data[0]["text"])

Stock data

from pyscrappy import StockScraper

with StockScraper() as ss:
    result = ss.scrape(symbol="AAPL", mode="history", period="1mo")
    df = result.to_dataframe()
    print(df.head())

IMDB

from pyscrappy import IMDBScraper

with IMDBScraper() as scraper:
    result = scraper.scrape(genre="sci-fi", max_pages=2)
    df = result.to_dataframe()
    print(df[["title", "year", "rating"]])

News (RSS feeds)

from pyscrappy import NewsScraper

with NewsScraper() as ns:
    result = ns.scrape(feed_url="https://rss.nytimes.com/services/xml/rss/nyt/World.xml")
    for article in result.data[:5]:
        print(article["title"])

Image search

from pyscrappy import ImageSearchScraper

with ImageSearchScraper() as iss:
    result = iss.scrape(query="golden retriever", max_images=10, download_to="./dogs")

Configuration

from pyscrappy import ScraperConfig, GenericScraper

config = ScraperConfig(
    timeout=20.0,            # request timeout in seconds
    max_retries=3,           # retry failed requests
    rate_limit=2.0,          # seconds between requests per domain
    proxy="http://...",      # HTTP/SOCKS proxy
    headless=True,           # browser runs headless
    render_js="auto",        # auto-detect if JS rendering is needed
)

with GenericScraper(config) as gs:
    result = gs.scrape(url="https://example.com")
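The `max_retries` and `rate_limit` knobs correspond to two standard techniques: exponential backoff between retries, and a minimum interval between requests to the same domain. A self-contained sketch of both (illustrative only, not PyScrappy's internals; the names `backoff_delays` and `DomainRateLimiter` are invented for this example):

```python
import time
from urllib.parse import urlparse


def backoff_delays(max_retries: int, base: float = 0.5) -> list[float]:
    """Exponential backoff: one doubling delay per retry attempt."""
    return [base * (2 ** attempt) for attempt in range(max_retries)]


class DomainRateLimiter:
    """Enforce a minimum interval between requests to the same domain."""

    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last_hit: dict[str, float] = {}

    def wait(self, url: str) -> float:
        """Sleep until the domain's interval has elapsed; return the pause."""
        domain = urlparse(url).netloc
        elapsed = time.monotonic() - self._last_hit.get(domain, float("-inf"))
        pause = max(0.0, self.min_interval - elapsed)
        if pause:
            time.sleep(pause)
        self._last_hit[domain] = time.monotonic()
        return pause


print(backoff_delays(3))  # → [0.5, 1.0, 2.0]
limiter = DomainRateLimiter(min_interval=0.1)
limiter.wait("https://example.com/a")          # first hit: no wait
slept = limiter.wait("https://example.com/b")  # same domain: throttled
```

Tracking the last-hit time per `netloc` is what makes the limit per-domain: requests to different hosts never block each other.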

YouTube

from pyscrappy import YouTubeScraper

with YouTubeScraper() as scraper:
    result = scraper.scrape(query="python tutorial", max_results=10)
    for video in result.data:
        print(video["title"], video.get("views", ""))

SoundCloud

from pyscrappy import SoundCloudScraper

with SoundCloudScraper() as scraper:
    result = scraper.scrape(query="lo-fi beats", max_results=10)

E-Commerce (Alibaba, Flipkart, Snapdeal)

from pyscrappy import AlibabaScraper, FlipkartScraper, SnapdealScraper

with FlipkartScraper() as scraper:
    result = scraper.scrape(query="laptop", max_pages=2)
    df = result.to_dataframe()

Food Delivery (Swiggy, Zomato)

from pyscrappy import SwiggyScraper, ZomatoScraper

# These are JS-heavy — use render_js=True for best results
with SwiggyScraper() as scraper:
    result = scraper.scrape(city="bangalore", render_js=True)

Built-in scrapers

| Scraper | What it does | Needs browser? |
|---|---|---|
| GenericScraper | Scrape any URL with auto-extraction | Optional |
| **Data / Research** | | |
| WikipediaScraper | Articles, sections, infoboxes | No |
| IMDBScraper | Movies by genre, search, charts | No |
| StockScraper | Quotes, history, profiles (Yahoo Finance) | No |
| NewsScraper | RSS/Atom feeds, article extraction | No |
| ImageSearchScraper | Image search + download | No |
| LinkedInJobsScraper | Public job listings | No |
| **E-Commerce** | | |
| AmazonScraper | Product search | No |
| AlibabaScraper | Product search | No |
| FlipkartScraper | Product search | No |
| SnapdealScraper | Product search | No |
| **Social Media** | | |
| YouTubeScraper | Video search, channel scraping | Optional |
| InstagramScraper | Profiles, hashtag posts | Recommended |
| TwitterScraper | Tweet search | Recommended |
| **Music** | | |
| SpotifyScraper | Track/playlist search | Recommended |
| SoundCloudScraper | Track search | Optional |
| **Food Delivery** | | |
| SwiggyScraper | Restaurant listings | Recommended |
| ZomatoScraper | Restaurant listings | Recommended |

Dependencies

Required: httpx, beautifulsoup4, lxml

Optional: playwright (JS rendering), pandas (DataFrames)

License

MIT

Contributing

All contributions are welcome; see the issue tracker.

This package is for educational and research purposes.
