
A robust, all-in-one Python web scraping toolkit

Project description


PyScrappy: robust, all-in-one Python web scraping toolkit

Python 3.9+ · PyPI Latest Release · License: MIT

PyScrappy is a Python toolkit for web scraping that works out of the box. Point it at any URL and get structured data back — or use built-in scrapers for Wikipedia, IMDB, Yahoo Finance, news feeds, and more.

Key features

  • Generic scraper — give it any URL, get back structured text, links, images, tables, and metadata
  • Auto-pagination — automatically follows "next page" links
  • JS rendering — optional Playwright backend for JavaScript-heavy sites
  • Custom selectors — pass CSS selectors to extract exactly what you need
  • Built-in scrapers — Wikipedia, IMDB, Yahoo Finance, news (RSS), image search, Amazon, LinkedIn
  • Clean API — every scraper returns a ScrapeResult with .to_dataframe() and .to_json()
  • Retry & rate-limiting — built-in exponential backoff and per-domain rate limiting
  • Type-safe — full type hints, py.typed marker
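
The retry behavior mentioned above can be sketched in plain Python. This is a minimal illustration of exponential backoff, not PyScrappy's actual implementation; the `backoff_delays` and `fetch_with_retry` helpers and their parameters are hypothetical.

```python
import time

def backoff_delays(max_retries: int, base: float = 0.5, cap: float = 30.0) -> list[float]:
    """Delay before each retry: base * 2**attempt, capped at `cap` seconds."""
    return [min(base * 2 ** attempt, cap) for attempt in range(max_retries)]

def fetch_with_retry(fetch, url: str, max_retries: int = 3):
    """Call fetch(url); on failure, sleep with a growing delay and retry."""
    last_exc = None
    for delay in [0.0] + backoff_delays(max_retries):
        if delay:
            time.sleep(delay)
        try:
            return fetch(url)
        except Exception as exc:  # real code would catch narrower network errors
            last_exc = exc
    raise last_exc

print(backoff_delays(3))  # [0.5, 1.0, 2.0]
```

The initial attempt costs no delay; only retries back off, so a request that succeeds first time pays no penalty.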

Installation

pip install pyscrappy

Optional extras:

# Browser support (for JS-rendered pages)
pip install 'pyscrappy[browser]'
playwright install chromium

# DataFrame support
pip install 'pyscrappy[dataframe]'

# Everything
pip install 'pyscrappy[all]'

Quick start

Scrape any URL (one-liner)

from pyscrappy import scrape

result = scrape("https://en.wikipedia.org/wiki/Web_scraping")
print(result.data[0]["metadata"]["title"])
print(result.data[0]["text"]["word_count"])

Custom CSS selectors

from pyscrappy import GenericScraper

with GenericScraper() as gs:
    result = gs.scrape(
        url="https://news.ycombinator.com",
        selectors={"title": ".titleline a", "score": ".score"},
    )
    for item in result.data:
        print(item["title"], item.get("score", ""))

Wikipedia

from pyscrappy import WikipediaScraper

with WikipediaScraper() as ws:
    result = ws.scrape(query="Python (programming language)", mode="summary")
    print(result.data[0]["text"])

Stock data

from pyscrappy import StockScraper

with StockScraper() as ss:
    result = ss.scrape(symbol="AAPL", mode="history", period="1mo")
    df = result.to_dataframe()
    print(df.head())

IMDB

from pyscrappy import IMDBScraper

with IMDBScraper() as scraper:
    result = scraper.scrape(genre="sci-fi", max_pages=2)
    df = result.to_dataframe()
    print(df[["title", "year", "rating"]])

News (RSS feeds)

from pyscrappy import NewsScraper

with NewsScraper() as ns:
    result = ns.scrape(feed_url="https://rss.nytimes.com/services/xml/rss/nyt/World.xml")
    for article in result.data[:5]:
        print(article["title"])

Image search

from pyscrappy import ImageSearchScraper

with ImageSearchScraper() as iss:
    result = iss.scrape(query="golden retriever", max_images=10, download_to="./dogs")

Configuration

from pyscrappy import ScraperConfig, GenericScraper

config = ScraperConfig(
    timeout=20.0,            # request timeout in seconds
    max_retries=3,           # retry failed requests
    rate_limit=2.0,          # seconds between requests per domain
    proxy="http://...",      # HTTP/SOCKS proxy
    headless=True,           # browser runs headless
    render_js="auto",        # auto-detect if JS rendering is needed
)

with GenericScraper(config) as gs:
    result = gs.scrape(url="https://example.com")
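
One plausible way `render_js="auto"` could decide whether a page needs a browser is to inspect the fetched HTML: pages that ship almost no visible text but contain a single-page-app mount point are likely JS-rendered. The heuristic below is an assumption for illustration, not the library's documented logic; the marker list and threshold are made up.

```python
import re

SPA_MARKERS = ('id="root"', 'id="app"', 'id="__next__"', 'id="__next"')

def looks_js_rendered(html: str, min_text_chars: int = 200) -> bool:
    """Heuristic: very little visible text plus a common SPA mount point."""
    # Drop script/style blocks, then all remaining tags, to estimate visible text.
    text = re.sub(r"<(script|style)[^>]*>.*?</\1>", "", html, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", text)
    visible = " ".join(text.split())
    has_marker = any(marker in html for marker in SPA_MARKERS)
    return has_marker and len(visible) < min_text_chars
```

A static article page with real body text would fail both conditions and be fetched with plain HTTP; an empty `<div id="root">` shell would trigger the browser backend.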

YouTube

from pyscrappy import YouTubeScraper

with YouTubeScraper() as scraper:
    result = scraper.scrape(query="python tutorial", max_results=10)
    for video in result.data:
        print(video["title"], video.get("views", ""))

SoundCloud

from pyscrappy import SoundCloudScraper

with SoundCloudScraper() as scraper:
    result = scraper.scrape(query="lo-fi beats", max_results=10)

E-Commerce (Alibaba, Flipkart, Snapdeal)

from pyscrappy import AlibabaScraper, FlipkartScraper, SnapdealScraper

with FlipkartScraper() as scraper:
    result = scraper.scrape(query="laptop", max_pages=2)
    df = result.to_dataframe()

Food Delivery (Swiggy, Zomato)

from pyscrappy import SwiggyScraper, ZomatoScraper

# These are JS-heavy — use render_js=True for best results
with SwiggyScraper() as scraper:
    result = scraper.scrape(city="bangalore", render_js=True)

Built-in scrapers

Scraper              What it does                               Needs browser?
-------------------  -----------------------------------------  --------------
GenericScraper       Scrape any URL with auto-extraction        Optional

Data / Research
WikipediaScraper     Articles, sections, infoboxes              No
IMDBScraper          Movies by genre, search, charts            No
StockScraper         Quotes, history, profiles (Yahoo Finance)  No
NewsScraper          RSS/Atom feeds, article extraction         No
ImageSearchScraper   Image search + download                    No
LinkedInJobsScraper  Public job listings                        No

E-Commerce
AmazonScraper        Product search                             No
AlibabaScraper       Product search                             No
FlipkartScraper      Product search                             No
SnapdealScraper      Product search                             No

Social Media
YouTubeScraper       Video search, channel scraping             Optional
InstagramScraper     Profiles, hashtag posts                    Recommended
TwitterScraper       Tweet search                               Recommended

Music
SpotifyScraper       Track/playlist search                      Recommended
SoundCloudScraper    Track search                               Optional

Food Delivery
SwiggyScraper        Restaurant listings                        Recommended
ZomatoScraper        Restaurant listings                        Recommended

Dependencies

Required: httpx, beautifulsoup4, lxml

Optional: playwright (JS rendering), pandas (DataFrames)

License

MIT

Contributing

All contributions welcome. See Issues.

This package is for educational and research purposes.

Project details


Download files

Download the file for your platform.

Source Distribution

pyscrappy-1.0.2.tar.gz (86.1 kB)


Built Distribution


pyscrappy-1.0.2-py3-none-any.whl (55.2 kB)


File details

Details for the file pyscrappy-1.0.2.tar.gz.

File metadata

  • Download URL: pyscrappy-1.0.2.tar.gz
  • Upload date:
  • Size: 86.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for pyscrappy-1.0.2.tar.gz

Algorithm    Hash digest
SHA256       c7e1d7698bf7540ad166c44ac6fea9ce4feffaf436039f66df68a1bdbadd8aab
MD5          baa368ab7f3893d576af105bc2261d5b
BLAKE2b-256  784fcd78544b0b847cccb25d1e43fb05020b0bb5fa47560294e93b06635ebd0e


File details

Details for the file pyscrappy-1.0.2-py3-none-any.whl.

File metadata

  • Download URL: pyscrappy-1.0.2-py3-none-any.whl
  • Upload date:
  • Size: 55.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for pyscrappy-1.0.2-py3-none-any.whl

Algorithm    Hash digest
SHA256       9d7beb60775e423bd7221ec7d2648f6c45f084728bcc6518248200cb67c41a11
MD5          dcd0382b4443e3a39706a341adba1d4f
BLAKE2b-256  dd20d41e1f9bbde03729019099e1c644702028b222b5efdbd60140898ea9568d

