
Webscraperr

This Python library streamlines the common web-scraping workflow, particularly for e-commerce websites. It provides a structured framework in which you define your own logic for gathering product URLs, parsing individual product pages, and selecting the next page. Scraped URLs and product information are saved directly to a database; SQLite and MySQL are supported.

Installation

Install webscraperr with pip:

    pip install webscraperr

Usage

The scraper's configuration is stored in a config dictionary, which must be prepared, modified, and validated before being passed to the scraper:

    from webscraperr.config import get_default_config, validate_config, DBTypes

    config = get_default_config()
    config['DATABASE']['TYPE'] = DBTypes.SQLITE
    config['DATABASE']['DATABASE'] = 'mydatabase.db'
    config['DATABASE']['TABLE'] = 'products'  # If TABLE is not set, "items" will be the default table name
    config['SCRAPER']['REQUEST_DELAY'] = 1.6

    validate_config(config)  # Will raise an error if the config is not properly set
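
MySQL is also supported. The variant below is only a hypothetical sketch: DBTypes.MYSQL and the HOST, USER, and PASSWORD keys are guesses extrapolated from the SQLite example, not a documented API, so check the library's source for the exact names.

    # Hypothetical MySQL setup. DBTypes.MYSQL and the HOST/USER/PASSWORD
    # keys are assumptions modeled on the SQLite example, not confirmed API;
    # verify against the library's source before using.
    config = get_default_config()
    config['DATABASE']['TYPE'] = DBTypes.MYSQL
    config['DATABASE']['HOST'] = 'localhost'
    config['DATABASE']['USER'] = 'scraper'
    config['DATABASE']['PASSWORD'] = 'secret'
    config['DATABASE']['DATABASE'] = 'mydatabase'
    config['DATABASE']['TABLE'] = 'products'

    validate_config(config)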

After preparing and validating the config, initialize the database:

    from webscraperr.db import init_sqlite

    # This will create the database and the table
    init_sqlite(config['DATABASE'])
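
As a quick sanity check (using only the standard library), you can confirm the table was created:

    import sqlite3

    # Confirm init_sqlite created the configured table.
    with sqlite3.connect('mydatabase.db') as conn:
        tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
        print(tables)  # should include ('products',)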

For this example we are going to use WebScraperRequest, which uses the requests library for HTTP requests. You will need to define the functions for parsing the HTML. There is also WebScraperChrome, which uses selenium-wire and undetected-chromedriver (a hedged sketch of it appears at the end of this section).

    from webscraperr import WebScraperRequest
    from urllib.parse import urljoin
    import parsel

    BASE_URL = "https://webscraper.io"  # used to resolve relative links into absolute URLs

    urls = ["https://webscraper.io/test-sites/e-commerce/static/computers/tablets"]

    # `get_next_page_func` must return a URL or None. Returning None means there is no next page.
    def get_next_page_func(response):
        selector = parsel.Selector(text=response.text)  # in this example `parsel` is used for parsing the HTML
        next_page_url = selector.css('a[rel="next"]::attr(href)').get()
        if next_page_url is not None:
            return urljoin(BASE_URL, next_page_url)
        return None

    # `parse_info_func` must return a `dict`.
    def parse_info_func(response):
        selector = parsel.Selector(text=response.text)
        info = {
            'name': selector.css(".caption h4:nth-child(2)::text").get(),
            'price': selector.css(".caption .price::text").get()
        }
        return info

    with WebScraperRequest(config) as scraper:
        scraper.get_items_urls_func = lambda selector: [urljoin(BASE_URL, url) for url in selector.css(".thumbnail a::attr(href)").getall()]
        scraper.get_next_page_func = get_next_page_func
        scraper.parse_info_func = parse_info_func

        scraper.scrape_items_urls(urls)  # Start scraping the product URLs and save them to the database

        scraper.scrape_items_infos()  # Visit each product page and parse its HTML
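
Once scraping finishes, the results are in the configured SQLite database. The exact schema is created by init_sqlite and is not documented here, so the following sketch simply dumps every row from the configured table using the standard library:

    import sqlite3

    # Dump whatever rows the scraper saved; the column layout is
    # whatever init_sqlite created, so rows are printed as raw tuples.
    with sqlite3.connect('mydatabase.db') as conn:
        for row in conn.execute("SELECT * FROM products"):
            print(row)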

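If you need a real browser (for example, for JavaScript-rendered pages), WebScraperChrome is meant to take the place of WebScraperRequest. The sketch below assumes it accepts the same config and callback attributes as WebScraperRequest and that its responses expose .text the same way; none of that is confirmed by this description, so verify it against the library first.

    from webscraperr import WebScraperChrome

    # Assumption: WebScraperChrome shares WebScraperRequest's interface
    # (same config dict and callback attributes). Verify before use.
    with WebScraperChrome(config) as scraper:
        scraper.get_items_urls_func = lambda selector: [urljoin(BASE_URL, url) for url in selector.css(".thumbnail a::attr(href)").getall()]
        scraper.get_next_page_func = get_next_page_func
        scraper.parse_info_func = parse_info_func

        scraper.scrape_items_urls(urls)
        scraper.scrape_items_infos()
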
Development Status

Please note that this library is still under development and may be subject to change. I am constantly working on improving its functionality, flexibility, and performance. Your patience, feedback, and contributions are much appreciated.
