Project description

scrapy-pydoll

A Scrapy download handler that integrates pydoll-python, letting your spiders fetch JavaScript-rendered pages through Chrome.

Installation

pip install scrapy-pydoll

Requirements

  • Python >= 3.12
  • scrapy >= 2.12.0
  • pydoll-python >= 1.3.2

Usage

  1. Configure the download handler in your Scrapy settings:
# Route both HTTP and HTTPS downloads through the Pydoll handler
DOWNLOAD_HANDLERS = {
    "http": "scrapy_pydoll.handler.PydollDownloadHandler",
    "https": "scrapy_pydoll.handler.PydollDownloadHandler",
}

# Required for async support
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
  2. Enable Pydoll for specific requests by setting "pydoll": True in the request meta (a sketch of how page methods are dispatched follows the example):
import scrapy
from scrapy_pydoll.page import PageMethod
from pydoll.constants import By

class MySpider(scrapy.Spider):
    name = "myspider"
    
    def start_requests(self):
        url = "https://example.com"
        yield scrapy.Request(
            url,
            meta={
                "pydoll": True,
                "pydoll_page_methods": [
                    PageMethod("wait_element", By.XPATH, "//div[@class='content']"),
                ]
            }
        )
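
Each entry in pydoll_page_methods names a coroutine on the pydoll page object plus the arguments to pass it (here, wait_element with an XPath locator); the handler awaits them in order after navigation, before the response reaches your spider. A minimal sketch of that dispatch pattern, assuming PageMethod exposes method, args, and kwargs attributes (an assumption about its internals, not documented API):

async def apply_page_methods(page, request):
    # Illustrative only: await each requested page method in order.
    # The attribute names (method, args, kwargs, result) are assumed,
    # mirroring the common PageMethod pattern; they are not
    # scrapy-pydoll's confirmed internals.
    for pm in request.meta.get("pydoll_page_methods", []):
        bound = getattr(page, pm.method)  # e.g. page.wait_element
        pm.result = await bound(*pm.args, **pm.kwargs)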

Configuration

The following options can be configured in your Scrapy settings:

  • PYDOLL_HEADLESS (bool): Run Chrome in headless mode (default: True)
  • PYDOLL_PROXY (str): Proxy server URL (default: None)
  • PYDOLL_MAX_PAGES (int): Maximum number of concurrent browser pages (default: 4)
  • PYDOLL_NAVIGATION_TIMEOUT (int): Page navigation timeout in seconds (default: 60)
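
For example, a settings.py that runs a visible browser through a proxy with tighter limits could look like this (the proxy URL and values are illustrative):

# settings.py: illustrative values for the options listed above
PYDOLL_HEADLESS = False                         # show the Chrome window
PYDOLL_PROXY = "http://proxy.example.com:8080"  # hypothetical proxy URL
PYDOLL_MAX_PAGES = 2                            # at most two pages at once
PYDOLL_NAVIGATION_TIMEOUT = 30                  # give up navigation after 30 s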

Features

  • Handles JavaScript-rendered pages using Chrome DevTools Protocol
  • Supports custom page methods via PageMethod class
  • Configurable concurrent page limits (sketched below)
  • Proxy support
  • Detailed logging and statistics
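
The page limit is the kind of cap usually enforced with an asyncio.Semaphore around page checkout. A sketch of that pattern under stated assumptions (browser.get_page and page.go_to stand in for pydoll calls; this is not the handler's actual code):

import asyncio

# Sketch of a PYDOLL_MAX_PAGES-style cap; not scrapy-pydoll's actual code.
page_slots = asyncio.Semaphore(4)  # mirrors PYDOLL_MAX_PAGES

async def fetch_with_limit(browser, url):
    async with page_slots:               # block while 4 pages are in flight
        page = await browser.get_page()  # assumed pydoll page factory
        await page.go_to(url)            # assumed pydoll navigation call
        ...                              # apply page methods, build the response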

Example

Here's a complete spider example that scrapes quotes from a JavaScript-rendered page:

import scrapy
from scrapy_pydoll.page import PageMethod
from pydoll.constants import By

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        url = "http://quotes.toscrape.com/js/"
        yield scrapy.Request(
            url,
            meta={
                "pydoll": True,
                "pydoll_page_methods": [
                    PageMethod("wait_element", By.XPATH, "//div[@class='quote']"),
                ]
            }
        )

    def parse(self, response):
        for quote in response.xpath("//div[@class='quote']"):
            yield {
                "text": quote.xpath(".//span[@class='text']/text()").get(),
                "author": quote.xpath(".//small[@class='author']/text()").get(),
            }
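
Run it like any other spider; only requests whose meta carries pydoll: True go through the browser:

scrapy crawl quotes -O quotes.json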

License

This project is licensed under the MIT License.

Download files

Download the file for your platform.

Source Distribution

scrapy_pydoll-0.0.3.tar.gz (5.6 kB)

Built Distribution

scrapy_pydoll-0.0.3-py3-none-any.whl (6.4 kB)

File details

Details for the file scrapy_pydoll-0.0.3.tar.gz.

File metadata

  • Download URL: scrapy_pydoll-0.0.3.tar.gz
  • Size: 5.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.5.14

File hashes

Hashes for scrapy_pydoll-0.0.3.tar.gz
  • SHA256: 775094eb5c83db1d8d9845c5e6f8f85f3685d0c1afddee6594ceefd538ba95f6
  • MD5: 464fbbe5af9265d2b17eb015104a928d
  • BLAKE2b-256: 8ef5c97b1ce72d748ad8daea5e673672bfb99c1a4c9f8058f1cd6a3525c7ce39
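
These digests can be checked locally once you have downloaded the archive, for instance with Python's standard hashlib (the path assumes the sdist sits in the current directory):

import hashlib

# Compare the downloaded sdist against the SHA256 digest listed above.
with open("scrapy_pydoll-0.0.3.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = "775094eb5c83db1d8d9845c5e6f8f85f3685d0c1afddee6594ceefd538ba95f6"
print("OK" if digest == expected else "MISMATCH")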

File details

Details for the file scrapy_pydoll-0.0.3-py3-none-any.whl.

File hashes

Hashes for scrapy_pydoll-0.0.3-py3-none-any.whl
  • SHA256: 999a00fe900644bbf58fc38e326df670951c57d1a897f7a80991b6bc021e1348
  • MD5: 386abbbe4113cce1fe754159424acf5d
  • BLAKE2b-256: 509ee9be39198fc0f2e26484f726417248bcdba4758d5dd106789911db4c4505
