

Scrapy with selenium


Scrapy middleware to handle JavaScript pages using Selenium.

Installation

$ pip install scrapy-selenium

You should use Python >= 3.6. You will also need one of the Selenium-compatible browsers.

Configuration

  1. Add the browser to use, the path to the driver executable, and the arguments to pass to the executable to the Scrapy settings:
from shutil import which

SELENIUM_DRIVER_NAME = 'firefox'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('geckodriver')
SELENIUM_DRIVER_ARGUMENTS = ['-headless']  # '--headless' if using chrome instead of firefox

Optionally, set the path to the browser executable:

SELENIUM_BROWSER_EXECUTABLE_PATH = which('firefox')

In order to use a remote Selenium driver, specify SELENIUM_COMMAND_EXECUTOR instead of SELENIUM_DRIVER_EXECUTABLE_PATH:

SELENIUM_COMMAND_EXECUTOR = 'http://localhost:4444/wd/hub'

Alternatively, you can omit the SELENIUM_DRIVER_NAME, SELENIUM_DRIVER_EXECUTABLE_PATH and SELENIUM_DRIVER_ARGUMENTS settings and instead pass a Selenium webdriver instance to each SeleniumRequest through the 'driver' key of its meta parameter.

  2. Add the SeleniumMiddleware to the downloader middlewares:
DOWNLOADER_MIDDLEWARES = {
    'scrapy_selenium.SeleniumMiddleware': 800
}

Usage

Use scrapy_selenium.SeleniumRequest instead of the built-in Scrapy Request, as shown below:

from scrapy_selenium import SeleniumRequest

yield SeleniumRequest(url=url, callback=self.parse_result)

Optionally, you can provide a webdriver instance to the request through the meta parameter, under the key 'driver':

from scrapy_selenium import SeleniumRequest

yield SeleniumRequest(url=url, callback=self.parse_result, meta={'driver': driver})

You must either provide a driver instance with the request or set the SELENIUM_DRIVER_NAME, SELENIUM_DRIVER_EXECUTABLE_PATH and SELENIUM_DRIVER_ARGUMENTS settings, so that the Selenium middleware has a webdriver instance to work with.
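
For example, a spider can create its own webdriver and pass it along with every request. The snippet below is only a sketch: the spider name, start URL and the use of a headless Firefox driver are illustrative assumptions, not part of this package.

import scrapy
from scrapy_selenium import SeleniumRequest
from selenium import webdriver


class ExampleSpider(scrapy.Spider):
    # Hypothetical spider, shown only to illustrate passing a custom driver.
    name = 'example'

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Build the webdriver ourselves instead of relying on the
        # SELENIUM_DRIVER_* settings (geckodriver must be resolvable,
        # e.g. available on PATH).
        options = webdriver.FirefoxOptions()
        options.add_argument('-headless')
        self.driver = webdriver.Firefox(options=options)

    def start_requests(self):
        # Hand the driver to the middleware through the request meta.
        yield SeleniumRequest(
            url='https://example.com',
            callback=self.parse_result,
            meta={'driver': self.driver},
        )

    def parse_result(self, response):
        self.log(response.request.meta['driver'].title)

    def closed(self, reason):
        # Quit the browser when the spider finishes.
        self.driver.quit()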

Response

The request will be handled by Selenium, and the callback method will be called with a response whose meta contains a key named driver, holding the Selenium driver that processed the request.

def parse_result(self, response):
    print(response.request.meta['driver'].title)

For more information about the available driver methods and attributes, refer to the Selenium Python documentation.
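
The driver object in the response meta is a regular Selenium webdriver, so the standard attributes and methods are available on it. A brief sketch (the attributes used here are part of the standard Selenium API, not specific to this middleware):

def parse_result(self, response):
    driver = response.request.meta['driver']
    print(driver.current_url)       # final URL after any redirects
    print(len(driver.page_source))  # HTML as rendered by the browser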

The selector response attribute works as usual (but contains the HTML processed by the Selenium driver).

def parse_result(self, response):
    print(response.selector.xpath('//title/text()'))

Additional arguments

The scrapy_selenium.SeleniumRequest accepts 4 additional arguments:

wait_time / wait_until

When used, Selenium will perform an explicit wait before returning the response to the spider.

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    wait_time=10,
    wait_until=EC.element_to_be_clickable((By.ID, 'someid'))
)

screenshot

When used, Selenium will take a screenshot of the page, and the binary data of the captured .png will be added to the response meta:

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    screenshot=True
)

def parse_result(self, response):
    with open('image.png', 'wb') as image_file:
        image_file.write(response.meta['screenshot'])

script

When used, Selenium will execute custom JavaScript code.

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    script='window.scrollTo(0, document.body.scrollHeight);',
)
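
These additional arguments can also be combined on a single request, for example to scroll the page, wait for an element, and capture a screenshot. A sketch (the element id is a placeholder, and the exact order in which the middleware applies the script, wait and screenshot is left to its implementation):

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    script='window.scrollTo(0, document.body.scrollHeight);',
    wait_time=10,
    wait_until=EC.presence_of_element_located((By.ID, 'someid')),
    screenshot=True,
)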
