Scrapy with selenium and more


Scrapy with selenium


Scrapy middleware to handle JavaScript pages using Selenium.

Installation

$ pip install scrapy-selenium-enhanced

You should use Python >= 3.13. You will also need one of the Selenium-compatible browsers (e.g. Chrome or Firefox).

Configuration

  1. Add the browser to use, the path to the driver executable, and the arguments to pass to the executable to the Scrapy settings:
from shutil import which

SELENIUM_DRIVER_NAME = 'chrome'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('chromedriver')
# Use '-headless' (single dash) if using Firefox instead of Chrome.
# USER_AGENT must be defined earlier in your settings.
SELENIUM_DRIVER_ARGUMENTS = ['--headless', f'user-agent={USER_AGENT}']

Optionally, set the path to the browser executable:

SELENIUM_BROWSER_EXECUTABLE_PATH = which('chrome')

To use a remote Selenium driver, specify SELENIUM_COMMAND_EXECUTOR instead of SELENIUM_DRIVER_EXECUTABLE_PATH:

SELENIUM_COMMAND_EXECUTOR = 'http://localhost:4444/wd/hub'
  2. Add the SeleniumMiddleware to the downloader middlewares:
DOWNLOADER_MIDDLEWARES = {
    'scrapy_selenium_enhanced.SeleniumMiddleware': 800
}
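
Taken together, a minimal settings.py combining both configuration steps above might look like this sketch (the USER_AGENT value is illustrative; it assumes chromedriver is on your PATH):

```python
# settings.py -- minimal sketch combining the configuration steps above
from shutil import which

USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36'  # illustrative value

SELENIUM_DRIVER_NAME = 'chrome'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('chromedriver')
SELENIUM_DRIVER_ARGUMENTS = ['--headless', f'user-agent={USER_AGENT}']

DOWNLOADER_MIDDLEWARES = {
    'scrapy_selenium_enhanced.SeleniumMiddleware': 800,
}
```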

Usage

Use scrapy_selenium_enhanced.SeleniumRequest instead of the Scrapy built-in Request, as below:

from scrapy_selenium_enhanced import SeleniumRequest
yield SeleniumRequest(url=url, callback=self.parse_result)

The request will be handled by Selenium, and an additional meta key named driver, containing the Selenium driver that processed the request, will be added:

def parse_result(self, response):
    print(response.request.meta['driver'].title)

For more information about the available driver methods and attributes, refer to the Selenium Python documentation.

The selector response attribute works as usual (but contains the HTML processed by the Selenium driver):

def parse_result(self, response):
    print(response.xpath('//title/text()').get())
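
Putting configuration and usage together, a minimal spider might look like the sketch below (the spider name and start URL are hypothetical; replace them with a page that actually needs JavaScript rendering):

```python
import scrapy
from scrapy_selenium_enhanced import SeleniumRequest


class TitleSpider(scrapy.Spider):
    """Minimal sketch: fetch a JavaScript-rendered page and print its title."""
    name = 'title'

    def start_requests(self):
        # Hypothetical URL for illustration only.
        yield SeleniumRequest(url='https://example.com', callback=self.parse_result)

    def parse_result(self, response):
        # The response body is the HTML rendered by Selenium,
        # so XPath queries run against the final DOM.
        print(response.xpath('//title/text()').get())
```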

Additional arguments

The scrapy_selenium_enhanced.SeleniumRequest accepts four additional arguments:

wait_time / wait_until

When used, Selenium will perform an explicit wait before returning the response to the spider:

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    wait_time=10,
    wait_until=EC.element_to_be_clickable((By.ID, 'someid'))
)

screenshot

When used, Selenium will take a screenshot of the page, and the binary data of the captured .png will be added to the response meta:

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    screenshot=True
)

def parse_result(self, response):
    with open('image.png', 'wb') as image_file:
        image_file.write(response.meta['screenshot'])

script

When used, Selenium will execute the given JavaScript code in the page before returning the response:

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    script='window.scrollTo(0, document.body.scrollHeight);',
)
