
Scrapy with selenium


Scrapy middleware to handle JavaScript pages using Selenium. This is another version of scrapy-selenium, with fixes to support the newest versions of Selenium. It is packaged as a Scrapy add-on so that additional functionality can be added afterward.

Installation

$ pip install scrapy-selenium-addon

You should use Python >= 3.6. You will also need one of the Selenium-compatible browsers.

Configuration

  1. Add the browser to use, the path to the driver executable, and the arguments to pass to the executable to the scrapy settings:
    from shutil import which
    
    SELENIUM_DRIVER_NAME = 'firefox'
    SELENIUM_DRIVER_EXECUTABLE_PATH = which('geckodriver')
    SELENIUM_DRIVER_ARGUMENTS = ['-headless']  # '--headless' if using Chrome instead of Firefox
    

Optionally, set the path to the browser executable:

    SELENIUM_BROWSER_EXECUTABLE_PATH = which('firefox')

In order to use a remote Selenium driver, specify SELENIUM_COMMAND_EXECUTOR instead of SELENIUM_DRIVER_EXECUTABLE_PATH:

    SELENIUM_COMMAND_EXECUTOR = 'http://localhost:4444/wd/hub'

  2. Enable the addon in the ADDON setting:
    ADDON = {
        'scrapy_selenium_addon.seleniumAddon': 800
    }
    
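Putting the configuration steps together, a minimal settings.py might look like the following sketch. The driver name, geckodriver path, and priority value 800 are taken from the snippets above; adjust them for your own setup:

```python
# settings.py -- example configuration, assuming Firefox/geckodriver
from shutil import which

SELENIUM_DRIVER_NAME = 'firefox'
# which() returns None if geckodriver is not on PATH
SELENIUM_DRIVER_EXECUTABLE_PATH = which('geckodriver')
SELENIUM_DRIVER_ARGUMENTS = ['-headless']

ADDON = {
    'scrapy_selenium_addon.seleniumAddon': 800,
}
```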

Usage

Use scrapy_selenium_addon.SeleniumRequest instead of the Scrapy built-in Request, as follows:

from scrapy_selenium_addon import SeleniumRequest

yield SeleniumRequest(url=url, callback=self.parse_result)

The request will be handled by Selenium, and it will gain an additional meta key named driver, containing the Selenium driver that processed the request.

def parse_result(self, response):
    print(response.request.meta['driver'].title)

For more information about the available driver methods and attributes, refer to the Selenium Python documentation.

The selector response attribute works as usual (but contains the HTML processed by the Selenium driver).

def parse_result(self, response):
    print(response.selector.xpath('//title/text()'))

Additional arguments

The scrapy_selenium_addon.SeleniumRequest accepts four additional arguments:

wait_time / wait_until

When used, Selenium will perform an explicit wait before returning the response to the spider.

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    wait_time=10,
    wait_until=EC.element_to_be_clickable((By.ID, 'someid'))
)

screenshot

When used, Selenium will take a screenshot of the page, and the binary data of the captured .png will be added to the response meta:

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    screenshot=True
)

def parse_result(self, response):
    with open('image.png', 'wb') as image_file:
        image_file.write(response.meta['screenshot'])

script

When used, Selenium will execute the given custom JavaScript code on the page.

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    script='window.scrollTo(0, document.body.scrollHeight);',
)
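If a page lazy-loads content, a single scroll may not be enough. The scroll_script function below is a hypothetical helper (not part of the addon) that builds a JavaScript snippet repeating the scroll, which you could pass as the script argument:

```python
# Hypothetical helper -- builds the string passed as SeleniumRequest's
# `script` argument; each repetition scrolls to the page bottom again.
def scroll_script(steps: int = 1) -> str:
    return ';'.join(
        'window.scrollTo(0, document.body.scrollHeight)' for _ in range(steps)
    ) + ';'
```

Note that the script runs once when the page is loaded; there is no delay between the repeated scrolls, so combine it with wait_time / wait_until if content needs time to load.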
