
Scrapy with selenium4

Project description

Scrapy middleware to handle JavaScript pages using Selenium >= 4.0.0.

Installation

$ pip install scrapy-selenium4

You should use Python >= 3.6. You will also need one of the Selenium-compatible browsers.

Configuration

Add the browser to use, the path to the driver executable, and the arguments to pass to the executable to your Scrapy project's settings.py:

# Add the `SeleniumMiddleware` to the downloader middlewares
DOWNLOADER_MIDDLEWARES = {
    'scrapy_selenium4.SeleniumMiddleware': 800
}

Other settings (defaults shown):

from shutil import which

SELENIUM_DRIVER_NAME = 'chrome'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('chromedriver')
SELENIUM_DRIVER_ARGUMENTS = [
    '--headless=new',
    '--no-sandbox',
    '--disable-gpu',
    '--window-size=1280,1696',
    '--disable-blink-features',
    '--disable-blink-features=AutomationControlled',
    '--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
]
# In order to use a remote Selenium driver, specify SELENIUM_COMMAND_EXECUTOR instead of SELENIUM_DRIVER_EXECUTABLE_PATH.
# SELENIUM_COMMAND_EXECUTOR = 'http://localhost:4444/wd/hub'
# Number of drivers to keep in the pool, default is 1
# SELENIUM_DRIVER_COUNT = 1
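The defaults above target Chrome. Assuming the same setting names, a Firefox-based configuration might look like this sketch (the arguments shown are illustrative):

```python
from shutil import which

SELENIUM_DRIVER_NAME = 'firefox'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('geckodriver')  # geckodriver must be on PATH
SELENIUM_DRIVER_ARGUMENTS = ['-headless']  # Firefox uses a single-dash headless flag
```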

Usage

Use scrapy_selenium4.SeleniumRequest instead of the Scrapy built-in Request, as shown below:

import scrapy
from scrapy_selenium4 import SeleniumRequest

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['https://example.com']

    def start_requests(self):
        for url in self.start_urls:
            yield SeleniumRequest(url=url, callback=self.parse_result)

The request will be handled by Selenium, and the response will have an additional meta key, named driver, containing the Selenium driver that processed the request.

    def parse_result(self, response):
        print(response.request.meta['driver'].title)

For more information about the available driver methods and attributes, refer to the Selenium Python documentation.
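As a sketch of what the driver handle allows (the attributes used are standard Selenium WebDriver properties), a callback can read state straight from the browser:

```python
def parse_result(self, response):
    # The Selenium driver that rendered this page rides along on the request meta.
    driver = response.request.meta['driver']
    yield {
        'url': driver.current_url,  # final URL after any client-side redirects
        'title': driver.title,      # <title> as rendered by the browser
    }
```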

The response's selector attribute works as usual (but contains the HTML processed by the Selenium driver).

def parse_result(self, response):
    print(response.selector.xpath('//title/text()').get())

Additional arguments

The scrapy_selenium4.SeleniumRequest accepts the following additional arguments:

wait_time / wait_until

When used, Selenium will perform an explicit wait before returning the response to the spider.

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    wait_time=10,
    wait_until=EC.element_to_be_clickable((By.ID, 'someid'))
)
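wait_until is handed to Selenium's WebDriverWait, which accepts any callable that takes the driver and returns a truthy value, so a custom condition can be a plain function. A minimal sketch (the readyState check is illustrative):

```python
def page_has_loaded(driver):
    # Truthy once the browser reports the document fully parsed and loaded.
    return driver.execute_script('return document.readyState') == 'complete'

# yield SeleniumRequest(url=url, callback=self.parse_result,
#                       wait_time=10, wait_until=page_has_loaded)
```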

screenshot

When used, Selenium will take a screenshot of the page, and the binary data of the captured .png will be added to the response meta:

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    screenshot=True
)

def parse_result(self, response):
    with open('image.png', 'wb') as image_file:
        image_file.write(response.meta['screenshot'])
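Since the meta value is raw bytes, it can be worth sanity-checking the PNG signature before writing it out. save_screenshot below is a hypothetical helper, not part of the middleware:

```python
PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'  # first 8 bytes of every valid PNG file

def save_screenshot(data, path):
    # Refuse to write data that does not start with the PNG signature.
    if not data.startswith(PNG_SIGNATURE):
        return False
    with open(path, 'wb') as image_file:
        image_file.write(data)
    return True
```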

Alternatively, pass a file path as screenshot and the captured .png will be saved there directly:

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    screenshot='image.png'
)

def parse_result(self, response):
    pass

script

When used, Selenium will execute custom JavaScript code after the page has loaded.

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    script='window.scrollTo(0, document.body.scrollHeight);',
)

scroll_bottom

When used, Selenium will scroll to the bottom of the page.

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    scroll_bottom=True
)

Project details


Download files

Download the file for your platform.

Source Distribution

scrapy_selenium4-1.0.1rc6.tar.gz (8.6 kB)

Uploaded: Source

Built Distribution


scrapy_selenium4-1.0.1rc6-py2.py3-none-any.whl (6.9 kB)

Uploaded: Python 2, Python 3

File details

Details for the file scrapy_selenium4-1.0.1rc6.tar.gz.

File metadata

  • Download URL: scrapy_selenium4-1.0.1rc6.tar.gz
  • Upload date:
  • Size: 8.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.10

File hashes

Hashes for scrapy_selenium4-1.0.1rc6.tar.gz

  • SHA256: 1c09b713f2f5082d7e3eeb16da5f63b2486f0bc51cda8f2b85785e26416daa44
  • MD5: df9092ef0f2e4aa89df5415bf0d60449
  • BLAKE2b-256: a8b67ed95fbf5ec7a7a054bd372e06e4f79d57633fe9fdc6eaca190c0873c44a
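To check a downloaded archive against the published digests, a small stdlib-only helper (not part of the package) can compute the SHA256:

```python
import hashlib

def sha256_of_file(path, chunk_size=65536):
    # Stream in chunks so large archives never need to fit in memory.
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

# sha256_of_file('scrapy_selenium4-1.0.1rc6.tar.gz') should equal the SHA256 above.
```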


File details

Details for the file scrapy_selenium4-1.0.1rc6-py2.py3-none-any.whl.

File metadata

File hashes

Hashes for scrapy_selenium4-1.0.1rc6-py2.py3-none-any.whl

  • SHA256: c617ae2371e62eb63b6e27ba11f98d284d677a36c7f7cfda34cf293f76771fea
  • MD5: fc735b9944a65f608d29dfe33daac225
  • BLAKE2b-256: ecb893aec7e111954807c980e41fe66b03898e114fbf15f45621cb6c8b37ffae

