Scrapy HTML5 support

Project description

Parsel H5

Scrapy integration for the html5ever and lexbor HTML parsers.

This package provides a Scrapy downloader middleware that replaces the default lxml-based HTML parsing with an HTML5-compliant parser (html5ever or lexbor).

Why html5ever?

  • Better HTML5 compliance: Parses HTML the way browsers do
  • Handles malformed HTML gracefully: More forgiving with real-world HTML
  • As fast as Parsel: Rust-based parser with Python bindings (markupever)

Why Lexbor?

  • Fastest HTML5 parser: C-based parser with Python bindings (selectolax)
  • Better HTML5 compliance: Parses HTML the way browsers do
  • Handles malformed HTML gracefully: More forgiving with real-world HTML

Installation

pip install scrapy-h5

Or with uv:

uv add scrapy-h5

Quick start

1. Enable the middleware in your Scrapy project

Add to your settings.py:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_h5.HtmlFiveResponseMiddleware': 650,
}

# Optional: set the global backend ('lexbor' by default, 'html5ever'
# also available); set to False to disable HTML5 parsing entirely
# SCRAPY_H5_BACKEND = 'html5ever'

2. Use in your spider

import scrapy


class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://example.com']

    def parse(self, response):
        # CSS selectors work as expected
        titles = response.css('h1::text').getall()

        # Attribute extraction
        links = response.css('a::attr(href)').getall()

        # Chained selectors
        for item in response.css('div.product'):
            yield {
                'name': item.css('h2::text').get(),
                'price': item.css('.price::text').get(),
                'url': item.css('a::attr(href)').get(),
            }

XPath and JMESPath support

XPath and JMESPath selectors are not supported. Use CSS selectors instead.
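
Since only CSS selectors are available, queries you might previously have written in XPath need CSS equivalents. The pairs below are illustrative, not part of the package's API (note that `div.product` matches any element whose class list contains `product`, a slightly looser match than an exact `@class` comparison):

```python
# Common XPath expressions and their (approximate) CSS equivalents.
# The XPath column is what you would write against Scrapy's default
# lxml-backed selectors; the CSS column works with scrapy-h5.
XPATH_TO_CSS = {
    '//h1/text()': 'h1::text',
    '//a/@href': 'a::attr(href)',
    '//div[contains(@class, "product")]': 'div.product',
    '//*[@id="main"]': '#main',
    '//div[contains(@class, "product")]//h2/text()': 'div.product h2::text',
}

css = XPATH_TO_CSS['//a/@href']  # -> 'a::attr(href)'
```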

Per-request control

You can change or disable the HTML5 backend per request using the scrapy_h5_backend meta key:

def start_requests(self):
    # HTML5 parsing backend (default)
    yield scrapy.Request(url, callback=self.parse)

    # Disable html5 for this request (use lxml instead)
    yield scrapy.Request(
        url2,
        callback=self.parse_legacy,
        meta={'scrapy_h5_backend': False}
    )


def parse_with_html5(self, response):
    # Force html5 even if SCRAPY_H5_BACKEND=False
    yield scrapy.Request(
        url,
        callback=self.parse,
        meta={'scrapy_h5_backend': 'html5ever'}
    )
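
The precedence between the scrapy_h5_backend meta key and the global SCRAPY_H5_BACKEND setting can be summarized as a small pure function. This is a sketch of the documented behavior, not the package's actual code; the name resolve_backend is hypothetical:

```python
def resolve_backend(meta_value=None, global_setting='lexbor'):
    """Resolve which parser backend to use for a request.

    Mirrors the documented precedence: the request's meta value wins;
    None defers to the global setting; False disables HTML5 parsing.
    Returning None means "fall back to Scrapy's default lxml parsing".
    """
    value = global_setting if meta_value is None else meta_value
    if value is False or value is None:
        return None
    if value in ('lexbor', 'html5ever'):
        return value
    raise ValueError(f'unknown scrapy-h5 backend: {value!r}')
```

For example, `resolve_backend(False)` returns None (lxml fallback) no matter what the global setting is, while `resolve_backend('html5ever', False)` still selects html5ever because meta overrides the setting.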

API reference

Classes

  • HtmlFiveSelector: Selector class wrapping html5ever and lexbor elements
  • HtmlFiveSelectorList: List of selectors with bulk operations
  • HtmlFiveResponse: Response class with html5-based selector
  • HtmlFiveResponseMiddleware: Scrapy Downloader Middleware that replaces HtmlResponse with HtmlFiveResponse
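
The response-swapping pattern used by HtmlFiveResponseMiddleware can be sketched as a standard Scrapy downloader middleware. The stand-in classes below exist only so the sketch runs without Scrapy installed; in a real project HtmlResponse comes from scrapy.http and the other two names from scrapy_h5, and the real middleware's internals may differ:

```python
# Minimal stand-ins so the sketch runs without Scrapy installed.
class HtmlResponse:
    def __init__(self, url, body=b''):
        self.url, self.body = url, body

class HtmlFiveResponse(HtmlResponse):
    """Stand-in for the response class whose .css() uses an HTML5 parser."""

class HtmlFiveResponseMiddleware:
    """Sketch of a downloader middleware that swaps the response class."""

    def process_response(self, request, response, spider):
        # Only plain HtmlResponse objects are upgraded; anything else
        # (already-upgraded or non-HTML responses) passes through as-is.
        if type(response) is HtmlResponse:
            return HtmlFiveResponse(response.url, response.body)
        return response
```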

Exceptions

  • XPathConversionError: Raised when an XPath expression cannot be converted to CSS
  • HtmlFiveParseError: Raised when HTML parsing fails
  • HtmlFiveSelectorError: Base exception for selector errors
  • HtmlFiveSelectError: Raised when CSS selection fails
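
A plausible shape for the hierarchy, inferred from the descriptions above (only HtmlFiveSelectorError is documented as the base class; the exact parent of each exception in the real package may differ):

```python
class HtmlFiveSelectorError(Exception):
    """Base exception for selector errors."""

class HtmlFiveParseError(HtmlFiveSelectorError):
    """Raised when HTML parsing fails."""

class HtmlFiveSelectError(HtmlFiveSelectorError):
    """Raised when CSS selection fails."""

class XPathConversionError(HtmlFiveSelectorError):
    """Raised when an XPath expression cannot be converted to CSS."""

# Catching the documented base class covers all of the above:
try:
    raise HtmlFiveSelectError('bad selector')
except HtmlFiveSelectorError as exc:
    name = type(exc).__name__  # 'HtmlFiveSelectError'
```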

Settings

  • SCRAPY_H5_BACKEND (default: lexbor): Global HTML5 backend. lexbor or html5ever selects a parser; False disables HTML5 parsing.

Request meta

  • scrapy_h5_backend (default: None): Per-request override. lexbor or html5ever selects a parser, False disables HTML5 parsing, None falls back to the global SCRAPY_H5_BACKEND setting.

License

MIT

Download files

Download the file for your platform.

Source Distribution

scrapy_h5-0.3.0.tar.gz (17.4 kB)

Uploaded Source

Built Distribution


scrapy_h5-0.3.0-py3-none-any.whl (19.4 kB)

Uploaded Python 3

File details

Details for the file scrapy_h5-0.3.0.tar.gz.

File metadata

  • Download URL: scrapy_h5-0.3.0.tar.gz
  • Upload date:
  • Size: 17.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for scrapy_h5-0.3.0.tar.gz:

  • SHA256: 18a856b319fec3f61d19a4852b4930884b07415fea837b352d7c55c9ddb2ea5f
  • MD5: 3fb015703078dc020625768fd09e709a
  • BLAKE2b-256: d4b289b994eeddb5bd1cdd40f59405eb5a113b27a41dd8815223bafad8aa8ee0


Provenance

The following attestation bundles were made for scrapy_h5-0.3.0.tar.gz:

Publisher: publish-to-pypi.yml on shkarupa-alex/scrapy-h5

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file scrapy_h5-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: scrapy_h5-0.3.0-py3-none-any.whl
  • Upload date:
  • Size: 19.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for scrapy_h5-0.3.0-py3-none-any.whl:

  • SHA256: 6dd6f2fcf383e918765dc1ed600df8ce653b4911dda3d69e0f9f7dd9d5de407b
  • MD5: 36aae18eebbc8b50da6e9a7a02e98e11
  • BLAKE2b-256: cf4a5ac7e8cb8da034d3af523022bd6c8c22ada5a52c57d69d580d80db202e32


Provenance

The following attestation bundles were made for scrapy_h5-0.3.0-py3-none-any.whl:

Publisher: publish-to-pypi.yml on shkarupa-alex/scrapy-h5

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
