Scrapy HTML5 support

Parsel H5

Scrapy integration for the html5ever and lexbor HTML parsers.

This package provides a Scrapy downloader middleware that replaces the default lxml-based HTML parsing with an HTML5-compliant one.

Why html5ever?

  • Better HTML5 compliance: Parses HTML the way browsers do
  • Handles malformed HTML gracefully: More forgiving with real-world HTML
  • As fast as Parsel: Rust-based parser with Python bindings (markupever)

Why Lexbor?

  • Fastest HTML5 parser: C-based parser with Python bindings (selectolax)
  • Better HTML5 compliance: Parses HTML the way browsers do
  • Handles malformed HTML gracefully: More forgiving with real-world HTML

Installation

pip install scrapy-h5

Or with uv:

uv add scrapy-h5

Quick start

1. Enable the middleware in your Scrapy project

Add to your settings.py:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_h5.HtmlFiveResponseMiddleware': 950,
}

# Optional: set the global backend ('lexbor' by default, 'html5ever' also
# supported), or set it to False to disable HTML5 parsing entirely
# HTML5_BACKEND = False

2. Use in your spider

import scrapy


class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://example.com']

    def parse(self, response):
        # CSS selectors work as expected
        titles = response.css('h1::text').getall()

        # Attribute extraction
        links = response.css('a::attr(href)').getall()

        # Chained selectors
        for item in response.css('div.product'):
            yield {
                'name': item.css('h2::text').get(),
                'price': item.css('.price::text').get(),
                'url': item.css('a::attr(href)').get(),
            }

XPath and JMESPath support

XPath and JMESPath selectors are not supported. Use CSS selectors instead.
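For common extraction patterns the translation from XPath to CSS is mechanical. A few illustrative equivalences (the right-hand side is standard parsel-style CSS syntax as used in the Quick start above, not a scrapy-h5-specific API):

```python
# Common XPath expressions and their CSS-selector equivalents.
# Note: '::text' and '::attr()' are parsel-style pseudo-elements, and
# '.product' matches any element whose class list contains "product",
# whereas //div[@class="product"] matches the exact attribute value.
XPATH_TO_CSS = {
    '//h1/text()': 'h1::text',
    '//a/@href': 'a::attr(href)',
    '//*[@id="main"]': '#main',
    '//div[@class="product"]//a/@href': 'div.product a::attr(href)',
}

for xpath, css in XPATH_TO_CSS.items():
    print(f'{xpath!r:40} -> {css!r}')
```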

Per-request control

You can change or disable the HTML5 backend per request via Request.meta:

def start_requests(self):
    # HTML5 parsing backend (default)
    yield scrapy.Request(url, callback=self.parse)

    # Disable html5 for this request (use lxml instead)
    yield scrapy.Request(
        url2,
        callback=self.parse_legacy,
        meta={'use_html5': False}
    )


def parse_with_html5(self, response):
    # Force html5 even if HTML5_BACKEND=False
    yield scrapy.Request(
        url,
        callback=self.parse,
        meta={'use_html5': 'html5ever'}
    )

API reference

Classes

  • HtmlFiveSelector: Selector class wrapping html5ever and lexbor elements
  • HtmlFiveSelectorList: List of selectors with bulk operations
  • HtmlFiveResponse: Response class with html5-based selector
  • HtmlFiveResponseMiddleware: Scrapy Downloader Middleware that replaces HtmlResponse with HtmlFiveResponse

Exceptions

  • XPathConversionError: Raised when an XPath expression cannot be converted to CSS
  • HtmlFiveParseError: Raised when HTML parsing fails
  • HtmlFiveSelectorError: Base exception for selector errors
  • HtmlFiveSelectError: Raised when CSS selection fails

Settings

Setting        Default   Description
HTML5_BACKEND  'lexbor'  Global HTML5 backend: 'lexbor' or 'html5ever' enables it, False disables it

Request meta

Key        Type         Description
use_html5  bool or str  Per-request override: 'lexbor' or 'html5ever' enables it, False disables it

Middleware priority

The default priority (950) places the middleware:

  • After HttpCompressionMiddleware (590), so responses are already decompressed
  • After HttpCacheMiddleware (900), so cached responses are handled
  • Before most other response processing

Adjust the priority if needed:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_h5.HtmlFiveResponseMiddleware': 400,  # Earlier in the chain
}
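For reference, a settings.py sketch showing the middleware alongside the built-in priorities mentioned above (the built-in numbers are Scrapy's documented defaults; only the scrapy-h5 entry needs to be added, since Scrapy enables its built-ins automatically):

```python
# settings.py (sketch)

# Built-in downloader middlewares Scrapy enables by default, for comparison:
#   scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware -> 590
#   scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware             -> 900
DOWNLOADER_MIDDLEWARES = {
    # 950 keeps scrapy-h5 after both built-ins, as described above
    'scrapy_h5.HtmlFiveResponseMiddleware': 950,
}
```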

License

MIT
