Scrapy HTML5 support

Project description

Parsel H5

Scrapy integration for the html5ever and lexbor HTML parsers.

This package provides a Scrapy Downloader Middleware that replaces the default lxml-based HTML parsing with an HTML5-compliant one.

Why html5ever?

  • Better HTML5 compliance: Parses HTML the way browsers do
  • Handles malformed HTML gracefully: More forgiving with real-world HTML
  • As fast as Parsel: Rust-based parser with Python bindings (markupever)

Why Lexbor?

  • Fastest HTML5 parser: C-based parser with Python bindings (selectolax)
  • Better HTML5 compliance: Parses HTML the way browsers do
  • Handles malformed HTML gracefully: More forgiving with real-world HTML

Installation

pip install scrapy-h5

Or with uv:

uv add scrapy-h5

Quick start

1. Enable the middleware in your Scrapy project

Add to your settings.py:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_h5.HtmlFiveResponseMiddleware': 650,
}

# Optional: change or disable the HTML5 backend ('lexbor' by default)
# SCRAPY_H5_BACKEND = 'html5ever'  # or False to disable HTML5 parsing

2. Use in your spider

import scrapy


class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://example.com']

    def parse(self, response):
        # CSS selectors work as expected
        titles = response.css('h1::text').getall()

        # Attribute extraction
        links = response.css('a::attr(href)').getall()

        # Chained selectors
        for item in response.css('div.product'):
            yield {
                'name': item.css('h2::text').get(),
                'price': item.css('.price::text').get(),
                'url': item.css('a::attr(href)').get(),
            }

3. Using with CrawlSpider

from scrapy.spiders import CrawlSpider, Rule
from scrapy_h5 import LinkExtractor

class MyCrawlSpider(CrawlSpider):
    name = 'mycrawler'
    start_urls = ['https://example.com']

    # Use HTML5 link extractor with rules
    rules = (
        Rule(LinkExtractor(allow=r'/products/'), callback='parse_product', follow=True),
    )

    def parse_product(self, response):
        yield {
            'name': response.css('h1::text').get(),
            'price': response.css('.price::text').get(),
        }

XPath and JMESPath support

XPath and JMESPath selectors are not supported; use CSS selectors instead.
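Spiders written against XPath therefore need their expressions translated by hand. A rough sketch of common equivalences (these pairs are illustrative, not part of the scrapy-h5 API; note that XPath string functions and CSS class-token matching differ in edge cases):

```python
# Common XPath patterns and rough CSS equivalents for migrating spiders.
# Illustrative only: contains(@class, ...) is a substring test, while
# the CSS .class form matches whole whitespace-separated tokens.
XPATH_TO_CSS = {
    '//h1/text()': 'h1::text',
    '//a/@href': 'a::attr(href)',
    '//div[contains(@class, "price")]': 'div.price',
    '//*[@id="main"]': '#main',
    '//ul/li[1]': 'ul > li:first-child',
}

for xpath, css in XPATH_TO_CSS.items():
    print(f'{xpath:35} -> {css}')
```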

Per-request control

You can change or disable the HTML5 backend per request using the scrapy_h5_backend meta key:

def start_requests(self):
    # HTML5 parsing backend (default)
    yield scrapy.Request(url, callback=self.parse)

    # Disable html5 for this request (use lxml instead)
    yield scrapy.Request(
        url2,
        callback=self.parse_legacy,
        meta={'scrapy_h5_backend': False}
    )


def parse_with_html5(self, response):
    # Force html5 even if SCRAPY_H5_BACKEND=False
    yield scrapy.Request(
        url,
        callback=self.parse,
        meta={'scrapy_h5_backend': 'html5ever'}
    )

API reference

Classes

  • HtmlFiveSelector: Selector class wrapping html5ever and lexbor elements
  • HtmlFiveSelectorList: List of selectors with bulk operations
  • HtmlFiveResponse: Response class with html5-based selector
  • HtmlFiveResponseMiddleware: Scrapy Downloader Middleware that replaces HtmlResponse with HtmlFiveResponse
  • LinkExtractor: Link extractor using HTML5 parsers (lexbor or html5ever)

Important: The LinkExtractor only works with HtmlFiveResponse. Enable the middleware to automatically convert all HTML responses to HtmlFiveResponse.
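The note above can be sketched as standalone usage in a callback. This assumes scrapy_h5's LinkExtractor mirrors the extract_links() interface of Scrapy's built-in LinkExtractor, which this page does not confirm:

```python
from scrapy_h5 import LinkExtractor

def parse(self, response):
    # Works only if the middleware has already converted `response` to
    # HtmlFiveResponse; a plain lxml-backed HtmlResponse is not accepted.
    # extract_links() is assumed to mirror Scrapy's LinkExtractor API.
    extractor = LinkExtractor(allow=r'/products/')
    for link in extractor.extract_links(response):
        yield response.follow(link.url, callback=self.parse_product)
```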

Exceptions

  • XPathConversionError: Raised when an XPath expression cannot be converted to CSS
  • HtmlFiveParseError: Raised when HTML parsing fails
  • HtmlFiveSelectorError: Base exception for selector errors
  • HtmlFiveSelectError: Raised when CSS selection fails

Settings

Setting            Default  Description
SCRAPY_H5_BACKEND  lexbor   Global HTML5 backend; 'lexbor' or 'html5ever' enable it, False disables it
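For example, in settings.py (values taken from the table above):

```python
# settings.py — pick one:
SCRAPY_H5_BACKEND = 'lexbor'       # default: C-based parser (selectolax)
# SCRAPY_H5_BACKEND = 'html5ever'  # Rust-based parser (markupever)
# SCRAPY_H5_BACKEND = False        # disable HTML5 parsing, fall back to lxml
```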

Request meta

Key                Type        Description
scrapy_h5_backend  str | bool  Per-request override; 'lexbor' or 'html5ever' enable HTML5, False disables it

License

MIT
