Scrapy HTML5 support

Parsel H5

Scrapy integration for the html5ever and lexbor HTML parsers.

This package provides a Scrapy downloader middleware that replaces the default lxml-based HTML parsing with an HTML5-compliant parser.

Why html5ever?

  • Better HTML5 compliance: Parses HTML the way browsers do
  • Handles malformed HTML gracefully: More forgiving with real-world HTML
  • As fast as Parsel: Rust-based parser with Python bindings (markupever)

Why Lexbor?

  • Fastest HTML5 parser: C-based parser with Python bindings (selectolax)
  • Better HTML5 compliance: Parses HTML the way browsers do
  • Handles malformed HTML gracefully: More forgiving with real-world HTML

Installation

pip install scrapy-h5

Or with uv:

uv add scrapy-h5

Quick start

1. Enable the middleware in your Scrapy project

Add to your settings.py:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_h5.HtmlFiveResponseMiddleware': 650,
}

# Optional: choose the backend globally ('lexbor' is the default)
# SCRAPY_H5_BACKEND = 'html5ever'  # or False to disable HTML5 parsing

2. Use in your spider

import scrapy


class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://example.com']

    def parse(self, response):
        # CSS selectors work as expected
        titles = response.css('h1::text').getall()

        # Attribute extraction
        links = response.css('a::attr(href)').getall()

        # Chained selectors
        for item in response.css('div.product'):
            yield {
                'name': item.css('h2::text').get(),
                'price': item.css('.price::text').get(),
                'url': item.css('a::attr(href)').get(),
            }

3. Using with CrawlSpider

from scrapy.spiders import CrawlSpider, Rule
from scrapy_h5 import LinkExtractor

class MyCrawlSpider(CrawlSpider):
    name = 'mycrawler'
    start_urls = ['https://example.com']

    # Use HTML5 link extractor with rules
    rules = (
        Rule(LinkExtractor(allow=r'/products/'), callback='parse_product', follow=True),
    )

    def parse_product(self, response):
        yield {
            'name': response.css('h1::text').get(),
            'price': response.css('.price::text').get(),
        }
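The `allow` argument is a regular expression matched against extracted URLs. Conceptually, the filtering step works like this pure-Python sketch (an illustration of the idea, not the package's actual implementation):

```python
import re

def filter_links(hrefs, allow):
    # Keep only URLs matching the allow pattern, mimicking how
    # LinkExtractor(allow=...) narrows the links it returns (sketch only).
    pattern = re.compile(allow)
    return [href for href in hrefs if pattern.search(href)]

links = [
    'https://example.com/products/42',
    'https://example.com/about',
]
print(filter_links(links, r'/products/'))  # only the /products/ URL remains
```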

XPath and JMESPath support

XPath and JMESPath selectors are not supported. Use CSS selectors instead.
Most common XPath expressions have a direct CSS equivalent.
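When migrating an existing spider, a small translation table covers the most frequent cases. A hypothetical lookup of common XPath-to-CSS translations (illustration only; these names are not part of the package):

```python
# Common XPath patterns and their CSS selector equivalents (illustrative).
XPATH_TO_CSS = {
    '//h1/text()':             'h1::text',
    '//a/@href':               'a::attr(href)',
    '//div[@class="product"]': 'div.product',
    '//*[@id="main"]':         '#main',
}

def css_equivalent(xpath):
    # Return the CSS equivalent for a known pattern, or raise KeyError.
    return XPATH_TO_CSS[xpath]
```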

Per-request control

You can change or disable the HTML5 backend per request using the scrapy_h5_backend meta key:

def start_requests(self):
    # HTML5 parsing backend (default)
    yield scrapy.Request(url, callback=self.parse)

    # Disable html5 for this request (use lxml instead)
    yield scrapy.Request(
        url2,
        callback=self.parse_legacy,
        meta={'scrapy_h5_backend': False}
    )


def parse_with_html5(self, response):
    # Force the html5ever backend even if SCRAPY_H5_BACKEND = False
    yield scrapy.Request(
        url,
        callback=self.parse,
        meta={'scrapy_h5_backend': 'html5ever'}
    )

API reference

Classes

  • HtmlFiveSelector: Selector class wrapping html5ever and lexbor elements
  • HtmlFiveSelectorList: List of selectors with bulk operations
  • HtmlFiveResponse: Response class with html5-based selector
  • HtmlFiveResponseMiddleware: Scrapy Downloader Middleware that replaces HtmlResponse with HtmlFiveResponse
  • LinkExtractor: Link extractor using HTML5 parsers (lexbor or html5ever)

Important: The LinkExtractor only works with HtmlFiveResponse. Enable the middleware to automatically convert all HTML responses to HtmlFiveResponse.

Exceptions

  • XPathConversionError: Raised when an XPath expression cannot be converted to CSS
  • HtmlFiveParseError: Raised when HTML parsing fails
  • HtmlFiveSelectorError: Base exception for selector errors
  • HtmlFiveSelectError: Raised when CSS selection fails
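The listing above suggests a small hierarchy rooted at HtmlFiveSelectorError. A sketch of how these classes could relate, based on the descriptions (an assumption; check the package source for the actual definitions):

```python
# Assumed exception hierarchy, mirroring the names in the API reference.
class HtmlFiveSelectorError(Exception):
    """Base exception for selector errors."""

class HtmlFiveParseError(HtmlFiveSelectorError):
    """HTML parsing failed."""

class HtmlFiveSelectError(HtmlFiveSelectorError):
    """CSS selection failed."""

class XPathConversionError(HtmlFiveSelectorError):
    """An XPath expression could not be converted to CSS."""
```

Catching the base class in a spider would then cover parse, select, and conversion failures in one handler.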

Settings

Setting            Default  Description
SCRAPY_H5_BACKEND  lexbor   Global HTML5 backend: 'lexbor' or 'html5ever' enables it, False disables it
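In settings.py the three accepted values look like this (keep only one line active at a time):

```python
SCRAPY_H5_BACKEND = 'lexbor'       # default: C-based, fastest
# SCRAPY_H5_BACKEND = 'html5ever'  # Rust-based alternative
# SCRAPY_H5_BACKEND = False        # disable HTML5 parsing, fall back to lxml
```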

Request meta

Key                Type        Description
scrapy_h5_backend  str | bool  Per-request override: 'lexbor' or 'html5ever' enables it, False disables it

License

MIT
