
scrapy-rnet

A Scrapy download handler that routes all HTTP/HTTPS requests through rnet, giving your spiders browser-grade TLS and HTTP/2 fingerprints via BoringSSL impersonation.

Without it, Scrapy sends requests through its default Twisted-based HTTP client, which produces a TLS fingerprint trivially identifiable as a bot. With scrapy-rnet, requests look indistinguishable from a real Chrome, Firefox, or Safari browser at the TLS and HTTP/2 layers.

Requirements

  • Python 3.13+
  • Scrapy 2.14+
  • rnet 2.4+

Installation

uv add scrapy-rnet
# or
pip install scrapy-rnet

Setup

Add to your Scrapy settings.py:

TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

DOWNLOAD_HANDLERS = {
    "http":  "scrapy_rnet.RnetDownloadHandler",
    "https": "scrapy_rnet.RnetDownloadHandler",
}

That's it. All requests will now go through rnet, impersonating Chrome 131 by default.
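With those two settings in place, a minimal spider needs no other changes. The sketch below (spider name and output fields are illustrative; the response shape assumed here is what tls.peet.ws returns) fetches your own fingerprint so you can confirm the impersonation is active:

```python
import scrapy


class FingerprintSpider(scrapy.Spider):
    """Illustrative spider: fetch tls.peet.ws to inspect the TLS fingerprint
    the target server actually sees from this crawl."""

    name = "fingerprint"
    start_urls = ["https://tls.peet.ws/api/all"]  # echoes the client's TLS/HTTP2 fingerprint

    def parse(self, response):
        data = response.json()
        # With the handler enabled, this should report a Chrome-like fingerprint
        yield {"ja3": data.get("tls", {}).get("ja3")}
```

Run it with `scrapy crawl fingerprint -o out.json` and compare the reported fingerprint against a real browser's.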

Configuration

Setting                 Type                 Default    Description
RNET_IMPERSONATE        rnet.Impersonate     Chrome131  Browser profile to impersonate
RNET_IMPERSONATE_OS     rnet.ImpersonateOS   None       OS to pair with the browser profile
RNET_TIMEOUT            int                  30         Request timeout in seconds
RNET_FOLLOW_REDIRECTS   bool                 False      Let rnet follow redirects (disables Scrapy's RedirectMiddleware for these requests)
RNET_VERIFY_SSL         bool                 True       Verify TLS certificates
RNET_PROXIES            list[rnet.Proxy]     None       Proxy list; takes precedence over Scrapy's proxy settings

Choosing a browser profile

import rnet

# Chrome (default)
RNET_IMPERSONATE = rnet.Impersonate.Chrome131

# Firefox
RNET_IMPERSONATE = rnet.Impersonate.Firefox133

# Safari
RNET_IMPERSONATE = rnet.Impersonate.Safari18

# Pair with a specific OS fingerprint
RNET_IMPERSONATE    = rnet.Impersonate.Chrome131
RNET_IMPERSONATE_OS = rnet.ImpersonateOS.Windows

Full list of available profiles: rnet docs.

Proxies

Global proxy (all requests):

import rnet

RNET_PROXIES = [rnet.Proxy.all("http://proxy.example.com:8080")]

Per-request proxy via request.meta['proxy']:

yield scrapy.Request(url, meta={"proxy": "http://user:pass@proxy.example.com:8080"})

Important: Scrapy's built-in HttpProxyMiddleware strips credentials from the proxy URL before the download handler sees the request. Since rnet manages proxy auth internally, you must disable it:

DOWNLOADER_MIDDLEWARES = {
    "scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware": None,
}

How it works

RnetDownloadHandler implements Scrapy's download handler interface (download_request / close). When Scrapy resolves a request for an http or https URL it calls download_request, which:

  1. Translates the Scrapy Request (method, URL, headers, body) into an rnet call
  2. Sends the request through rnet's BoringSSL-backed async client
  3. Converts the rnet response back into the appropriate Scrapy response subclass (HtmlResponse, TextResponse, etc.), preserving status, headers, body, IP address, and HTTP protocol version

A single rnet.Client instance is shared for the spider's lifetime, so connection pooling works as normal.
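Step 3 of the flow above can be illustrated with a simplified, dependency-free sketch of how a handler picks the Scrapy response subclass from the Content-Type header. Scrapy's real dispatch lives in scrapy.responsetypes and also inspects the body and Content-Disposition; this is only an approximation of the idea:

```python
def pick_response_class(content_type: str) -> str:
    """Simplified illustration: map a Content-Type header value to the name
    of the Scrapy response subclass a handler would construct."""
    # Drop parameters like "; charset=utf-8" and normalize
    mime = content_type.split(";")[0].strip().lower()
    if mime == "text/html":
        return "HtmlResponse"
    if mime in ("text/xml", "application/xml"):
        return "XmlResponse"
    if mime.startswith("text/") or mime == "application/json":
        return "TextResponse"
    return "Response"  # binary or unknown payloads fall back to the base class
```

For example, `pick_response_class("text/html; charset=utf-8")` yields `"HtmlResponse"`, while an `application/octet-stream` body falls back to the plain `Response` class.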

Testing

# Unit tests only (no network required)
uv run pytest tests/ -m "not integration"

# All tests including real network calls
uv run pytest tests/

The integration suite includes a fingerprint verification test that hits tls.peet.ws and asserts the TLS/HTTP2 fingerprint matches Chrome 131.
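Selecting tests with `-m "not integration"` assumes the `integration` marker is registered with pytest; a conftest.py fragment along these lines (an assumption about the repo layout, not verified against it) keeps pytest from warning about an unknown marker:

```python
# conftest.py — register the custom marker used by the test suite
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "integration: tests that hit the real network"
    )
```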

Download files

Source Distribution

scrapy_impersonate_rnet-0.1.2.tar.gz (44.2 kB)


Built Distribution


scrapy_impersonate_rnet-0.1.2-py3-none-any.whl (5.9 kB)


File details

Hashes for scrapy_impersonate_rnet-0.1.2.tar.gz:

SHA256       f4631f96a835d76b3708c73b76aa7fb530e57498b010d50ebacdf49e75e5b9cc
MD5          efc372beda3f1d0e59943136cb8b05b2
BLAKE2b-256  dc398adb9b065ecf439256ffb72564f0443f4950633620c5bcc57c5c5dab5322

File details

Hashes for scrapy_impersonate_rnet-0.1.2-py3-none-any.whl:

SHA256       5db9984ebb01d72bae2e294a3134aa43e8bc46a58b02a1aef3f08a2e2b0a19f0
MD5          4b269d04140097beae9633c2dd59fefb
BLAKE2b-256  c6d0102521fcba07a5f0b29d6833fc1257d5e952ac1a4493af2250ae3bb60058
