
scrapy-rnet

A Scrapy download handler that routes all HTTP/HTTPS requests through rnet, giving your spiders browser-grade TLS and HTTP/2 fingerprints via BoringSSL impersonation.

Without it, Scrapy sends requests through its default Twisted-based HTTP client, whose TLS and HTTP/2 fingerprint is trivially identifiable as a bot. With scrapy-rnet, requests are indistinguishable from a real Chrome, Firefox, or Safari browser at the TLS and HTTP/2 layer.

Requirements

  • Python 3.13+
  • Scrapy 2.14+
  • rnet 2.4+

Installation

uv add scrapy-rnet
# or
pip install scrapy-rnet

Setup

Add to your Scrapy settings.py:

TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

DOWNLOAD_HANDLERS = {
    "http":  "scrapy_rnet.RnetDownloadHandler",
    "https": "scrapy_rnet.RnetDownloadHandler",
}

That's it. All requests will now go through rnet, impersonating Chrome 131 by default.

Configuration

Setting                Type                Default    Description
RNET_IMPERSONATE       rnet.Impersonate    Chrome131  Browser profile to impersonate
RNET_IMPERSONATE_OS    rnet.ImpersonateOS  None       OS to pair with the browser profile
RNET_TIMEOUT           int                 30         Request timeout in seconds
RNET_FOLLOW_REDIRECTS  bool                False      Let rnet follow redirects (disables Scrapy's RedirectMiddleware for these requests)
RNET_VERIFY_SSL        bool                True       Verify TLS certificates
RNET_PROXIES           list[rnet.Proxy]    None       Proxy list; takes precedence over Scrapy's proxy settings
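Putting several of these together, a settings.py override might look like this (the values below are illustrative choices, not recommendations from the project):

```python
# settings.py -- illustrative values; pick what matches your target sites
import rnet

RNET_IMPERSONATE = rnet.Impersonate.Firefox133     # browser profile
RNET_IMPERSONATE_OS = rnet.ImpersonateOS.Windows   # pair an OS fingerprint with it
RNET_TIMEOUT = 60                                  # seconds
RNET_VERIFY_SSL = True                             # keep certificate checks on
```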

Choosing a browser profile

import rnet

# Chrome (default)
RNET_IMPERSONATE = rnet.Impersonate.Chrome131

# Firefox
RNET_IMPERSONATE = rnet.Impersonate.Firefox133

# Safari
RNET_IMPERSONATE = rnet.Impersonate.Safari18

# Pair with a specific OS fingerprint
RNET_IMPERSONATE    = rnet.Impersonate.Chrome131
RNET_IMPERSONATE_OS = rnet.ImpersonateOS.Windows

Full list of available profiles: rnet docs.

Proxies

Global proxy (all requests):

RNET_PROXIES = ["http://user:pass@proxy.example.com:8080"]

Per-request proxy via request.meta['proxy']:

yield scrapy.Request(url, meta={"proxy": "http://user:pass@proxy.example.com:8080"})

Important: Scrapy's built-in HttpProxyMiddleware strips credentials from the proxy URL before the download handler sees the request. Since rnet manages proxy auth internally, you must disable it:

DOWNLOADER_MIDDLEWARES = {
    "scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware": None,
}
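To see why disabling it matters, here is a simplified model of what HttpProxyMiddleware does to a credentialed proxy URL (a sketch, not Scrapy's actual code): the user:pass part is moved out of the URL into a Proxy-Authorization header value, so a handler that reads the URL alone never sees the credentials.

```python
import base64
from urllib.parse import urlparse, urlunparse

def strip_proxy_credentials(proxy_url):
    """Simplified model of Scrapy's HttpProxyMiddleware: move user:pass
    out of the proxy URL into a Proxy-Authorization header value."""
    parsed = urlparse(proxy_url)
    creds = None
    if parsed.username:
        token = f"{parsed.username}:{parsed.password or ''}".encode()
        creds = b"Basic " + base64.b64encode(token)
        netloc = parsed.hostname
        if parsed.port:
            netloc += f":{parsed.port}"
        parsed = parsed._replace(netloc=netloc)
    return urlunparse(parsed), creds

url, auth = strip_proxy_credentials("http://user:pass@proxy.example.com:8080")
print(url)  # http://proxy.example.com:8080 -- credentials are gone
```

Because rnet performs proxy authentication itself from the full URL, the stripped version would silently fail to authenticate.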

How it works

RnetDownloadHandler implements Scrapy's download handler interface (download_request / close). When Scrapy resolves a request for an http or https URL it calls download_request, which:

  1. Translates the Scrapy Request (method, URL, headers, body) into an rnet call
  2. Sends the request through rnet's BoringSSL-backed async client
  3. Converts the rnet response back into the appropriate Scrapy response subclass (HtmlResponse, TextResponse, etc.), preserving status, headers, body, IP address, and HTTP protocol version

A single rnet.Client instance is shared for the spider's lifetime, so connection pooling works as normal.
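Step 1's translation can be illustrated with a small sketch. Scrapy stores headers as bytes keys mapped to lists of bytes values, while a plain HTTP client call wants a flat {str: str} mapping; to_rnet_headers below is a hypothetical helper, not the handler's actual code:

```python
def to_rnet_headers(scrapy_headers):
    """Flatten Scrapy's {bytes: [bytes, ...]} header mapping into the
    {str: str} form a plain HTTP client call expects. Multi-valued
    headers are joined with commas, per RFC 9110 field-line folding."""
    out = {}
    for name, values in scrapy_headers.items():
        key = name.decode("latin-1")
        out[key] = ", ".join(v.decode("latin-1") for v in values)
    return out

print(to_rnet_headers({
    b"Accept": [b"text/html"],
    b"Accept-Encoding": [b"gzip", b"br"],
}))  # → {'Accept': 'text/html', 'Accept-Encoding': 'gzip, br'}
```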

Testing

# Unit tests only (no network required)
uv run pytest tests/ -m "not integration"

# All tests including real network calls
uv run pytest tests/

The integration suite includes a fingerprint verification test that hits tls.peet.ws and asserts the TLS/HTTP2 fingerprint matches Chrome 131.

Download files

Source distribution:
scrapy_impersonate_rnet-0.1.3.tar.gz (45.2 kB)

Built distribution:
scrapy_impersonate_rnet-0.1.3-py3-none-any.whl (6.1 kB)

File hashes for scrapy_impersonate_rnet-0.1.3.tar.gz

Algorithm    Hash digest
SHA256       7ce81d581198141ca06ff0d1e9995765349e559440570ee01705a1c14636765c
MD5          dd38bbe365ece41076ffed36975564d6
BLAKE2b-256  084b2cc5980da135ed7d4d7a24890577727049c66729bc41698e8bad82f3eb8f

File hashes for scrapy_impersonate_rnet-0.1.3-py3-none-any.whl

Algorithm    Hash digest
SHA256       600afba5e5cd9eefe2926da73714b3f5dd710e4fad2165a2e79fbf2ca6ddce19
MD5          f5e0a9df0a9f54dca4466f8ed3d94fff
BLAKE2b-256  bfb7ed3a6304b5a959e02623408f1d64c1d73f7b84601f668857f7ab9d6935bf
