scrapy-stealth
Stealthy Crawling. Maximum Results.
A pluggable anti-bot and stealth framework for Scrapy.
scrapy-stealth extends Scrapy with browser impersonation, proxy rotation, fingerprint cycling, and intelligent retry strategies – designed for large-scale, production-grade crawling.
🧠 Why scrapy-stealth?
Scrapy is fast and powerful, but modern websites use advanced anti-bot protections such as:
- TLS fingerprinting
- Browser behavior detection
- Rate limiting and IP blocking
scrapy-stealth helps by adding:
- 🧬 Browser-level impersonation (TLS + HTTP/2 fingerprints)
- 🔁 Smarter retry strategies
- 🌐 Proxy and fingerprint rotation
- 🛡️ Anti-bot detection
Result
- Higher success rate
- Lower proxy cost
- More stable crawls
📊 Comparison
| Feature | scrapy-stealth | scrapy-playwright | scrapy-splash | scrapy-selenium | Scrapy (default) |
|---|---|---|---|---|---|
| TLS fingerprint spoofing | ✅ | ❌ | ❌ | ❌ | ❌ |
| HTTP/2 support | ✅ | ✅ | ❌ | ✅ | ❌ |
| Browser impersonation | ✅ | ⚠️ partial | ❌ | ❌ | ❌ |
| Proxy rotation (built-in) | ✅ | ❌ | ❌ | ❌ | ❌ |
| Fingerprint rotation | ✅ | ❌ | ❌ | ❌ | ❌ |
| Anti-bot detection | ✅ | ❌ | ❌ | ❌ | ❌ |
| Smart retry logic | ✅ | ❌ | ❌ | ❌ | ❌ |
| Per-request engine switching | ✅ | ❌ | ❌ | ❌ | ❌ |
| Headless browser required | ❌ | ✅ | ✅ | ✅ | ❌ |
| JavaScript rendering | ❌ | ✅ | ✅ | ✅ | ❌ |
| Native Scrapy integration | ✅ | ✅ | ✅ | ⚠️ partial | ✅ |
| Memory footprint | 🟢 Low | 🔴 High | 🔴 High | 🔴 High | 🟢 Low |
⚠️ scrapy-playwright passes real browser TLS but does not spoof fingerprint profiles like scrapy-stealth does. scrapy-stealth does not render JavaScript – use it for APIs and HTML pages that don't require a full browser.
✨ Features
- 🔌 Pluggable engine system (`scrapy`, `stealth`)
- 🧠 Per-request engine selection via `request.meta`
- 🌐 Proxy support and rotation
- 🧬 Browser fingerprint rotation
- 🔁 Smart retry logic
- 🛡️ Anti-bot detection (status + content-based, Cloudflare, Akamai)
- ⚡ Thread-safe async integration
📦 Installation
```shell
pip install scrapy-stealth
```
Requires Python 3.11+ and Scrapy 2.15+
⚙️ Setup
Option 1 – Global (settings.py)
```python
# 1. Enable the middleware
DOWNLOADER_MIDDLEWARES = {
    "scrapy_stealth.middlewares.stealth.StealthDownloaderMiddleware": 950,
}

# 2. (Optional) Proxy list for automatic rotation
# Used when request.meta["stealth"]["rotate_proxy"] = True
# Supported schemes: http, https, socks4, socks5
# Each entry must include a scheme and port
STEALTH_PROXIES = [
    "http://proxy1:8080",
    "http://proxy2:8080",
    "http://user:pass@proxy3:8080",  # with authentication
    "socks5://proxy4:1080",
]
```
Option 2 – Per-spider (custom_settings)
Configure the middleware and proxies directly on the spider – no changes to settings.py required.
Each spider can have its own independent proxy list.
```python
class MySpider(scrapy.Spider):
    name = "example"

    custom_settings = {
        "DOWNLOADER_MIDDLEWARES": {
            "scrapy_stealth.middlewares.stealth.StealthDownloaderMiddleware": 950,
        },
        "STEALTH_PROXIES": [
            "http://proxy1:8080",
            "http://user:pass@proxy2:8080",
            "socks5://proxy3:1080",
        ],
    }
```
Proxies are validated at startup – an invalid format or unsupported scheme raises `ValueError` immediately.
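The startup check can be pictured with a short sketch; `validate_proxy` and `SUPPORTED_SCHEMES` are hypothetical names used for illustration here, not the library's internals:

```python
from urllib.parse import urlsplit

# Schemes the documentation says are supported
SUPPORTED_SCHEMES = {"http", "https", "socks4", "socks5"}


def validate_proxy(url: str) -> str:
    """Raise ValueError unless the URL has a supported scheme and an explicit port."""
    parts = urlsplit(url)
    if parts.scheme not in SUPPORTED_SCHEMES:
        raise ValueError(f"unsupported proxy scheme: {url!r}")
    if parts.port is None:
        raise ValueError(f"proxy URL must include a port: {url!r}")
    return url


validate_proxy("socks5://proxy4:1080")   # accepted
# validate_proxy("ftp://proxy:21")       # would raise ValueError (scheme)
# validate_proxy("http://proxy1")        # would raise ValueError (no port)
```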
🚀 Quick Start
```python
yield scrapy.Request(
    url="https://example.com",
    meta={"stealth": {}},
)
```
🔧 Global Configuration
Customise package-wide defaults via the shared config instance.
All settings must be applied at module level, before the spider class – the engine client is created at middleware initialisation, so changes inside `start_requests` or `parse` will have no effect.
```python
# myspider.py
import scrapy

from scrapy_stealth.config import config

config.DEFAULT_ENGINE = "stealth"       # "scrapy" (native) or "stealth" (browser impersonation)
config.DEFAULT_PROFILE = "chrome_147"   # browser profile when meta["stealth"]["profile"] is not set
config.DEFAULT_TIMEOUT = 30             # stealth request timeout in seconds
config.STEALTH_DRIVER = "turbo"         # "basic" (default) or "turbo" (deeper TLS fingerprinting)
config.HTTP2 = True                     # False for servers that only support HTTP/1.1
config.BLOCK_CODES |= {407}             # extend blocked status codes (|= keeps defaults)
config.BLOCK_KEYWORDS.append("banned")  # extend blocked body-text patterns


class MySpider(scrapy.Spider):
    name = "example"
    ...
```

```python
# ❌ wrong – too late, the engine client is already created
class MySpider(scrapy.Spider):
    def start_requests(self):
        config.HTTP2 = False  # has no effect
        ...
```
You can also read any value programmatically:
```python
config.get("DEFAULT_ENGINE")          # "scrapy"
config.get("MISSING_KEY", "default")  # "default"
```
| Attribute | Type | Default | Description |
|---|---|---|---|
| `DEFAULT_ENGINE` | `str` | `"scrapy"` | Engine used when the `request.meta["stealth"]` key is absent |
| `DEFAULT_PROFILE` | `str` | `"chrome_147"` | Browser profile used when none is specified |
| `DEFAULT_TIMEOUT` | `int` | `30` | Request timeout in seconds |
| `STEALTH_DRIVER` | `str` | `"basic"` | Default driver for the stealth engine: `"basic"` or `"turbo"` |
| `HTTP2` | `bool` | `True` | HTTP/2 mode; overridable per-request via `meta["stealth"]["http2"]` |
| `BLOCK_CODES` | `frozenset[int]` | `{403, 429, 503}` | HTTP status codes considered blocked |
| `BLOCK_KEYWORDS` | `list[str]` | `["captcha", "access denied", …]` | Body-text patterns considered blocked |
For one-off overrides on a single request, set meta["stealth"]["driver"] or meta["stealth"]["http2"] (see Per-Request Configuration below).
⚙️ Per-Request Configuration
All options are passed via request.meta["stealth"]:
The presence of meta["stealth"] activates the stealth engine. Omit the key entirely to use the default Scrapy engine.
```python
yield scrapy.Request(
    url,
    meta={
        "stealth": {
            "driver": "turbo",
            "profile": "chrome_147",
            "proxy": "http://user:pass@proxy:8080",
            "stealth_timeout": 60,
            "http2": True,
            "rotate_proxy": True,
            "rotate_profile": True,
        }
    },
)
```
| Key | Type | Description |
|---|---|---|
| `driver` | `str` | `"basic"` (default) or `"turbo"` – overrides `config.STEALTH_DRIVER` per-request |
| `profile` | `str` | Browser profile (e.g. `"chrome_147"`, `"safari_ios_18_1_1"`) |
| `proxy` | `str` | Explicit proxy URL |
| `stealth_timeout` | `int` | Per-request timeout in seconds (overrides the default 30 s) |
| `http2` | `bool` | `True` = HTTP/2, `False` = HTTP/1.1 (overrides `config.HTTP2` for this request) |
| `rotate_proxy` | `bool` | Auto-pick a proxy from `STEALTH_PROXIES` |
| `rotate_profile` | `bool` | Auto-pick a random browser profile |
🔄 Automatic Rotation
```python
yield scrapy.Request(
    url,
    meta={
        "stealth": {
            "rotate_proxy": True,
            "rotate_profile": True,
        }
    },
)
```
🧩 Strategies
Proxy Rotation
```python
from scrapy_stealth.strategies.proxy import ProxyRotator

proxy_rotator = ProxyRotator([
    "http://proxy1:8080",
    "http://proxy2:8080",
])

yield scrapy.Request(
    url,
    meta={
        "stealth": {
            "proxy": proxy_rotator.get(),
        }
    },
)
```
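A round-robin rotator of this kind can be written in a few lines; the `RoundRobinRotator` below is a generic illustration of the technique, not the `ProxyRotator` source:

```python
from itertools import cycle


class RoundRobinRotator:
    """Cycle through a fixed pool, returning one item per call to get()."""

    def __init__(self, pool):
        if not pool:
            raise ValueError("pool must not be empty")
        self._it = cycle(pool)

    def get(self):
        return next(self._it)


rotator = RoundRobinRotator(["http://proxy1:8080", "http://proxy2:8080"])
rotator.get()  # "http://proxy1:8080"
rotator.get()  # "http://proxy2:8080"
rotator.get()  # "http://proxy1:8080" (wraps around)
```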
Fingerprint Rotation
```python
from scrapy_stealth.strategies.fingerprint import ProfileRotator

fp = ProfileRotator()

yield scrapy.Request(
    url,
    meta={
        "stealth": {
            "profile": fp.get(),
        }
    },
)
```
Intelligent Retry
```python
from scrapy_stealth.strategies.retry import RetryHandler

retry = RetryHandler()

# inside your spider
def parse(self, response):
    if retry.should_retry(response):
        yield retry.build(response.request)
        return
```
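The decision such a handler makes can be sketched as a pure function; `should_retry` and `MAX_RETRIES` below are assumptions for illustration (mirroring the default `BLOCK_CODES`), not the actual `RetryHandler` implementation:

```python
# Default blocked status codes from the configuration table above
BLOCK_CODES = {403, 429, 503}
MAX_RETRIES = 3  # hypothetical retry budget


def should_retry(status: int, retry_count: int) -> bool:
    """Retry only blocked status codes, and only while budget remains."""
    return status in BLOCK_CODES and retry_count < MAX_RETRIES


should_retry(429, 0)  # True: blocked status, first attempt
should_retry(200, 0)  # False: not a blocked status
should_retry(503, 3)  # False: retry budget exhausted
```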
🛡️ Anti-Bot Detection
```python
from scrapy_stealth.detectors.antibot import AntiBotDetector

detector = AntiBotDetector()

if detector.is_blocked(response):
    print("Blocked!")
```
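Status- and content-based detection boils down to two checks, sketched here with the documented defaults; `is_blocked` below is a simplified illustration, not the `AntiBotDetector` implementation:

```python
# Defaults from the configuration table above
BLOCK_CODES = {403, 429, 503}
BLOCK_KEYWORDS = ["captcha", "access denied"]


def is_blocked(status: int, body_text: str) -> bool:
    """Flag a response as blocked by status code or body-text pattern."""
    if status in BLOCK_CODES:
        return True
    lowered = body_text.lower()
    return any(keyword in lowered for keyword in BLOCK_KEYWORDS)


is_blocked(200, "<h1>Please complete the CAPTCHA</h1>")  # True (keyword match)
is_blocked(403, "")                                      # True (status match)
is_blocked(200, "<h1>Welcome</h1>")                      # False
```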
📝 Example
```python
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"

    def start_requests(self):
        yield scrapy.Request(
            "https://example.com",
            meta={
                "stealth": {
                    "rotate_proxy": True,
                    "rotate_profile": True,
                }
            },
        )

    def parse(self, response):
        yield {
            "title": response.css("title::text").get(),
            "url": response.url,
        }
```
⚡ Performance Insight
Using stealth selectively:
- ⚡ Faster crawling (Scrapy for simple pages)
- 💰 Lower proxy cost
- 🛡️ Better success rate on protected pages
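One way to apply stealth selectively is a small helper that attaches stealth meta only for domains known to need it; `PROTECTED_DOMAINS` and `stealth_meta` are hypothetical names for illustration:

```python
from urllib.parse import urlsplit

# Hypothetical set of domains known to run anti-bot protection
PROTECTED_DOMAINS = {"shop.example.com", "api.example.net"}


def stealth_meta(url: str) -> dict:
    """Return stealth meta for protected domains, empty meta otherwise."""
    host = urlsplit(url).hostname or ""
    if host in PROTECTED_DOMAINS:
        return {"stealth": {"rotate_proxy": True, "rotate_profile": True}}
    return {}  # plain Scrapy engine for everything else


# in a spider: yield scrapy.Request(url, meta=stealth_meta(url))
```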
📝 Changelog
See CHANGELOG.md for a full history of changes, or browse GitHub Releases.
🤝 Contributing
See CONTRIBUTING.md for guidelines on how to contribute.
📄 License
This project is licensed under the MIT License – free to use, modify, and distribute. See LICENSE for the full text.
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file scrapy_stealth-0.4.0.tar.gz.
File metadata
- Download URL: scrapy_stealth-0.4.0.tar.gz
- Upload date:
- Size: 28.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `330e32ae82790a94896db602137b8984ded3ab0a75a1273e39298becf25951b0` |
| MD5 | `aca49e76315470276f52ab5a01a0b514` |
| BLAKE2b-256 | `f77adbcce75b688da38ea811875a3a8447b9a6b627dee6d9804a239a643cf63a` |
Provenance
The following attestation bundles were made for scrapy_stealth-0.4.0.tar.gz:

Publisher: publish.yml on fawadss1/scrapy-stealth

- Statement:
  - Statement type: https://in-toto.io/Statement/v1
  - Predicate type: https://docs.pypi.org/attestations/publish/v1
  - Subject name: scrapy_stealth-0.4.0.tar.gz
  - Subject digest: 330e32ae82790a94896db602137b8984ded3ab0a75a1273e39298becf25951b0
- Sigstore transparency entry: 1449647413
- Sigstore integration time:
- Permalink: fawadss1/scrapy-stealth@1834817e6f23e012fdf18edead326069fab520d0
- Branch / Tag: refs/tags/v0.4.0
- Owner: https://github.com/fawadss1
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@1834817e6f23e012fdf18edead326069fab520d0
- Trigger Event: release
File details
Details for the file scrapy_stealth-0.4.0-py3-none-any.whl.
File metadata
- Download URL: scrapy_stealth-0.4.0-py3-none-any.whl
- Upload date:
- Size: 25.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `d76e5d71f2ca4cf1af85a2a3422437ea6b4056725d8bcf7b0932dd23ada742b2` |
| MD5 | `61b6105c048770bde93743edc64f26b2` |
| BLAKE2b-256 | `59d308fc2fced74f69849253bcbca1ba69157541ca5c94a182b69838c132db83` |
Provenance
The following attestation bundles were made for scrapy_stealth-0.4.0-py3-none-any.whl:

Publisher: publish.yml on fawadss1/scrapy-stealth

- Statement:
  - Statement type: https://in-toto.io/Statement/v1
  - Predicate type: https://docs.pypi.org/attestations/publish/v1
  - Subject name: scrapy_stealth-0.4.0-py3-none-any.whl
  - Subject digest: d76e5d71f2ca4cf1af85a2a3422437ea6b4056725d8bcf7b0932dd23ada742b2
- Sigstore transparency entry: 1449647416
- Sigstore integration time:
- Permalink: fawadss1/scrapy-stealth@1834817e6f23e012fdf18edead326069fab520d0
- Branch / Tag: refs/tags/v0.4.0
- Owner: https://github.com/fawadss1
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@1834817e6f23e012fdf18edead326069fab520d0
- Trigger Event: release