Rotating proxies for Scrapy
scrapy-rotating-proxies
This package provides a Scrapy middleware to use rotating proxies, check that they are alive and adjust crawling speed.
License is MIT.
Installation
pip install scrapy-rotating-proxies
Usage
Add the ROTATING_PROXY_LIST option with a list of proxies to settings.py:
    ROTATING_PROXY_LIST = [
        'proxy1.com:8000',
        'proxy2.com:8031',
        # ...
    ]
You can load it from a file if needed:
    def load_lines(path):
        with open(path, 'rb') as f:
            return [
                line.strip()
                for line in f.read().decode('utf8').splitlines()
                if line.strip()
            ]

    ROTATING_PROXY_LIST = load_lines('/my/path/proxies.txt')
Then add rotating_proxies middlewares to your DOWNLOADER_MIDDLEWARES:
    DOWNLOADER_MIDDLEWARES = {
        # ...
        'rotating_proxies.middlewares.RotatingProxyMiddleware': 610,
        'rotating_proxies.middlewares.BanDetectionMiddleware': 620,
        # ...
    }
Concurrency
By default, all default Scrapy concurrency options (DOWNLOAD_DELAY, AUTOTHROTTLE_..., CONCURRENT_REQUESTS_PER_DOMAIN, etc.) become per-proxy for proxied requests when RotatingProxyMiddleware is enabled. For example, if you set CONCURRENT_REQUESTS_PER_DOMAIN=2, the spider will make at most 2 concurrent connections to each proxy, regardless of the request URL's domain.
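For illustration, a settings.py fragment like the following (the numbers are example values, not recommendations) throttles the crawl per proxy rather than per target website:

    # settings.py -- example values; tune them for your own crawl
    CONCURRENT_REQUESTS_PER_DOMAIN = 2   # at most 2 concurrent requests through each proxy
    DOWNLOAD_DELAY = 0.5                 # delay is applied per proxy, not per target domain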
Customization
scrapy-rotating-proxies keeps track of working and non-working proxies, and re-checks the non-working ones from time to time.
Detection of a non-working proxy is site-specific. By default, scrapy-rotating-proxies uses a simple heuristic: if the response status code is not 200, the response body is empty, or there was an exception, then the proxy is considered dead. To customize this with site-specific rules, define response_is_ban and/or exception_is_ban spider methods:
    class MySpider(scrapy.Spider):
        # ...

        def response_is_ban(self, request, response):
            return b'banned' in response.body

        def exception_is_ban(self, request, exception):
            return None
It is important to get these rules right, because the action taken for a failed page and for a bad proxy should be different: if the proxy is to blame, it makes sense to retry the request with a different proxy.
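As a sketch, more specific rules might look like this (the status codes and the 'captcha' marker are hypothetical examples; adjust them to the site you crawl):

    class MySpider(scrapy.Spider):
        # ...

        def response_is_ban(self, request, response):
            # Hypothetical rules: treat explicit block responses and
            # captcha pages as bans, anything else as a normal response.
            if response.status in (403, 429):
                return True
            if b'captcha' in response.body.lower():
                return True
            return False

        def exception_is_ban(self, request, exception):
            # Same as the basic example above: no custom exception rule.
            return None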
Non-working proxies can become alive again after some time. scrapy-rotating-proxies uses a randomized exponential backoff for these checks: the first check happens soon, and if the proxy still fails, the next check is delayed further, and so on. Use ROTATING_PROXY_BACKOFF_BASE to adjust the initial delay (by default it is random, from 0 to 5 minutes).
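The exact backoff formula is internal to the package, but the behaviour can be pictured with a small sketch of randomized ("full jitter") exponential backoff, using ROTATING_PROXY_BACKOFF_BASE as the base delay (the cap value below is purely illustrative):

    import random

    def recheck_delay(attempt, base=300, cap=3600):
        # Sketch only, not the package's exact formula: the delay grows
        # exponentially with each failed re-check, and the random factor
        # spreads checks out in time. With base=300 the first delay falls
        # somewhere between 0 and 5 minutes.
        return random.uniform(0, min(cap, base * 2 ** attempt))

    # recheck_delay(0) -> up to 5 min, recheck_delay(1) -> up to 10 min, ...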
Settings
ROTATING_PROXY_LIST - a list of proxies to choose from;
ROTATING_PROXY_LOGSTATS_INTERVAL - stats logging interval in seconds, 30 by default;
ROTATING_PROXY_CLOSE_SPIDER - when True, the spider is stopped if there are no alive proxies left; when False (default), all dead proxies are re-checked once no alive proxies remain;
ROTATING_PROXY_PAGE_RETRY_TIMES - the number of times to retry downloading a page using a different proxy. After this many retries the failure is considered a page failure, not a proxy failure. Default: 15;
ROTATING_PROXY_BACKOFF_BASE - base backoff time, in seconds. Default is 300 (i.e. 5 min).
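Putting it together, a settings.py fragment with these options spelled out (the values shown are the documented defaults, apart from the proxy list itself):

    # settings.py
    ROTATING_PROXY_LIST = ['proxy1.com:8000', 'proxy2.com:8031']
    ROTATING_PROXY_LOGSTATS_INTERVAL = 30   # log proxy stats every 30 seconds
    ROTATING_PROXY_CLOSE_SPIDER = False     # re-check dead proxies instead of stopping the spider
    ROTATING_PROXY_PAGE_RETRY_TIMES = 15    # retries with other proxies before a page is marked failed
    ROTATING_PROXY_BACKOFF_BASE = 300       # base re-check delay, in seconds (5 minutes)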
Contributing
source code: https://github.com/TeamHG-Memex/scrapy-rotating-proxies
bug tracker: https://github.com/TeamHG-Memex/scrapy-rotating-proxies/issues
To run tests, install tox and run tox from the source checkout.
CHANGES
0.2 (2016-02-07)
improved default ban detection rules;
log ban stats.
0.1 (2016-02-01)
Initial release