Rotating proxies for Scrapy

scrapy-rotating-proxies

This package provides a Scrapy middleware that rotates requests through a pool of proxies, checks that the proxies are alive, and adjusts crawling speed accordingly.

License is MIT.

Installation

pip install scrapy-rotating-proxies

Usage

Add ROTATING_PROXY_LIST option with a list of proxies to settings.py:

ROTATING_PROXY_LIST = [
   'proxy1.com:8000',
   'proxy2.com:8031',
   # ...
]

You can load it from file if needed:

def load_lines(path):
   with open(path, 'rb') as f:
      return [line.strip() for line in
              f.read().decode('utf8').splitlines()
              if line.strip()]

ROTATING_PROXY_LIST = load_lines('/my/path/proxies.txt')

Then add rotating_proxies middlewares to your DOWNLOADER_MIDDLEWARES:

DOWNLOADER_MIDDLEWARES = {
   # ...
   'rotating_proxies.middlewares.RotatingProxyMiddleware': 610,
   'rotating_proxies.middlewares.BanDetectionMiddleware': 620,
   # ...
}

Concurrency

By default, all standard Scrapy concurrency options (DOWNLOAD_DELAY, AUTOTHROTTLE_..., CONCURRENT_REQUESTS_PER_DOMAIN, etc.) become per-proxy for proxied requests when RotatingProxyMiddleware is enabled. For example, if you set CONCURRENT_REQUESTS_PER_DOMAIN=2, the spider will make at most 2 concurrent connections to each proxy, regardless of the request's URL domain.
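In concrete terms, a settings.py fragment like the following (values illustrative) limits load per proxy rather than per target site:

```python
# settings.py -- illustrative values; with RotatingProxyMiddleware enabled,
# these limits apply per proxy rather than per target domain.
CONCURRENT_REQUESTS_PER_DOMAIN = 2   # at most 2 concurrent connections through each proxy
DOWNLOAD_DELAY = 0.5                 # at least 0.5 s between requests through the same proxy
```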

Customization

scrapy-rotating-proxies keeps track of working and non-working proxies, and re-checks non-working from time to time.

Detection of a non-working proxy is site-specific. By default, scrapy-rotating-proxies uses a simple heuristic: if the response status code is not 200, the response body is empty, or there was an exception, the proxy is considered dead. To customize this with site-specific rules, define response_is_ban and/or exception_is_ban spider methods:

class MySpider(scrapy.Spider):
   # ...

   def response_is_ban(self, request, response):
      return b'banned' in response.body

   def exception_is_ban(self, request, exception):
      return None

It is important to get these rules right because the action for a failed request and for a bad proxy should differ: if the proxy is to blame, it makes sense to retry the request with a different proxy.

Non-working proxies can become alive again after some time. scrapy-rotating-proxies uses a randomized exponential backoff for these checks - the first check happens soon, and if the proxy still fails, the next check is delayed further, and so on. Use ROTATING_PROXY_BACKOFF_BASE to adjust the initial delay (by default it is random, from 0 to 5 minutes).
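A minimal sketch of how such a randomized exponential backoff schedule can be computed - the function name, the jitter strategy, and the cap are illustrative assumptions, not the package's actual internals:

```python
import random

def backoff_delay(attempt, base=300, cap=3600):
    """Hypothetical sketch: delay in seconds before re-checking a dead proxy.

    Full jitter: a random delay between 0 and base * 2**attempt,
    capped so repeated failures don't postpone checks indefinitely.
    """
    return random.uniform(0, min(cap, base * 2 ** attempt))

# The first check happens soon; later checks are delayed further:
# attempt 0 -> up to 5 min, attempt 1 -> up to 10 min, attempt 2 -> up to 20 min, ...
```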

Settings

  • ROTATING_PROXY_LIST - a list of proxies to choose from;

  • ROTATING_PROXY_LOGSTATS_INTERVAL - stats logging interval in seconds, 30 by default;

  • ROTATING_PROXY_CLOSE_SPIDER - when True, the spider is stopped if there are no alive proxies. If False (default), then when there are no alive proxies all dead proxies are re-checked.

  • ROTATING_PROXY_PAGE_RETRY_TIMES - the number of times to retry downloading a page using a different proxy. After this number of retries, the failure is considered a page failure, not a proxy failure. Think of it this way: every improperly detected ban costs you ROTATING_PROXY_PAGE_RETRY_TIMES alive proxies. Default: 5.

  • ROTATING_PROXY_BACKOFF_BASE - base backoff time, in seconds. Default is 300 (i.e. 5 min).
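Putting the options together, a settings.py fragment might look like this (proxy addresses and values are illustrative):

```python
# settings.py -- illustrative values for the options listed above
ROTATING_PROXY_LIST = ['proxy1.com:8000', 'proxy2.com:8031']
ROTATING_PROXY_LOGSTATS_INTERVAL = 30    # log proxy stats every 30 s
ROTATING_PROXY_CLOSE_SPIDER = False      # re-check dead proxies instead of stopping
ROTATING_PROXY_PAGE_RETRY_TIMES = 5      # retries with a different proxy per page
ROTATING_PROXY_BACKOFF_BASE = 300        # base re-check delay, in seconds
```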

FAQ

Q: Where to get proxy lists? How to write and maintain ban rules?

A: It is up to you to find proxies and maintain proper ban rules for websites; scrapy-rotating-proxies doesn't have anything built in. There are commercial proxy services like https://crawlera.com/ which can integrate with Scrapy (see https://github.com/scrapy-plugins/scrapy-crawlera) and take care of these details.

Contributing

To run tests, install tox and run tox from the source checkout.

CHANGES

0.2.2 (2017-03-01)

  • Update default ban detection rules: scrapy.exceptions.IgnoreRequest is not a ban.

0.2.1 (2017-02-08)

  • changed ROTATING_PROXY_PAGE_RETRY_TIMES default value - it is now 5.

0.2 (2017-02-07)

  • improved default ban detection rules;

  • log ban stats.

0.1 (2017-02-01)

Initial release

Download files

Download the file for your platform.

Source Distribution

scrapy-rotating-proxies-0.2.2.tar.gz (8.3 kB)

Built Distribution

scrapy_rotating_proxies-0.2.2-py2.py3-none-any.whl (11.4 kB)

File details

Details for the file scrapy-rotating-proxies-0.2.2.tar.gz.

File metadata

File hashes

Hashes for scrapy-rotating-proxies-0.2.2.tar.gz
SHA256: 3e9e906da907ecaf25d666e6d41663f2433673cec1928c47b5c60a6f749620b4
MD5: 65ac9644d5501cb1179583195f4c006a
BLAKE2b-256: 7440990397b46714ed0cd2b43eccb38e389b12f0dd41cc174ae6b7b7c6420690

File details

Details for the file scrapy_rotating_proxies-0.2.2-py2.py3-none-any.whl.

File metadata

File hashes

Hashes for scrapy_rotating_proxies-0.2.2-py2.py3-none-any.whl
SHA256: fe549b442661de2b9086f1bb115bddbb9b52679502a6003de441dd8fdd5c9897
MD5: 4bbff48b0652d1bba48d263224795338
BLAKE2b-256: 6d470fc583c6cf9683170cb5cdf92742e5a3feac12c09c8adfc617ef86320b98
