
Scrapy-Redis


Redis-based components for Scrapy.

Features

  • Distributed crawling/scraping

    You can start multiple spider instances that share a single Redis queue. Best suited for broad multi-domain crawls.

  • Distributed post-processing

    Scraped items get pushed into a Redis queue, meaning that you can start as many post-processing processes as needed, all sharing the items queue.

  • Scrapy plug-and-play components

    Scheduler + Duplication Filter, Item Pipeline, Base Spiders.

Requirements

  • Python 2.7, 3.4 or 3.5

  • Redis >= 2.8

  • Scrapy >= 1.1

  • redis-py >= 2.10

Usage

Use the following settings in your project:

# Enables scheduling, storing the requests queue in Redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Ensure all spiders share the same duplicates filter through Redis.
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Enables a stats collector that shares stats across spider instances via Redis.
STATS_CLASS = "scrapy_redis.stats.RedisStatsCollector"

# Default requests serializer is pickle, but it can be changed to any module
# with loads and dumps functions. Note that pickle is not compatible between
# python versions.
# Caveat: In python 3.x, the serializer must return string keys and support
# bytes as values. For this reason the json or msgpack modules will not
# work by default. In python 2.x there is no such issue and you can use
# 'json' or 'msgpack' as serializers.
#SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat"

# Don't clean up Redis queues; this allows pausing and resuming crawls.
#SCHEDULER_PERSIST = True

# Schedule requests using a priority queue. (default)
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'

# Alternative queues.
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.FifoQueue'
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.LifoQueue'

# Max idle time to prevent the spider from being closed during distributed crawling.
# This only works if the queue class is SpiderQueue or SpiderStack,
# and it may also block for the same amount of time when the spider starts for the first time (because the queue is empty).
#SCHEDULER_IDLE_BEFORE_CLOSE = 10

# Maximum idle time before closing the spider.
# When the number of idle seconds is greater than MAX_IDLE_TIME_BEFORE_CLOSE, the crawler will close.
# If 0, the crawler waits forever (DontClose) for the next request.
# If negative, the crawler closes immediately when the queue is empty, just like vanilla Scrapy.
#MAX_IDLE_TIME_BEFORE_CLOSE = 0

# Store scraped item in redis for post-processing.
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300
}

# The item pipeline serializes and stores the items in this redis key.
#REDIS_ITEMS_KEY = '%(spider)s:items'

# The items serializer is by default ScrapyJSONEncoder. You can use any
# importable path to a callable object.
#REDIS_ITEMS_SERIALIZER = 'json.dumps'

# Specify the host and port to use when connecting to Redis (optional).
#REDIS_HOST = 'localhost'
#REDIS_PORT = 6379

# Specify the full Redis URL for connecting (optional).
# If set, this takes precedence over the REDIS_HOST and REDIS_PORT settings.
#REDIS_URL = 'redis://user:pass@hostname:9001'

# Custom redis client parameters (e.g. socket timeout, etc.)
#REDIS_PARAMS = {}
# Use custom redis client class.
#REDIS_PARAMS['redis_cls'] = 'myproject.RedisClient'

# If True, it uses redis' ``SPOP`` operation. You have to use the ``SADD``
# command to add URLs to the redis queue. This could be useful if you
# want to avoid duplicates in your start urls list and the order of
# processing does not matter.
#REDIS_START_URLS_AS_SET = False

# If True, it uses redis ``zrevrange`` and ``zremrangebyrank`` operations. You have to use the ``zadd``
# command to add URLs and scores to the redis queue. This could be useful if you
# want to use priority and avoid duplicates in your start urls list.
#REDIS_START_URLS_AS_ZSET = False

# Default start urls key for RedisSpider and RedisCrawlSpider.
#REDIS_START_URLS_KEY = '%(name)s:start_urls'

# Use an encoding other than utf-8 for redis.
#REDIS_ENCODING = 'latin1'
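
As a concrete illustration of the REDIS_PARAMS setting above, here is a minimal sketch of passing extra connection parameters to the redis-py client from your project settings. The parameter names are standard redis-py client arguments; the values shown are only examples, not recommendations:

# Extra keyword arguments forwarded to the redis-py client.
# socket_timeout, retry_on_timeout and db are standard redis-py
# connection parameters; adjust the values to your deployment.
REDIS_PARAMS = {
    'socket_timeout': 30,
    'retry_on_timeout': True,
    'db': 0,
}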

Running the example project

This example illustrates how to share a spider’s requests queue across multiple spider instances, which is highly suitable for broad crawls.

  1. Set up the scrapy_redis package in your PYTHONPATH

  2. Run the crawler for the first time, then stop it:

    $ cd example-project
    $ scrapy crawl dmoz
    ... [dmoz] ...
    ^C
  3. Run the crawler again to resume the stopped crawl:

    $ scrapy crawl dmoz
    ... [dmoz] DEBUG: Resuming crawl (9019 requests scheduled)
  4. Start one or more additional scrapy crawlers:

    $ scrapy crawl dmoz
    ... [dmoz] DEBUG: Resuming crawl (8712 requests scheduled)
  5. Start one or more post-processing workers:

    $ python process_items.py dmoz:items -v
    ...
    Processing: Kilani Giftware (http://www.dmoz.org/Computers/Shopping/Gifts/)
    Processing: NinjaGizmos.com (http://www.dmoz.org/Computers/Shopping/Gifts/)
    ...
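
The worker in the last step reads serialized items from the Redis list that RedisPipeline populates. For reference, here is a minimal sketch of what such a consumer can look like; it is not the bundled process_items.py, and the key name, connection defaults, and item field names are assumptions for illustration:

import json

import redis

r = redis.Redis(host='localhost', port=6379)

while True:
    # BLPOP blocks until a serialized item is available on the list.
    key, data = r.blpop('dmoz:items')
    item = json.loads(data)
    print('Processing:', item.get('name'), item.get('link'))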

Feeding a Spider from Redis

The class scrapy_redis.spiders.RedisSpider enables a spider to read URLs from Redis. The URLs in the Redis queue are processed one after another; if the first request yields more requests, the spider processes those before fetching another URL from Redis.

For example, create a file myspider.py with the code below:

from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    name = 'myspider'

    def parse(self, response):
        # do stuff
        pass

Then:

  1. run the spider:

    scrapy runspider myspider.py
  2. push urls to redis:

    redis-cli lpush myspider:start_urls http://google.com
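
The same push can also be done from Python with the redis-py client. This is only a sketch, assuming a Redis server on the default localhost:6379 and the default start URLs key:

import redis

r = redis.Redis()  # assumes Redis on localhost:6379

# The key must match REDIS_START_URLS_KEY (default: '<spider name>:start_urls').
r.lpush('myspider:start_urls', 'http://google.com')

Note that if you enable REDIS_START_URLS_AS_SET or REDIS_START_URLS_AS_ZSET, URLs must be added with SADD or ZADD respectively instead of LPUSH.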

Contributions

Donate BTC: 13haqimDV7HbGWtz7uC6wP1zvsRWRAhPmF

Donate BCC: CSogMjdfPZnKf1p5ocu3gLR54Pa8M42zZM

Donate ETH: 0x681d9c8a2a3ff0b612ab76564e7dca3f2ccc1c0d

Donate LTC: LaPHpNS1Lns3rhZSvvkauWGDfCmDLKT8vP

History

0.7.1 (2021-03-27)

  • Fixes datetime parse error for redis-py 3.x.

  • Add support for stats extensions.

0.7.1-rc1 (2021-03-27)

  • Fixes datetime parse error for redis-py 3.x.

0.7.1-b1 (2021-03-22)

  • Add support for stats extensions.

0.7.0-dev (unreleased)

  • Unreleased.

0.6.8 (2017-02-14)

  • Fixed automated release due to a non-matching registered email.

0.6.7 (2016-12-27)

  • Fixes bad formatting in logging message.

0.6.6 (2016-12-20)

  • Fixes wrong message on dupefilter duplicates.

0.6.5 (2016-12-19)

  • Fixed typo in default settings.

0.6.4 (2016-12-18)

  • Fixed data decoding in Python 3.x.

  • Added REDIS_ENCODING setting (default utf-8).

  • Default to CONCURRENT_REQUESTS value for REDIS_START_URLS_BATCH_SIZE.

  • Renamed queue classes to a proper naming convention (backwards compatible).

0.6.3 (2016-07-03)

  • Added REDIS_START_URLS_KEY setting.

  • Fixed spider method from_crawler signature.

0.6.2 (2016-06-26)

  • Support redis_cls parameter in REDIS_PARAMS setting.

  • Python 3.x compatibility fixed.

  • Added SCHEDULER_SERIALIZER setting.

0.6.1 (2016-06-25)

  • Backwards incompatible change: Require explicit DUPEFILTER_CLASS setting.

  • Added SCHEDULER_FLUSH_ON_START setting.

  • Added REDIS_START_URLS_AS_SET setting.

  • Added REDIS_ITEMS_KEY setting.

  • Added REDIS_ITEMS_SERIALIZER setting.

  • Added REDIS_PARAMS setting.

  • Added REDIS_START_URLS_BATCH_SIZE spider attribute to read start urls in batches.

  • Added RedisCrawlSpider.

0.6.0 (2015-07-05)

  • Updated code to be compatible with Scrapy 1.0.

  • Added -a domain=… option for example spiders.

0.5.0 (2013-09-02)

  • Added REDIS_URL setting to support Redis connection string.

  • Added SCHEDULER_IDLE_BEFORE_CLOSE setting to prevent the spider closing too quickly when the queue is empty. Default value is zero, keeping the previous behavior.

  • Preemptively schedule requests when an item is scraped.

  • This version is the latest release compatible with Scrapy 0.24.x.

0.4.0 (2013-04-19)

  • Added RedisSpider and RedisMixin classes as building blocks for spiders to be fed through a redis queue.

  • Added redis queue stats.

  • Let the encoder handle the item as it comes instead of converting it to a dict.

0.3.0 (2013-02-18)

  • Added support for different queue classes.

  • Changed requests serialization from marshal to cPickle.

0.2.0 (2013-02-17)

  • Improved backward compatibility.

  • Added example project.

0.1.0 (2011-09-01)

  • First release on PyPI.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

deepctrl-scrapy-redis-0.8.5.tar.gz (41.8 kB)

Uploaded: Source

Built Distribution

deepctrl_scrapy_redis-0.8.5-py2.py3-none-any.whl (20.1 kB)

Uploaded: Python 2, Python 3

File details

Details for the file deepctrl-scrapy-redis-0.8.5.tar.gz.

File metadata

  • Download URL: deepctrl-scrapy-redis-0.8.5.tar.gz
  • Size: 41.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.6.1 pkginfo/1.7.0 requests/2.25.0 requests-toolbelt/0.9.1 tqdm/4.54.1 CPython/3.9.1

File hashes

Hashes for deepctrl-scrapy-redis-0.8.5.tar.gz

  • SHA256: c0510a3e5cecb8e23eaba440eb7c55329623a2fdf1e23ad72a7a4a046a770518

  • MD5: 85b6b3c698a997f282501b8df11c60c6

  • BLAKE2b-256: b64c772b81deb0f54a70340b46d7443b5b96f071ce386bed65605ae720c648de


File details

Details for the file deepctrl_scrapy_redis-0.8.5-py2.py3-none-any.whl.

File metadata

  • Download URL: deepctrl_scrapy_redis-0.8.5-py2.py3-none-any.whl
  • Size: 20.1 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.6.1 pkginfo/1.7.0 requests/2.25.0 requests-toolbelt/0.9.1 tqdm/4.54.1 CPython/3.9.1

File hashes

Hashes for deepctrl_scrapy_redis-0.8.5-py2.py3-none-any.whl

  • SHA256: ff11da7fe2c8bf75ee542e78dccd9d991eb84ca669b5319d166b9611177bf347

  • MD5: 87eb08176115d5d39e80789883cf3dae

  • BLAKE2b-256: b99c772d59ac15e4e44a0f611f83b743f957fba72aaff6cf0866f53491c403ed

