
Redis-based components for Scrapy


This is initial work on Scrapy-Redis integration and is not yet production-tested. Use it at your own risk!


Features:

  • Distributed crawling/scraping
  • Distributed post-processing


Requirements:

  • Scrapy >= 0.13 (development version)
  • redis-py (tested on 2.4.9)
  • redis server (tested on 2.2-2.4)

Available Scrapy components:

  • Scheduler
  • Duplication Filter
  • Item Pipeline
  • Base Spider


Installation

From pypi:

$ pip install scrapy-redis

From github:

$ git clone
$ cd scrapy-redis
$ python setup.py install


Usage

Enable the components in your settings.py:

# enables scheduling storing requests queue in redis
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# don't clean up redis queues, allows pausing/resuming crawls
SCHEDULER_PERSIST = True
# Schedule requests using a priority queue. (default)
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'

# Schedule requests using a queue (FIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderQueue'

# Schedule requests using a stack (LIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderStack'
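The practical difference between the queue and the stack is only which end scheduled requests are popped from. A pure-Python illustration of the ordering (not scrapy-redis code; req1..req3 are placeholder requests):

```python
from collections import deque

pushed = ["req1", "req2", "req3"]  # order in which requests were scheduled

# FIFO (SpiderQueue-like): pop from the opposite end, oldest request first
fifo = deque(pushed)
first_from_queue = fifo.popleft()  # -> "req1"

# LIFO (SpiderStack-like): pop from the same end, newest request first
lifo = list(pushed)
first_from_stack = lifo.pop()      # -> "req3"

print(first_from_queue, first_from_stack)
```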

# store scraped items in redis for post-processing
ITEM_PIPELINES = ['scrapy_redis.pipelines.RedisPipeline']


Version 0.3 changed the request serialization from marshal to cPickle, so requests persisted with version 0.2 will not work with 0.3.
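The incompatibility is at the byte-format level: data written with marshal cannot be read back with pickle. A minimal illustration (not scrapy-redis code; the dict stands in for serialized request data):

```python
import marshal
import pickle

# request data as versions <= 0.2 would have persisted it (marshal)
request = {"url": "http://example.com/page", "priority": 0}
blob = marshal.dumps(request)

# version 0.3+ reads queues back with pickle, which cannot parse marshal bytes
try:
    pickle.loads(blob)
    compatible = True
except Exception:
    compatible = False

print("old queue readable by 0.3+:", compatible)
```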

Running the example project

This example illustrates how to share a spider’s requests queue across multiple spider instances, which is especially useful for broad crawls.

  1. Set up the scrapy_redis package in your PYTHONPATH

  2. Run the crawler for the first time, then stop it:

    $ cd example-project
    $ scrapy crawl dmoz
    ... [dmoz] ...
  3. Run the crawler again to resume the stopped crawl:

    $ scrapy crawl dmoz
    ... [dmoz] DEBUG: Resuming crawl (9019 requests scheduled)
  4. Start one or more additional scrapy crawlers:

    $ scrapy crawl dmoz
    ... [dmoz] DEBUG: Resuming crawl (8712 requests scheduled)
  5. Start one or more post-processing workers:

    $ python
    Processing: Kilani Giftware (
    Processing: (
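A post-processing worker of the kind step 5 refers to can be sketched roughly as below. This is a hedged sketch, not the example project's actual script: the key name dmoz:items, the JSON item encoding, and the blpop return shape (key, value) are assumptions based on how the item pipeline is described.

```python
import json

def process_one(client, key="dmoz:items"):
    """Pop one serialized item from a redis list and decode it.

    `client` only needs a blpop(key) method returning (key, raw) or None,
    which matches redis-py's Redis.blpop; any stub with that shape works.
    """
    popped = client.blpop(key)
    if popped is None:
        return None
    _, raw = popped
    item = json.loads(raw)
    print("Processing: %s (%s)" % (item.get("name"), item.get("url")))
    return item
```

Against a real server this would run in a loop, e.g. `process_one(redis.Redis())` with redis-py.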

Feeding a Spider from Redis

The class scrapy_redis.spiders.RedisSpider enables a spider to read urls from redis. The urls in the redis queue are processed one after another; if the first request yields more requests, the spider processes those before fetching another url from redis.

For example, create a file with the code below:

from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    name = 'myspider'

    def parse(self, response):
        # do stuff
        pass


  1. Run the spider:

    scrapy runspider
  2. Push urls to redis:

    redis-cli lpush myspider:start_urls


Files for scrapy-redis, version 0.4: scrapy-redis-0.4.tar.gz (6.8 kB, source distribution).
