

# Scrapy MongoDB Queue

MongoDB-based components for Scrapy that allow distributed crawling.

# Available Scrapy components

* Scheduler
* Duplication Filter

Installation

From PyPI:

$ pip install scrapy-mongodb-queue

From GitHub:

$ git clone https://github.com/jbinfo/scrapy-mongodb-queue.git
$ cd scrapy-mongodb-queue
$ python setup.py install

Usage

Enable the components in your settings.py:

# Enable the scheduler that stores the requests queue in MongoDB.
SCHEDULER = "scrapy_mongodb_queue.scheduler.Scheduler"

# Don't clean up MongoDB queues; this allows pausing/resuming crawls.
MONGODB_QUEUE_PERSIST = True

# Specify the host, port, and database to use when connecting to MongoDB (optional).
MONGODB_SERVER = 'localhost'
MONGODB_PORT = 27017
MONGODB_DB = "my_db"

# MongoDB collection name
MONGODB_QUEUE_NAME = "my_queue"
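The same settings can also be expressed as a plain Python dict, e.g. for passing to Scrapy's `CrawlerProcess(settings=...)` from a standalone script rather than a project-wide `settings.py`. This is only a sketch: the setting names mirror the snippet above, and the dict name is illustrative.

```python
# Equivalent configuration as a dict, usable with
# scrapy.crawler.CrawlerProcess(settings=MONGODB_QUEUE_SETTINGS).
MONGODB_QUEUE_SETTINGS = {
    # Route request scheduling through the MongoDB-backed scheduler.
    "SCHEDULER": "scrapy_mongodb_queue.scheduler.Scheduler",
    # Keep the queue between runs so crawls can be paused/resumed.
    "MONGODB_QUEUE_PERSIST": True,
    # MongoDB connection details (optional).
    "MONGODB_SERVER": "localhost",
    "MONGODB_PORT": 27017,
    "MONGODB_DB": "my_db",
    # Collection that holds the request queue.
    "MONGODB_QUEUE_NAME": "my_queue",
}
```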

Author

This project is maintained by Lhassan Baazzi ([GitHub](https://github.com/jbinfo) | [Twitter](https://twitter.com/baazzilhassan) | [LinkedIn](https://ma.linkedin.com/pub/lhassan-baazzi/49/606/a70))

Release History

0.1.0