
# Scrapy MongoDB Queue

MongoDB-based components for Scrapy that allow distributed crawling.

# Available Scrapy components

* Scheduler
* Duplication Filter
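As a rough illustration of what a duplication filter does, the sketch below fingerprints each request (here simplified to a hash of method and URL) and checks it against a set of already-seen fingerprints. This is a minimal stand-in, not this package's implementation: in scrapy-mongodb-queue the seen-set would live in MongoDB so that multiple crawler processes can share it, and the class and method names here are hypothetical.

```python
import hashlib


def request_fingerprint(method: str, url: str) -> str:
    """Build a stable fingerprint for a request (simplified sketch)."""
    h = hashlib.sha1()
    h.update(method.encode("utf-8"))
    h.update(url.encode("utf-8"))
    return h.hexdigest()


class DupeFilterSketch:
    """In-memory stand-in for a MongoDB-backed duplication filter."""

    def __init__(self):
        self.seen = set()  # stand-in for a shared MongoDB collection

    def request_seen(self, method: str, url: str) -> bool:
        """Return True if this request was already scheduled, else record it."""
        fp = request_fingerprint(method, url)
        if fp in self.seen:
            return True
        self.seen.add(fp)
        return False
```

Because every worker consults the same shared store, a URL scheduled by one process is skipped by all the others, which is what makes distributed crawling possible without duplicate fetches.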

# Installation

From PyPI:

```
$ pip install scrapy-mongodb-queue
```

From GitHub:

```
$ git clone https://github.com/jbinfo/scrapy-mongodb-queue.git
$ cd scrapy-mongodb-queue
$ python setup.py install
```

# Usage

Enable the components in your `settings.py`:

```python
# Enable the MongoDB-backed scheduler, which stores the request queue in MongoDB.
SCHEDULER = "scrapy_mongodb_queue.scheduler.Scheduler"

# Don't clean up the MongoDB queue on close; this allows crawls to be paused and resumed.
MONGODB_QUEUE_PERSIST = True

# Host, port, and database to use when connecting to MongoDB (optional).
MONGODB_SERVER = 'localhost'
MONGODB_PORT = 27017
MONGODB_DB = "my_db"

# MongoDB collection name for the request queue.
MONGODB_QUEUE_NAME = "my_queue"
```
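To make the pause/resume behavior behind `MONGODB_QUEUE_PERSIST` concrete, here is a minimal in-memory sketch (not this package's actual code; the class name and a plain list standing in for the MongoDB collection are assumptions). When `persist` is true, closing the scheduler leaves pending requests in place, so a later crawl picks up where the previous one stopped; when false, the queue is dropped on close.

```python
class MongoQueueSketch:
    """In-memory stand-in for a persistent, MongoDB-backed request queue."""

    def __init__(self, persist: bool = True):
        self.persist = persist
        self.pending = []  # stand-in for the MongoDB queue collection

    def push(self, request: str) -> None:
        """Schedule a request (FIFO for simplicity)."""
        self.pending.append(request)

    def pop(self):
        """Return the next pending request, or None if the queue is empty."""
        return self.pending.pop(0) if self.pending else None

    def close(self) -> None:
        """Called when the crawl stops."""
        if not self.persist:
            self.pending.clear()  # non-persistent: drop all pending requests
```

With a real MongoDB collection behind it, the same idea lets several Scrapy processes share one queue, and a paused crawl resumes simply by starting a new process against the same collection.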

# Author

This project is maintained by Lhassan Baazzi ([GitHub](https://github.com/jbinfo) | [Twitter](https://twitter.com/baazzilhassan) | [LinkedIn](https://ma.linkedin.com/pub/lhassan-baazzi/49/606/a70))

Current release: 0.1.0
