Scrapy-Distributed

Scrapy-Distributed is a series of components that help you develop a distributed crawler based on Scrapy in an easy way.

Scrapy-Distributed currently supports a RabbitMQ Scheduler, a Kafka Scheduler, and a RedisBloom DupeFilter. You can use any of them in your Scrapy project very easily.
Features
- RabbitMQ Scheduler
  - Supports custom declaration of a RabbitMQ queue, including passive, durable, exclusive, auto_delete, and all other options.
- Kafka Scheduler
  - Supports custom declaration of a Kafka topic, including num_partitions and replication_factor; other options will be supported later.
- RedisBloom DupeFilter
  - Supports customizing the key, errorRate, capacity, expansion, and auto-scaling (noScale) of a Bloom filter.
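The Bloom-filter dedup idea behind the DupeFilter can be illustrated with a small, self-contained sketch: hash each request URL into a fixed set of bit positions and treat an already-fully-set pattern as "seen". This is only a toy in-memory illustration (the class `ToyBloomFilter` and its parameters are made up here); the real component delegates this work to RedisBloom on the Redis server, so all crawler processes share one filter.

```python
import hashlib

class ToyBloomFilter:
    """Toy in-memory Bloom filter illustrating the dedup idea.

    The real RedisBloom DupeFilter keeps the bit array in Redis,
    so every crawler process shares a single filter.
    """

    def __init__(self, size_bits=1 << 20, num_hashes=7):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, url):
        # Derive k bit positions from salted SHA-1 digests of the URL.
        for salt in range(self.num_hashes):
            digest = hashlib.sha1(f"{salt}:{url}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def seen(self, url):
        """Return True if url was (probably) added before, then add it."""
        hit = True
        for pos in self._positions(url):
            byte, bit = divmod(pos, 8)
            if not (self.bits[byte] >> bit) & 1:
                hit = False
                self.bits[byte] |= 1 << bit
        return hit

bf = ToyBloomFilter()
print(bf.seen("https://example.com/page/1"))  # False: first visit
print(bf.seen("https://example.com/page/1"))  # True: duplicate
```

A Bloom filter can report false positives (skipping a never-seen URL) but never false negatives, which is why errorRate and capacity are tunable in the settings below.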
Requirements
- Python >= 3.6
- Scrapy >= 1.8.0
- Pika >= 1.0.0
- RedisBloom >= 0.2.0
- Redis >= 3.0.1
- kafka-python >= 1.4.7
Usage
There is a simple demo in examples/simple_example. Here is the fastest way to get started with Scrapy-Distributed.

Step 0:

pip install scrapy-distributed

OR

git clone https://github.com/Insutanto/scrapy-distributed.git && cd scrapy-distributed && python setup.py install
RabbitMQ Support
If you don't already have RabbitMQ and RedisBloom running locally, you can start them with Docker:
# pull and run a RabbitMQ container.
docker run -d --name rabbitmq -p 0.0.0.0:15672:15672 -p 0.0.0.0:5672:5672 rabbitmq:3
# pull and run a RedisBloom container.
docker run -d --name redis-redisbloom -p 6379:6379 redislabs/rebloom:latest
Step 1:
Just by changing SCHEDULER and DUPEFILTER_CLASS and adding a few settings, you can get a distributed crawler in a moment.
SCHEDULER = "scrapy_distributed.schedulers.DistributedScheduler"
SCHEDULER_QUEUE_CLASS = "scrapy_distributed.queues.amqp.RabbitQueue"
RABBITMQ_CONNECTION_PARAMETERS = "amqp://guest:guest@localhost:5672/example/?heartbeat=0"
DUPEFILTER_CLASS = "scrapy_distributed.dupefilters.redis_bloom.RedisBloomDupeFilter"
BLOOM_DUPEFILTER_REDIS_URL = "redis://:@localhost:6379/0"
BLOOM_DUPEFILTER_REDIS_HOST = "localhost"
BLOOM_DUPEFILTER_REDIS_PORT = 6379
REDIS_BLOOM_PARAMS = {
"redis_cls": "redisbloom.client.Client"
}
BLOOM_DUPEFILTER_ERROR_RATE = 0.001
BLOOM_DUPEFILTER_CAPACITY = 1_000_000
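The errorRate/capacity pair determines roughly how much memory the Bloom filter needs. As a back-of-the-envelope check (using the standard Bloom-filter sizing formulas; RedisBloom's exact internal layout is a server-side detail):

```python
import math

def bloom_sizing(capacity, error_rate):
    """Standard Bloom-filter sizing formulas:
    bits   m = -n * ln(p) / (ln 2)^2
    hashes k = (m / n) * ln 2
    """
    m = -capacity * math.log(error_rate) / (math.log(2) ** 2)
    k = (m / capacity) * math.log(2)
    return math.ceil(m), math.ceil(k)

# Values matching the settings above: capacity 1,000,000, error rate 0.001.
bits, hashes = bloom_sizing(1_000_000, 0.001)
print(f"{bits / 8 / 1024 / 1024:.1f} MiB, {hashes} hash functions")  # 1.7 MiB, 10 hash functions
```

So with these settings the filter fingerprints a million URLs in under 2 MiB of Redis memory, at a 0.1% chance of wrongly skipping a fresh URL.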
Step 2:
scrapy crawl <your_spider>
Kafka Support
Step 1:
SCHEDULER = "scrapy_distributed.schedulers.DistributedScheduler"
SCHEDULER_QUEUE_CLASS = "scrapy_distributed.queues.kafka.KafkaQueue"
KAFKA_CONNECTION_PARAMETERS = "localhost:9092"
DUPEFILTER_CLASS = "scrapy_distributed.dupefilters.redis_bloom.RedisBloomDupeFilter"
BLOOM_DUPEFILTER_REDIS_URL = "redis://:@localhost:6379/0"
BLOOM_DUPEFILTER_REDIS_HOST = "localhost"
BLOOM_DUPEFILTER_REDIS_PORT = 6379
REDIS_BLOOM_PARAMS = {
"redis_cls": "redisbloom.client.Client"
}
BLOOM_DUPEFILTER_ERROR_RATE = 0.001
BLOOM_DUPEFILTER_CAPACITY = 1_000_000
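When the topic is declared with num_partitions greater than 1, queued requests are spread across partitions and each crawler process consumes its own share. The routing idea can be sketched as follows; this is a simplified stand-in for Kafka's actual default partitioner (which hashes the message key with murmur2), and NUM_PARTITIONS here is just an assumed topic configuration:

```python
import hashlib

NUM_PARTITIONS = 3  # assumed topic configuration, for illustration only

def pick_partition(key: bytes, num_partitions: int) -> int:
    """Map a message key onto a partition index.

    Kafka's default partitioner uses murmur2; any stable hash
    demonstrates the same property: same key -> same partition.
    """
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Requests with the same key always land on the same partition,
# so consumers in one group see disjoint slices of the work.
p1 = pick_partition(b"https://example.com/page/1", NUM_PARTITIONS)
p2 = pick_partition(b"https://example.com/page/1", NUM_PARTITIONS)
assert p1 == p2 and 0 <= p1 < NUM_PARTITIONS
```

This is why adding more crawler processes scales throughput: each one joins the consumer group and Kafka rebalances partitions among them.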
Step 2:
scrapy crawl <your_spider>
TODO
- RabbitMQ Item Pipeline
- Support Delayed Message in RabbitMQ Scheduler
- Support Scheduler Serializer
- Custom Interface for DupeFilter
- RocketMQ Scheduler
- RocketMQ Item Pipeline
- Kafka Item Pipeline
Reference Projects

- scrapy-rabbitmq-link
- scrapy-redis