Scrapy for Request Queue
As you know, Scrapy is a very popular Python crawler framework. It is well suited for "focused crawls": start from a few URLs on specific sites, fetch HTML, extract and save structured data, and follow patterned links recursively. But for large-scale, long-running crawling, especially "broad crawls", Scrapy alone is not enough. You have to decouple the whole crawling system into several subsystems: a high-performance, full-featured distributed fetcher, a task scheduler, an HTML extractor, a link database, data storage, a proxy pool, and a lot of auxiliary equipment. The system becomes even more complex when it has to support multi-tenancy.
The os-rq-scrapy and os-rq-pod projects are basic components for building a "broad crawl" system. The core concepts are very simple: os-rq-pod is a multi-site request queue with an HTTP API for receiving requests; os-rq-scrapy is the fetcher, which gets requests from os-rq-pod and crawls multiple sites concurrently. os-rq-hub can also be used to connect multiple pod and scrapy instances so they work simultaneously.
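To make the queue idea concrete, here is a minimal sketch of a producer pushing a crawl request to the pod's HTTP API. The endpoint path and the payload field names are assumptions for illustration, not the documented os-rq-pod schema; check the pod's API docs for the real contract.

```python
import json
import urllib.request

# Hypothetical pod endpoint -- the real path is defined by os-rq-pod.
POD_API = "http://localhost:6789/request/"

def make_request_payload(url, method="GET", meta=None):
    """Build a minimal JSON payload describing one crawl request.

    Field names ("url", "method", "meta") are illustrative only.
    """
    payload = {"url": url, "method": method}
    if meta:
        payload["meta"] = meta
    return json.dumps(payload)

def push_request(url):
    """POST a single request to the pod queue (sketch, not run here)."""
    body = make_request_payload(url).encode("utf-8")
    req = urllib.request.Request(
        POD_API, data=body, headers={"Content-Type": "application/json"}
    )
    return urllib.request.urlopen(req)

print(make_request_payload("http://example.com/", meta={"depth": 0}))
```

The fetcher side (os-rq-scrapy) then pulls such requests off the queue and dispatches them to spiders, so producers and consumers scale independently.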
- Python 3.6+ (pypy3.6+)
- Scrapy 2.0
- ujson, for JSON performance
pip install os-rq-scrapy
The rq-scrapy command enhances the basic scrapy command. When RQ_API is configured, the crawl subcommand runs in rq mode, getting requests from rq.
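As a sketch of how rq mode is switched on, a project could set RQ_API in its Scrapy settings. The address below is an assumed local pod instance, not a documented default:

```python
# settings.py (Scrapy project settings)
# The URL below is a hypothetical local os-rq-pod address.
RQ_API = "http://localhost:6789/api/"

# With RQ_API set, `rq-scrapy crawl <spider>` runs in rq mode and
# pulls requests from the pod instead of only the spider's own URLs.
```

Without RQ_API, rq-scrapy crawl behaves like the plain scrapy crawl command.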