A high-level Web Crawling and Web Scraping framework based on Asyncio
aio-scrapy
An asyncio + aio-libs crawler framework modeled on Scrapy
English | 中文
Overview
- The aio-scrapy framework is based on the open-source projects Scrapy and scrapy_redis.
- aio-scrapy is compatible with scrapyd.
- aio-scrapy provides Redis and RabbitMQ request queues.
- aio-scrapy is a fast, high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages.
- Supports distributed crawling/scraping.
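The Redis and RabbitMQ queues share one core idea: a shared request queue plus a duplicate filter, so multiple workers can pull from the same backlog without fetching a URL twice. Below is a toy, in-memory sketch of that idea (illustrative only, not aio-scrapy's actual API; in a distributed run the queue and the "seen" set would live in Redis or RabbitMQ rather than process memory):

```python
import asyncio


async def crawl(start_urls, concurrency=2):
    # Toy scheduler: a shared request queue plus a "seen" set for
    # request de-duplication, drained by several concurrent workers.
    queue: asyncio.Queue = asyncio.Queue()
    seen = set()
    crawled = []

    for url in start_urls:
        if url not in seen:          # duplicate requests are dropped
            seen.add(url)
            queue.put_nowait(url)

    async def worker():
        while True:
            try:
                url = queue.get_nowait()
            except asyncio.QueueEmpty:
                return
            crawled.append(url)      # stand-in for "download and parse"
            await asyncio.sleep(0)   # yield control, as a real download would

    await asyncio.gather(*(worker() for _ in range(concurrency)))
    return crawled
```

Each unique URL is processed exactly once, no matter how many workers drain the queue.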
Requirements
- Python 3.9+
- Works on Linux, Windows, macOS, BSD
Install
The quick way:
```shell
# Install the latest aio-scrapy from GitHub
pip install git+https://github.com/conlin-huang/aio-scrapy

# Default install from PyPI
pip install aio-scrapy

# Install all optional dependencies
pip install aio-scrapy[all]

# Install only the extras you need (MySQL/httpx/RabbitMQ/MongoDB)
pip install aio-scrapy[aiomysql,httpx,aio-pika,mongo]
```
Usage
Create a project spider:

```shell
aioscrapy startproject project_quotes
cd project_quotes
aioscrapy genspider quotes
```
quotes.py:

```python
from aioscrapy.spiders import Spider


class QuotesMemorySpider(Spider):
    name = 'quotes'

    start_urls = ['https://quotes.toscrape.com']

    async def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'author': quote.xpath('span/small/text()').get(),
                'text': quote.css('span.text::text').get(),
            }

        next_page = response.css('li.next a::attr("href")').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)


if __name__ == '__main__':
    QuotesMemorySpider.start()
```
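Note that `response.follow(next_page, self.parse)` accepts the relative href extracted from the page; Scrapy-style frameworks resolve it against the response URL before scheduling the request, the same way `urllib.parse.urljoin` does:

```python
from urllib.parse import urljoin

# On quotes.toscrape.com, li.next a::attr("href") yields a relative path
# such as "/page/2/"; it is resolved against the current page URL.
page_url = "https://quotes.toscrape.com/page/1/"
next_href = "/page/2/"
print(urljoin(page_url, next_href))  # → https://quotes.toscrape.com/page/2/
```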
Run the spider:

```shell
aioscrapy crawl quotes
```
Create a single-script spider:

```shell
aioscrapy genspider single_quotes -t single
```
single_quotes.py:

```python
from aioscrapy.spiders import Spider


class QuotesMemorySpider(Spider):
    name = 'QuotesMemorySpider'

    custom_settings = {
        "USER_AGENT": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
        'CLOSE_SPIDER_ON_IDLE': True,
        # 'DOWNLOAD_DELAY': 3,
        # 'RANDOMIZE_DOWNLOAD_DELAY': True,
        # 'CONCURRENT_REQUESTS': 1,
        # 'LOG_LEVEL': 'INFO'
    }

    start_urls = ['https://quotes.toscrape.com']

    @staticmethod
    async def process_request(request, spider):
        """Request middleware."""
        pass

    @staticmethod
    async def process_response(request, response, spider):
        """Response middleware."""
        return response

    @staticmethod
    async def process_exception(request, exception, spider):
        """Exception middleware."""
        pass

    async def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'author': quote.xpath('span/small/text()').get(),
                'text': quote.css('span.text::text').get(),
            }

        next_page = response.css('li.next a::attr("href")').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)

    async def process_item(self, item):
        print(item)


if __name__ == '__main__':
    QuotesMemorySpider.start()
```
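The `process_item` hook above just prints each scraped item; a common next step is persisting items in the JSON Lines format (one JSON object per line). A minimal stand-alone sketch in plain Python, not aio-scrapy's pipeline API:

```python
import json


def items_to_jsonl(items):
    # Serialize each scraped item as one JSON line -- the kind of output a
    # process_item hook or pipeline might append to a .jsonl file.
    return "".join(json.dumps(item, ensure_ascii=False) + "\n" for item in items)


# Example with items shaped like the parse() output above:
lines = items_to_jsonl([
    {"author": "Albert Einstein", "text": "..."},
])
print(lines, end="")
```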
Run the spider:

```shell
aioscrapy runspider single_quotes.py
```
More commands:

```shell
aioscrapy -h
```
Documentation
Documentation is in preparation. Please submit your suggestions to the owner via a GitHub issue. Thanks!
Project details

Download files

Source Distribution: aio-scrapy-2.0.3.tar.gz (94.8 kB)
Built Distribution: aio_scrapy-2.0.3-py3-none-any.whl (137.2 kB)

File details: aio-scrapy-2.0.3.tar.gz

- Size: 94.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.3

Algorithm | Hash digest
---|---
SHA256 | 369011d3f00c99414c2edadc7cfbee6b95fe1b83f3e086debdc83ed0fc94d8d4
MD5 | abef28e986752744949e7336334921da
BLAKE2b-256 | d2f4d980f7e8c712127bc3a354d3618ebbdf72ca2dfcf46c4c12ef1b0fdbe8a9

File details: aio_scrapy-2.0.3-py3-none-any.whl

- Size: 137.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.3

Algorithm | Hash digest
---|---
SHA256 | 34694dcab24846ed7273f30787621765dfb4cf969759c833aafc81f05c99bb4e
MD5 | 6b7b9ae00666c7b4dcb1c2d5b96451d1
BLAKE2b-256 | ce4591d96600085e4ed103ce7d1dd6f193efae02b5ef5d0cb4b1c32e07a6d711