# crawlio

Simple website crawler built with Python's asyncio.

## Features
- Asynchronous "deep" crawling using `asyncio`, `aiohttp` and `Parsel` (by Scrapy authors)
- Zero-configuration
- Customizable XPath selectors
## Setup

```shell
pip install crawlio
```
## Usage

### Synchronous
```python
import asyncio

from crawlio import Crawler

fields = {
    'title': '/html/head/title/text()',
    # ...
}

crawler = Crawler('https://quotes.toscrape.com/', selectors=fields)
results = asyncio.run(crawler.run(), debug=True)

for item in results:
    print(item)
```
### Asynchronous

```python
from crawlio import Crawler

async def some_coroutine():
    fields = {
        'title': '/html/head/title/text()',
        # ...
    }
    crawler = Crawler('https://quotes.toscrape.com/', selectors=fields)
    results = await crawler.run()
    return results
```
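To illustrate the general idea behind this style of crawler (this is a stdlib-only sketch, not crawlio's actual internals: the page contents, `fetch`, and `TitleParser` below are hypothetical stand-ins), several pages can be fetched concurrently with `asyncio.gather` while a parser extracts each `<title>`:

```python
import asyncio
from html.parser import HTMLParser

# Hypothetical in-memory "site" standing in for real HTTP responses.
PAGES = {
    "/": "<html><head><title>Home</title></head></html>",
    "/about": "<html><head><title>About</title></head></html>",
}

class TitleParser(HTMLParser):
    """Collects the text content of the <title> element."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = None

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = data

async def fetch(url):
    await asyncio.sleep(0)  # stand-in for network I/O (e.g. an aiohttp request)
    return PAGES[url]

async def crawl(url):
    html = await fetch(url)
    parser = TitleParser()
    parser.feed(html)
    return {"url": url, "title": parser.title}

async def main():
    # All pages are scheduled at once and awaited together,
    # rather than fetched one after another.
    return await asyncio.gather(*(crawl(u) for u in PAGES))

results = asyncio.run(main())
```

A real crawler would additionally discover new links in each fetched page and feed them back into the queue, which is what "deep" crawling refers to.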
## Contribute

...

## License

...