Tool for General Purpose Web Scraping and Crawling
“scrawler” = “scraper” + “crawler”
scrawler provides functionality for collecting website data automatically (web scraping) and for following links to map an entire domain (crawling). It can handle these tasks individually, or process several websites/domains in parallel using asyncio and multithreading.
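To illustrate the idea (not scrawler's actual API — see the Getting Started Guide for that), the sketch below implements a minimal same-domain crawler using only the standard library: each frontier of links is "fetched" concurrently with asyncio.gather, and an HTMLParser subclass extracts the links to follow. The `site` dict stands in for real HTTP responses; all names here are hypothetical.

```python
import asyncio
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkParser(HTMLParser):
    """Collects the href value of every <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

async def fetch(url, site):
    # Stand-in for a real async HTTP request (e.g. via aiohttp).
    await asyncio.sleep(0)
    return site.get(url, "")

async def crawl_domain(start_url, site):
    """Breadth-first crawl restricted to start_url's domain.

    Each frontier of URLs is fetched concurrently with asyncio.gather,
    which is the core of what 'processing in parallel' means here.
    """
    domain = urlparse(start_url).netloc
    seen, frontier = {start_url}, [start_url]
    while frontier:
        pages = await asyncio.gather(*(fetch(u, site) for u in frontier))
        next_frontier = []
        for url, html in zip(frontier, pages):
            parser = LinkParser()
            parser.feed(html)
            for href in parser.links:
                link = urljoin(url, href)  # resolve relative links
                if urlparse(link).netloc == domain and link not in seen:
                    seen.add(link)
                    next_frontier.append(link)
        frontier = next_frontier
    return seen

# Tiny in-memory "website": two pages plus one off-domain link.
site = {
    "https://example.com/": '<a href="/a">A</a> <a href="https://other.org/">x</a>',
    "https://example.com/a": '<a href="/">home</a>',
}
print(sorted(asyncio.run(crawl_domain("https://example.com/", site))))
```

The off-domain link to other.org is filtered out by the netloc check, so the crawl maps exactly the two pages of the example domain.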
This project was initially developed while working at the Fraunhofer Institute for Systems and Innovation Research. Many thanks for the opportunity and support!
Installation
You can install scrawler from PyPI:
pip install scrawler
Getting Started
Check out the Getting Started Guide.
Documentation
Documentation is available at Read the Docs.