Tool for General Purpose Web Scraping and Crawling
“scrawler” = “scraper” + “crawler”
Provides functionality for automatically collecting data from websites (web scraping) and for following links to map an entire domain (crawling). These tasks can be run individually, or several websites/domains can be processed in parallel using asyncio and multithreading.
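To illustrate the asyncio-plus-multithreading pattern described above, here is a minimal standard-library sketch. Note that it deliberately does not use scrawler's own API (see the documentation linked below for that); it only shows the underlying idea of fetching several sites concurrently:

```python
# Illustrative only: uses asyncio + urllib from the standard library,
# not scrawler's API, to show concurrent fetching of multiple sites.
import asyncio
import urllib.request


def fetch(url: str) -> str:
    """Download a page synchronously and return its HTML."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")


async def fetch_all(urls: list[str]) -> list[str]:
    """Run the blocking downloads concurrently in worker threads."""
    return await asyncio.gather(*(asyncio.to_thread(fetch, u) for u in urls))


if __name__ == "__main__":
    pages = asyncio.run(fetch_all(["https://example.com", "https://example.org"]))
    print([len(page) for page in pages])
```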
This project was initially developed while working at the Fraunhofer Institute for Systems and Innovation Research. Many thanks for the opportunity and support!
You can install scrawler from PyPI:
pip install scrawler
Check out the Getting Started Guide.
Documentation is available at Read the Docs.