A high-level Web Crawling and Web Scraping framework
Overview
Scrapy is a fast, high-level web crawling and web scraping framework used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
For more information, including a list of features, see the Scrapy homepage at https://scrapy.org
Requirements
Python 2.7 or Python 3.4+
Works on Linux, Windows, macOS, and BSD
Install
The quick way:
pip install scrapy
For more details see the install section in the documentation: https://doc.scrapy.org/en/latest/intro/install.html
Documentation
Documentation is available online at https://doc.scrapy.org/ and in the docs directory.
Releases
You can find release notes at https://doc.scrapy.org/en/latest/news.html
Community (blog, Twitter, mailing list, IRC)
Contributing
See https://doc.scrapy.org/en/master/contributing.html
Code of Conduct
Please note that this project is released with a Contributor Code of Conduct (see https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md).
By participating in this project you agree to abide by its terms. Please report unacceptable behavior to opensource@scrapinghub.com.
Companies using Scrapy
Commercial Support
Project details
Release history
Download files
Built Distribution
Hashes for Scrapy-1.5.0-py2.py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 08d86737c560dcc1c4b73ac0ac5bd8d14b3e2265c1f7b195f0b73ab13741fe03
MD5 | 33e499743889907a131664e7158612a6
BLAKE2b-256 | db9ccb15b2dc6003a805afd21b9b396e0e965800765b51da72fe17cf340b9be2