A high-level Web Crawling and Web Scraping framework
Overview
Scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
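As a quick illustration of what a Scrapy spider looks like, here is a minimal sketch; the target site (the quotes.toscrape.com demo), the spider name, and the CSS selectors are assumptions made for this example:

import scrapy

class QuotesSpider(scrapy.Spider):
    # Name used to refer to this spider from the scrapy command-line tool.
    name = "quotes"
    # Starting point of the crawl; quotes.toscrape.com is a public demo site.
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # Turn each quote block on the page into a structured item.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").extract_first(),
                "author": quote.css("small.author::text").extract_first(),
            }
        # Follow the pagination link, if present, to crawl the next page.
        next_page = response.css("li.next a::attr(href)").extract_first()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)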
For more information, including a list of features, check the Scrapy homepage at: https://scrapy.org
Requirements
Python 2.7 or Python 3.4+
Works on Linux, Windows, macOS, and BSD
Install
The quick way:
pip install scrapy
For more details see the install section in the documentation: https://docs.scrapy.org/en/latest/intro/install.html
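Once installed, a standalone spider such as the sketch above can be run without creating a full project; the file and output names below are only examples:

scrapy runspider quotes_spider.py -o quotes.json

The -o option writes the scraped items to a feed file, with the output format inferred from the file extension.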
Documentation
Documentation is available online at https://docs.scrapy.org/ and in the docs directory.
Releases
You can find release notes at https://docs.scrapy.org/en/latest/news.html
Community (blog, Twitter, mailing list, IRC)
Contributing
See https://docs.scrapy.org/en/master/contributing.html
Code of Conduct
Please note that this project is released with a Contributor Code of Conduct (see https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md).
By participating in this project, you agree to abide by its terms. Please report unacceptable behavior to opensource@scrapinghub.com.
Companies using Scrapy
Commercial Support
Hashes for Scrapy-1.7.2-py2.py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 92057487d67103d0e83e44fac253cec407697d9ab2e343fa2f3287b31808f405
MD5 | 06a86a1877d024dd47c1a97943504555
BLAKE2b-256 | a3b1d1ab5b3f84640097cf5ff642e2e357546781746d4fec2ebb40432904c57d
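To confirm that a downloaded wheel matches the SHA256 digest listed above, the file can be hashed locally with the standard library. A minimal sketch, assuming the wheel sits in the current directory under its published filename:

import hashlib

EXPECTED_SHA256 = "92057487d67103d0e83e44fac253cec407697d9ab2e343fa2f3287b31808f405"

def sha256_of(path, chunk_size=8192):
    # Hash the file in chunks so large downloads need not fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of("Scrapy-1.7.2-py2.py3-none-any.whl")
    print("OK" if actual == EXPECTED_SHA256 else "hash mismatch")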