A high-level Web Crawling and Web Scraping framework
Overview
Scrapy is a fast, high-level web crawling and web scraping framework used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
For more information, including a list of features, check the Scrapy homepage at: http://scrapy.org
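As an illustrative sketch of what "extract structured data" means in practice, a minimal spider might look like the following; the target site and CSS selectors are assumptions made for this example, not part of Scrapy itself:

import scrapy

class QuotesSpider(scrapy.Spider):
    # Example spider: crawl a page and yield one structured item per matched element.
    # The site and selectors below are illustrative assumptions.
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").extract_first(),
                "author": quote.css("small.author::text").extract_first(),
            }

Saved as quotes_spider.py, such a spider can be run with "scrapy runspider quotes_spider.py -o quotes.json" to write the extracted items to a JSON file.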
Requirements
Python 2.7 or Python 3.3+
Works on Linux, Windows, Mac OS X, and BSD
Install
The quick way:
pip install scrapy
For more details see the install section in the documentation: http://doc.scrapy.org/en/latest/intro/install.html
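As a quick sanity check after installing, the package version can be read from Python; a minimal sketch (the version string shown depends on the installed release):

import scrapy

# Print the installed Scrapy version, e.g. "1.1.4" for this release.
print(scrapy.__version__)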
Releases
You can download the latest stable and development releases from: http://scrapy.org/download/
Documentation
Documentation is available online at http://doc.scrapy.org/ and in the docs directory.
Community (blog, Twitter, mailing list, IRC)
Contributing
Please note that this project is released with a Contributor Code of Conduct (see https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md).
By participating in this project, you agree to abide by its terms. Please report unacceptable behavior to opensource@scrapinghub.com.
Companies using Scrapy
Commercial Support
Download files
Built Distribution
Hashes for Scrapy-1.1.4-py2.py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 7762154b5e19daa23bd9c29d4663b101d3c96c0438f875f0d71071750f1b390a
MD5 | b0f30270fe6da37ae8d6ae29fd2ec9db
BLAKE2b-256 | 80e59552bbdc47a92638ed5b2b040bb529b23d29fd8a5fc030361e36e1762d82
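For anyone verifying a downloaded wheel against the digests above, a short sketch using only the standard library follows; the local file path is an assumption and should point at wherever the wheel was saved:

import hashlib

# Published SHA256 digest for Scrapy-1.1.4-py2.py3-none-any.whl (from the table above).
expected = "7762154b5e19daa23bd9c29d4663b101d3c96c0438f875f0d71071750f1b390a"

# Path is an assumption; adjust to the local download location.
with open("Scrapy-1.1.4-py2.py3-none-any.whl", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "Hash mismatch")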