A high-level Web Crawling and Web Scraping framework
Scrapy is a fast, high-level web crawling and web scraping framework used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
For more information, including a list of features, check the Scrapy homepage at: http://scrapy.org
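As a taste of the crawl-and-extract workflow Scrapy automates, here is a minimal sketch using only the Python standard library's html.parser to pull structured data (link text and URLs) out of an HTML page. This is an illustration, not Scrapy's own API: Scrapy's spiders and selectors perform this kind of extraction declaratively and at scale, with scheduling, retries, and export built in. The sample HTML below is made up for the example.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects (text, href) pairs from anchor tags -- the kind of
    structured extraction Scrapy's selectors perform at scale."""

    def __init__(self):
        super().__init__()
        self.links = []    # extracted (text, url) tuples
        self._href = None  # href of the <a> tag we are inside, if any
        self._text = []    # text fragments seen inside the current <a>

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# Hypothetical page content for demonstration.
page = '<html><body><a href="/download">Get Scrapy</a><a href="/docs">Docs</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # [('Get Scrapy', '/download'), ('Docs', '/docs')]
```

In a real crawl you would fetch pages over HTTP and follow the extracted links recursively; Scrapy handles that loop for you, so you only write the extraction logic.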
- Python 2.7 or Python 3.3+
- Works on Linux, Windows, macOS, and BSD
The quick way:
pip install scrapy
For more details see the install section in the documentation: http://doc.scrapy.org/en/latest/intro/install.html
You can download the latest stable and development releases from: http://scrapy.org/download/
Documentation is available online at http://doc.scrapy.org/ and in the docs directory.
Community (blog, Twitter, mailing list, IRC)
Please note that this project is released with a Contributor Code of Conduct (see https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md).
By participating in this project you agree to abide by its terms. Please report unacceptable behavior to firstname.lastname@example.org.
Companies using Scrapy
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
| Filename | Size | File type | Python version |
| --- | --- | --- | --- |
| cyberplant_Scrapy-1.2.0.dev2-py2.py3-none-any.whl | 294.6 kB | Wheel | py2.py3 |
| cyberplant-Scrapy-1.2.0.dev2.tar.gz | 821.4 kB | Source | None |