Extract data from common crawl using elastic map reduce
The easiest way to get started is to use pip, which installs the latest stable version hosted on PyPI.

$ pip install CommonCrawlJob

Alternatively, you can install the bleeding-edge version of the code directly from GitHub. In that case you can still use pip by pointing it at the repository and specifying the protocol.

$ pip install -e git+https://github.com/qadium-memex/CommonCrawlJob.git#egg=ccjob
Unfortunately, this code is not yet compatible with Python 3; CPython 2.7 and PyPy are the only implementations currently tested against. The library for encoding WARC (Web ARChive) file formats will need to undergo a rewrite before it is possible to have deterministic IO behavior.
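For context, a WARC file is a sequence of records, each consisting of a plain-text header block (a version line plus key-value fields) separated from its payload by a blank line. The following is a minimal, illustrative sketch of parsing a single record's header with only the standard library; a real job should rely on a dedicated WARC parsing library rather than this simplified routine.

```python
def parse_warc_record(record_bytes):
    """Split one WARC record into (version, headers, payload).

    Illustrative only: assumes a single well-formed record with
    CRLF line endings and no continuation lines.
    """
    # The header block ends at the first blank line (CRLF CRLF).
    head, _, payload = record_bytes.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    version = lines[0].decode("ascii")  # e.g. "WARC/1.0"
    headers = {}
    for line in lines[1:]:
        key, _, value = line.partition(b":")
        headers[key.strip().decode("ascii")] = value.strip().decode("ascii")
    return version, headers, payload

record = (b"WARC/1.0\r\n"
          b"WARC-Type: response\r\n"
          b"Content-Length: 5\r\n"
          b"\r\n"
          b"hello")
version, headers, payload = parse_warc_record(record)
```

The deterministic-IO difficulty mentioned above comes from details this sketch glosses over: real records declare their payload size in Content-Length and are terminated by trailing CRLF pairs, so a robust reader must consume exactly the declared number of bytes.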
Download the file for your platform.
| Filename | Size | File type | Python version |
|---|---|---|---|
| CommonCrawlJob-0.1.0-py2-none-any.whl | 11.5 kB | Wheel | 2.7 |
| CommonCrawlJob-0.1.0.tar.gz | 312.4 kB | Source | None |