
Extract data from common crawl using elastic map reduce

Project description

License: Apache 2.0 | Build status: https://travis-ci.org/qadium-memex/CommonCrawlJob | Latest release: https://badge.fury.io/py/CommonCrawlJob

This work is supported by Qadium Inc as a part of the DARPA Memex Program.

Installation

The easiest way to get started is to use pip, which installs the latest stable version hosted on PyPI.

$ pip install CommonCrawlJob

Alternatively, you can get the bleeding-edge version by installing directly from GitHub. You can still use pip for this by pointing it at the repository and specifying the protocol.

$ pip install -e git+https://github.com/qadium-memex/CommonCrawlJob.git#egg=ccjob

Compatibility

Unfortunately, this code is not yet compatible with Python 3; Python 2.7 and PyPy 2.7 are the only implementations currently tested against. The library used for encoding WARC (Web ARChive) files will need to undergo a rewrite before deterministic IO behavior is possible.
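For context, WARC is the record-oriented format Common Crawl publishes its data in: each record begins with a version line and a block of CRLF-terminated headers. The sketch below is a minimal, stdlib-only illustration of parsing one such header block; it is not this library's API, and the sample record is a made-up example, not real crawl data.

```python
# Hypothetical WARC record header, for illustration only.
raw = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: http://example.com/\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

def parse_warc_header(text):
    """Split a WARC header block into its version line and a header dict."""
    lines = text.split("\r\n")
    version = lines[0]  # e.g. "WARC/1.0"
    headers = {}
    for line in lines[1:]:
        if not line:
            break  # a blank line terminates the header block
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return version, headers

version, headers = parse_warc_header(raw)
```

After the blank line, `Content-Length` tells a reader how many bytes of record payload follow, which is what makes seeking through a multi-gigabyte crawl segment tractable.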

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Filename (size)                                    File type  Python version  Upload date
CommonCrawlJob-0.1.0-py2-none-any.whl (11.5 kB)    Wheel      2.7             Aug 24, 2016
CommonCrawlJob-0.1.0.tar.gz (312.4 kB)             Source     None            Aug 24, 2016
