Python MapReduce framework
mrjob is a Python 2.7/3.5+ package that helps you write and run Hadoop Streaming jobs.
Stable version (v0.6.9) documentation
Development version documentation
mrjob fully supports Amazon’s Elastic MapReduce (EMR) service, which allows you to buy time on a Hadoop cluster on an hourly basis. mrjob has basic support for Google Cloud Dataproc (Dataproc), which allows you to buy time on a Hadoop cluster on a minute-by-minute basis. It also works with your own Hadoop cluster.
Some important features:
Run jobs on EMR, Google Cloud Dataproc, your own Hadoop cluster, or locally (for testing).
Write multi-step jobs (one map-reduce step feeds into the next); see the sketch after this list
Easily launch Spark jobs on EMR or your own Hadoop cluster
Duplicate your production environment inside Hadoop
Upload your source tree and put it in your job’s $PYTHONPATH
Run make and other setup scripts
Set environment variables (e.g. $TZ)
Easily install Python packages from tarballs (EMR only)
Setup handled transparently by mrjob.conf config file
Automatically interpret error logs
SSH tunnel to the Hadoop job tracker (EMR only)
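To give a feel for step chaining, here is a sketch of a two-step job in the spirit of mrjob's documented most-used-word example. MRStep comes from mrjob.step; the class and method names here are illustrative:

"""Sketch: find the most frequently used word, in two map-reduce steps."""
import re

from mrjob.job import MRJob
from mrjob.step import MRStep

WORD_RE = re.compile(r"[\w']+")


class MRMostUsedWord(MRJob):

    def steps(self):
        # step one counts words; step two finds the most common one
        return [
            MRStep(mapper=self.mapper_get_words,
                   combiner=self.combiner_count_words,
                   reducer=self.reducer_count_words),
            MRStep(reducer=self.reducer_find_max_word),
        ]

    def mapper_get_words(self, _, line):
        for word in WORD_RE.findall(line):
            yield (word.lower(), 1)

    def combiner_count_words(self, word, counts):
        yield (word, sum(counts))

    def reducer_count_words(self, word, counts):
        # send every (count, word) pair to the same key so the
        # next step's reducer sees all the totals together
        yield None, (sum(counts), word)

    def reducer_find_max_word(self, _, word_count_pairs):
        # each pair is (count, word); max() picks the most-used word
        yield max(word_count_pairs)


if __name__ == '__main__':
    MRMostUsedWord.run()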
Minimal setup
To run on EMR, set $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY
To run on Dataproc, set $GOOGLE_APPLICATION_CREDENTIALS
No setup needed to use mrjob on your own Hadoop cluster
Installation
From PyPI:
pip install mrjob
From source:
python setup.py install
A Simple Map Reduce Job
Code for this example and more live in mrjob/examples.
"""The classic MapReduce job: count the frequency of words. """ from mrjob.job import MRJob import re WORD_RE = re.compile(r"[\w']+") class MRWordFreqCount(MRJob): def mapper(self, _, line): for word in WORD_RE.findall(line): yield (word.lower(), 1) def combiner(self, word, counts): yield (word, sum(counts)) def reducer(self, word, counts): yield (word, sum(counts)) if __name__ == '__main__': MRWordFreqCount.run()
Try It Out!
# locally
python mrjob/examples/mr_word_freq_count.py README.rst > counts
# on EMR
python mrjob/examples/mr_word_freq_count.py README.rst -r emr > counts
# on Dataproc
python mrjob/examples/mr_word_freq_count.py README.rst -r dataproc > counts
# on your Hadoop cluster
python mrjob/examples/mr_word_freq_count.py README.rst -r hadoop > counts
Setting up EMR on Amazon
Create an Amazon Web Services account
Get your access and secret keys (click “Security Credentials” on your account page)
Set the environment variables $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY accordingly
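For example, in your shell profile (both values below are placeholders for your own keys):

export AWS_ACCESS_KEY_ID='your-access-key-id'          # placeholder
export AWS_SECRET_ACCESS_KEY='your-secret-access-key'  # placeholder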
Setting up Dataproc on Google
Create a Google Cloud Platform account (the sign-up link is at the top right of the GCP home page)
Go to the API Manager and search for / enable the following APIs:
Google Cloud Storage
Google Cloud Storage JSON API
Google Cloud Dataproc API
Under Credentials, click Create Credentials and select Service account key. Then select New service account, enter a Name, and set the Key type to JSON.
Install the Google Cloud SDK
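Once the JSON key file from step 3 is downloaded, point the environment variable from the Minimal setup section at it (the path below is a placeholder):

export GOOGLE_APPLICATION_CREDENTIALS="$HOME/my-service-account-key.json"  # placeholder path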
Advanced Configuration
To run in other AWS regions, upload your source tree, run make, and use other advanced mrjob features, you’ll need to set up mrjob.conf. mrjob looks for its conf file in:
The contents of $MRJOB_CONF
~/.mrjob.conf
/etc/mrjob.conf
See the mrjob.conf documentation for more information; a sketch of a minimal config file follows.
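As a rough sketch, mrjob.conf is a YAML file keyed by runner. The option names below follow the 0.6.x documentation, and the values are illustrative placeholders, not required settings:

runners:
  emr:
    region: us-west-2                  # placeholder AWS region
    setup:
    - 'export TZ=America/Los_Angeles'  # example environment variable
  hadoop:
    setup:
    - 'make -C my-src-dir'             # placeholder build step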
Project Links
Reference
More Information
Thanks to Greg Killion (ROMEO ECHO_DELTA) for the logo.