Python MapReduce framework
mrjob is a Python 2.7/3.4+ package that helps you write and run Hadoop Streaming jobs.
mrjob fully supports Amazon’s Elastic MapReduce (EMR) service, which allows you to buy time on a Hadoop cluster on an hourly basis. mrjob has basic support for Google Cloud Dataproc (Dataproc) which allows you to buy time on a Hadoop cluster on a minute-by-minute basis. It also works with your own Hadoop cluster.
Some important features:
- Run jobs on EMR, Google Cloud Dataproc, your own Hadoop cluster, or locally (for testing).
- Write multi-step jobs (one map-reduce step feeds into the next; see the sketch after this list)
- Easily launch Spark jobs on EMR or your own Hadoop cluster
- Duplicate your production environment inside Hadoop
  - Upload your source tree and put it in your job’s $PYTHONPATH
  - Run make and other setup scripts
  - Set environment variables (e.g. $TZ)
  - Easily install Python packages from tarballs (EMR only)
  - Setup handled transparently by mrjob.conf config file
- Automatically interpret error logs
- SSH tunnel to the Hadoop job tracker (EMR only)
- Minimal setup
  - To run on EMR, set $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY
  - To run on Dataproc, set $GOOGLE_APPLICATION_CREDENTIALS
  - No setup needed to use mrjob on your own Hadoop cluster
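For instance, multi-step jobs are written by overriding steps() to return a list of MRStep objects. The sketch below builds on the word-count example further down: the first step counts words, and a second reducer-only step picks the most-used one (MRMostUsedWord is just an illustrative name):

from mrjob.job import MRJob
from mrjob.step import MRStep
import re

WORD_RE = re.compile(r"[\w']+")


class MRMostUsedWord(MRJob):

    def steps(self):
        # Step 1 counts words; step 2 finds the most common one.
        return [
            MRStep(mapper=self.mapper_get_words,
                   combiner=self.combiner_count_words,
                   reducer=self.reducer_count_words),
            MRStep(reducer=self.reducer_find_max_word)
        ]

    def mapper_get_words(self, _, line):
        # Emit (word, 1) for every word in the line.
        for word in WORD_RE.findall(line):
            yield (word.lower(), 1)

    def combiner_count_words(self, word, counts):
        yield (word, sum(counts))

    def reducer_count_words(self, word, counts):
        # Send every (count, word) pair to the same (None) key so
        # the next step sees them all together.
        yield None, (sum(counts), word)

    def reducer_find_max_word(self, _, word_count_pairs):
        # Pairs compare by count first, so max() picks the most-used word.
        yield max(word_count_pairs)


if __name__ == '__main__':
    MRMostUsedWord.run()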
Installation

pip install mrjob
As of v0.7.0, Amazon Web Services and Google Cloud Services are optional dependencies. To use these services, install with the aws and google targets, respectively. For example:
pip install mrjob[aws]
A Simple Map Reduce Job
Code for this example and more lives in mrjob/examples.
"""The classic MapReduce job: count the frequency of words. """ from mrjob.job import MRJob import re WORD_RE = re.compile(r"[\w']+") class MRWordFreqCount(MRJob): def mapper(self, _, line): for word in WORD_RE.findall(line): yield (word.lower(), 1) def combiner(self, word, counts): yield (word, sum(counts)) def reducer(self, word, counts): yield (word, sum(counts)) if __name__ == '__main__': MRWordFreqCount.run()
Try It Out!
# locally
python mrjob/examples/mr_word_freq_count.py README.rst > counts

# on EMR
python mrjob/examples/mr_word_freq_count.py README.rst -r emr > counts

# on Dataproc
python mrjob/examples/mr_word_freq_count.py README.rst -r dataproc > counts

# on your Hadoop cluster
python mrjob/examples/mr_word_freq_count.py README.rst -r hadoop > counts
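By default, keys and values are written out as JSON, separated by a tab, so counts will contain lines along these lines (the words and numbers here are just illustrative):

"and"	5
"counts"	2
"hadoop"	11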
Setting up EMR on Amazon

- Create an Amazon Web Services account
- Get your access key ID and secret access key
- Set the environment variables $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY accordingly
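For example, in your shell (the values are placeholders for your own credentials):

export AWS_ACCESS_KEY_ID=<your access key ID>
export AWS_SECRET_ACCESS_KEY=<your secret access key>
python mrjob/examples/mr_word_freq_count.py README.rst -r emr > counts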
Setting up Dataproc on Google
- Create a Google Cloud Platform account
- Learn about Google Cloud Platform “projects”
- Select or create a Cloud Platform Console project
- Enable billing for your project
- Go to the API Manager and search for and enable the following APIs:
  - Google Cloud Storage
  - Google Cloud Storage JSON API
  - Google Cloud Dataproc API
- Under Credentials, click Create Credentials and select Service account key. Then select New service account, enter a Name, and select Key type JSON.
- Install the Google Cloud SDK
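Then point mrjob at the JSON key you created above (the path below is a placeholder):

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your-service-account-key.json
python mrjob/examples/mr_word_freq_count.py README.rst -r dataproc > counts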
Advanced Configuration

To run in other AWS regions, upload your source tree, run make, and use other advanced mrjob features, you’ll need to set up mrjob.conf. mrjob looks for its conf file in:
- The contents of $MRJOB_CONF
- ~/.mrjob.conf
- /etc/mrjob.conf
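As a rough sketch, such a file might enable some of the features above like this (YAML; the option names region, cmdenv, and setup are assumed from the v0.7 EMR runner, and the values are placeholders):

runners:
  emr:
    region: us-west-2            # run in a non-default AWS region
    cmdenv:
      TZ: America/Los_Angeles    # set environment variables for your tasks
    setup:
      - make                     # setup commands run before each task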
See the mrjob.conf documentation for more information.
More Information

- PyCon 2011 mrjob overview
- Introduction to Recommendations and MapReduce with mrjob (source code)
- Social Graph Analysis Using Elastic MapReduce and PyPy