
Run and monitor your PySpark jobs on Amazon EMR


EMRFlow 🌀

EMRFlow is designed to simplify running PySpark jobs on Amazon EMR (Elastic MapReduce). It abstracts away the complexities of interacting with the EMR APIs and provides an intuitive command-line interface and Python library to effortlessly submit, monitor, and list your EMR PySpark jobs.

EMRFlow serves as both a library and a command-line tool.

To install EMRFlow, please run:

pip install emrflow

Configuration

Create an emr_serverless_config.json file containing the following details and store it in your home directory:

{
    "application_id": "",
    "job_role": "",
    "region": ""
}
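
If you are unsure which values to fill in, one way to look up the application ID is the boto3 EMR Serverless client. A minimal sketch, assuming boto3 is installed and AWS credentials are configured; the region is a placeholder:

import boto3

# List EMR Serverless applications to find the application_id for the config
client = boto3.client("emr-serverless", region_name="<region>")
for app in client.list_applications()["applications"]:
    print(app["id"], app.get("name"), app["state"])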

Usage

Please read the GETTING STARTED guide to integrate EMRFlow into your project.

EMRFlow offers several commands to manage your PySpark jobs. Let's explore some key functionalities:

Help

emrflow serverless --help


Package Dependencies

You will need to package dependencies before running an EMR job if your code requires external libraries or imports local modules from your code base. See Scenarios 2-4 in GETTING STARTED.

emrflow serverless package-dependencies --help


Submit PySpark Job

emrflow serverless run --help


emrflow serverless run \
        --job-name "<job-name>" \
        --entry-point "<location-of-main-python-file>" \
        --spark-submit-parameters " --conf spark.executor.cores=8 \
                                    --conf spark.executor.memory=32g \
                                    --conf spark.driver.cores=8 \
                                    --conf spark.driver.memory=32g \
                                    --conf spark.dynamicAllocation.maxExecutors=100" \
        --s3-code-uri "s3://<emr-s3-path>" \
        --s3-logs-uri "s3://<emr-s3-path>/logs" \
        --execution-timeout 0 \
        --ping-duration 60 \
        --wait \
        --show-output
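
In this example, --wait blocks until the job reaches a terminal state, --show-output prints the job's output once available, --ping-duration sets the status-polling interval in seconds, and --execution-timeout 0 leaves the run untimed. These readings follow the flag names; emrflow serverless run --help has the authoritative descriptions.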

List Previous Runs

emrflow serverless list-job-runs --help

Get Logs of Previous Runs

emrflow serverless get-logs --help
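
Logs are written under the --s3-logs-uri prefix, so you can also fetch them straight from S3. A minimal sketch with boto3, assuming the usual EMR Serverless layout (applications/<application-id>/jobs/<job-run-id>/...); the bucket and key here are hypothetical placeholders:

import gzip

import boto3

# Hypothetical example: read the Spark driver stdout written under
# s3://<emr-s3-path>/logs; the exact key layout may vary.
s3 = boto3.client("s3")
key = "logs/applications/<application-id>/jobs/<job-run-id>/SPARK_DRIVER/stdout.gz"
obj = s3.get_object(Bucket="<emr-s3-bucket>", Key=key)
print(gzip.decompress(obj["Body"].read()).decode())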


Use EMRFlow as an API

import os
from emrflow import emr_serverless

# initialize connection
emr_serverless.init_connection()

# submit job to EMR Serverless
emr_job_id = emr_serverless.run(
    job_name="<job-name>",
    entry_point="<location-of-main-python-file>",
    spark_submit_parameters="--conf spark.executor.cores=8 \
                            --conf spark.executor.memory=32g \
                            --conf spark.driver.cores=8 \
                            --conf spark.driver.memory=32g \
                            --conf spark.dynamicAllocation.maxExecutors=100",
    wait=True,
    show_output=True,
    s3_code_uri="s3://<emr-s3-path>",
    s3_logs_uri="s3://<emr-s3-path>/logs",
    execution_timeout=0,
    ping_duration=60,
    tags=["env:dev"],
)
print(emr_job_id)
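
Since run() returns the job run ID, you can also inspect the run directly against the underlying EMR Serverless API. A minimal sketch with boto3; the application ID and region are the values from your emr_serverless_config.json:

import boto3

# Query the state of the job run submitted above
client = boto3.client("emr-serverless", region_name="<region>")
response = client.get_job_run(
    applicationId="<application-id>",
    jobRunId=emr_job_id,
)
print(response["jobRun"]["state"])  # e.g. RUNNING, SUCCESS, FAILED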

And much more!

Contributing

We welcome contributions to EMRFlow. Please open an issue discussing the change you would like to see. Create a feature branch to work on that issue and open a Pull Request once it is ready for review.

Code style

We use Black as a code formatter. The easiest way to ensure your commits are always formatted with the correct version of Black is to use pre-commit: install it, then run pre-commit install once in your local copy of the repo.
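
For example, from the repository root:

pip install pre-commit
pre-commit install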
