
Generic ETL Pipeline Framework for Apache Spark


Overview

Goal

Many public clouds provide managed Apache Spark as a service, such as Databricks, AWS EMR, and Oracle OCI Data Flow; see the Supported platforms section below for a detailed list.

However, the way you deploy and launch a Spark application differs from one cloud Spark platform to another, and the approaches are incompatible.

Now with spark-etl, you can deploy and launch your Spark application in a standard way.

Benefit

An application built with spark-etl can be deployed to and launched on different Spark providers without changing its source code. Please check out the demos listed below.

Application

An application is a Python program. It contains the following files (a typical layout is sketched below):

  • A main.py file, which contains the application entry point.
  • A manifest.json file, which specifies the application's metadata.
  • A requirements.txt file, which specifies the application's dependencies.
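For orientation, a hypothetical application directory (the name my_app is illustrative, not prescribed by spark-etl) would then look like:

my_app/
    main.py            # application entry point
    manifest.json      # application metadata
    requirements.txt   # application dependencies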

Application entry signature

In your application's main.py, you should have a main function with the following signature:

def main(spark, input_args, sysops={}):
    # your code here

  • spark is the Spark session object.
  • input_args is a dict containing the arguments the user specified when running the application.
  • sysops holds the system options passed in; it is platform specific, and the job submitter may inject platform-specific objects into it.
  • Your main function's return value should be a JSON object; it is returned from the job submitter to the caller.
Here is an application example.
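Beyond the stub above, a minimal, self-contained main could look like the following sketch; the returned field names are illustrative assumptions, not anything required by spark-etl:

def main(spark, input_args, sysops={}):
    # Placeholder ETL step: build a tiny DataFrame and count its rows.
    df = spark.range(10)

    # Return a JSON-serializable dict; the job submitter hands it back
    # to the caller. The keys below are illustrative only.
    return {
        "row_count": df.count(),  # 10 for this placeholder DataFrame
        "echo": input_args,       # assumes the input arguments are JSON-serializable
    }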

Build your application

etl -a build -c <config-filename> -p <application-name>

For details, please check out the examples below.
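For example, assuming a config file named config.json and an application named myapp (both placeholder names, not values prescribed by spark-etl), the build step could be invoked as:

etl -a build -c config.json -p myapp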

Deploy your application

etl -a deploy -c <config-filename> -p <application-name> -f <profile-name>

For details, please check out the examples below.
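For example, with the same placeholder names plus a hypothetical deployment profile called aws-emr, the deploy step could look like:

etl -a deploy -c config.json -p myapp -f aws-emr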

Run your application

etl -a run -c <config-filename> -p <application-name> -f <profile-name> --run-args <input-filename>

For details, please check out the examples below.
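For example, passing the application's input arguments from a hypothetical input.json file:

etl -a run -c config.json -p myapp -f aws-emr --run-args input.json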

Supported platforms

  • Your own Apache Spark cluster: you set up and manage the Spark cluster yourself.
  • PySpark: uses the PySpark package, fully compatible with the other Spark platforms; allows you to test your pipeline on a single computer.
    • Demo: Access Data on local filesystem
    • Demo: Access Data on AWS S3
  • Databricks: you host your Spark cluster in Databricks.
  • Amazon AWS EMR: you host your Spark cluster in Amazon AWS EMR.
    • Demo: Access Data on AWS S3
  • Google Cloud: you host your Spark cluster in Google Cloud.
  • Microsoft Azure HDInsight: you host your Spark cluster in Microsoft Azure HDInsight.
  • Oracle Cloud Infrastructure, Data Flow Service: you host your Spark cluster in the OCI Data Flow service.
  • IBM Cloud: you host your Spark cluster in IBM Cloud.

APIs

pydocs for APIs

Job Deployer

For job deployers, please check the wiki.

Job Submitter

For job submitters, please check the wiki.


Download files


Source Distribution

spark-etl-0.0.107.tar.gz (24.5 kB)

Built Distribution

spark_etl-0.0.107-py3-none-any.whl (33.9 kB)
