
Spark based ETL

Project description

Introduction

This Spark package is designed to process data from various sources, perform transformations, and write the results to different sinks. It follows the pipeline design pattern to provide a flexible and modular approach to data processing.

Setup

Install this package using:

pip install pysparkify

Create a spark_config.conf file at config/spark_config.conf containing all Spark-related settings, in this format:

[SPARK]
spark.master=local[*]
spark.app.name=PysparkifyApp
spark.executor.memory=4g
spark.driver.memory=2g
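The [SPARK] section above is plain INI syntax, so it can be read with Python's built-in configparser. The snippet below is an illustrative sketch of how such a file can be parsed into key/value pairs, not pysparkify's actual loading code; each pair could then be handed to SparkSession.builder.config(key, value).

```python
from configparser import ConfigParser

# Parse the [SPARK] section (INI syntax, same content as the file above).
parser = ConfigParser()
parser.read_string("""
[SPARK]
spark.master=local[*]
spark.app.name=PysparkifyApp
spark.executor.memory=4g
spark.driver.memory=2g
""")

spark_conf = dict(parser["SPARK"])
print(spark_conf["spark.master"])    # local[*]
print(spark_conf["spark.app.name"])  # PysparkifyApp
```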

This library abstracts Spark data processing workflows. For example, suppose you want to:

  • take the first two rows of the data and save them as a separate output
  • take an average and save it as a separate output

given the sample data below:

name,age,city
Hayaan,10,Islamabad
Jibraan,8,ShahAlam
Allyan,3,Paris
John,35,San Francisco
Doe,22,Houston
Dane,30,Seattle

Your recipe reads the CSV data as a source, transforms the data, and optionally saves the output of each transformation to a sink. Below is the recipe.yml for this operation.

source:
  - type: CsvSource
    config:
      name: csv
      path: "resources/data/input_data.csv"

transformer:
  - type: SQLTransformer
    config:
      name: transformer1
      source: 
        - name: csv
          as_name: t1
      statement: 
        - sql: "SELECT * from t1 limit 2"
          as_name: trx1
          to_sink: sink1
        - sql: "select AVG(age) from trx1"
          as_name: trx2
          to_sink: sink2

sink:
  - type: CsvSink
    config:
      name: sink1
      path: "output/output_data.csv"
  - type: CsvSink
    config:
      name: sink2
      path: "output/avgage_data.csv"
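With the sample data above, sink1 receives the first two rows, and sink2 receives the average age. Note that the second statement reads from trx1, so it averages only the two-row subset. The sketch below reproduces the same semantics in plain stdlib Python purely to illustrate what the two SQL statements compute; pysparkify itself runs them through Spark SQL.

```python
import csv
import io

# The sample input from above (city column abbreviated to three rows).
data = """name,age,city
Hayaan,10,Islamabad
Jibraan,8,ShahAlam
Allyan,3,Paris
"""

rows = list(csv.DictReader(io.StringIO(data)))

# "SELECT * from t1 limit 2" -> trx1 -> sink1
trx1 = rows[:2]
print([r["name"] for r in trx1])  # ['Hayaan', 'Jibraan']

# "select AVG(age) from trx1" -> trx2 -> sink2
avg_age = sum(int(r["age"]) for r in trx1) / len(trx1)
print(avg_age)  # 9.0
```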
      

Usage

This library can be run as a command line tool:

pysparkify your_recipe.yml

Or use it in your Python scripts:

from pysparkify.lib.app import run
run('your_recipe.yml')

Design

The package is structured as follows:

Source, Sink and Transformer Abstraction

The package defines abstract classes Source, Sink and Transformer to represent data sources, sinks and transformers. It also provides concrete classes, including CsvSource, CsvSink and SQLTransformer, which inherit from the abstract classes. This design allows you to add new source and sink types with ease.
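The class names Source and Sink come from the package, but the method names and signatures below are illustrative assumptions. This minimal sketch uses plain Python ABCs to show how a new concrete source type would slot in alongside CsvSource:

```python
from abc import ABC, abstractmethod

class Source(ABC):
    """Produces data for the pipeline (sketch; method name is assumed)."""
    @abstractmethod
    def read(self):
        ...

class Sink(ABC):
    """Consumes pipeline output (sketch; method name is assumed)."""
    @abstractmethod
    def write(self, df):
        ...

# A hypothetical new source type, analogous to CsvSource.
class InMemorySource(Source):
    def __init__(self, rows):
        self.rows = rows

    def read(self):
        # A real implementation would return a Spark DataFrame;
        # plain rows keep this sketch self-contained.
        return self.rows

src = InMemorySource([("Hayaan", 10), ("Jibraan", 8)])
print(src.read())  # [('Hayaan', 10), ('Jibraan', 8)]
```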

Configuration via recipe.yml

The package reads its configuration from a recipe.yml file. This YAML file specifies the source, sink, and transformation configurations. It allows you to define different data sources, sinks, and transformation queries.

Transformation Queries

Transformations are performed by SQLTransformer using Spark SQL queries defined in the configuration. These queries are executed on the data from the source before the results are written to the sink. New transformers can be implemented by extending the Transformer abstract class: a transformer takes Spark DataFrames from sources, processes them, and sends DataFrames to sinks to be saved.
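A minimal sketch of that extension point, with plain lists of dicts standing in for Spark DataFrames; the transform method name is an assumption, not the package's actual signature:

```python
from abc import ABC, abstractmethod

class Transformer(ABC):
    """Sketch of the Transformer extension point."""
    @abstractmethod
    def transform(self, df):
        """Take a DataFrame-like input and return a transformed one."""
        ...

# A hypothetical custom transformer that filters rows by a minimum age,
# analogous to applying a WHERE clause in SQLTransformer.
class MinAgeTransformer(Transformer):
    def __init__(self, min_age):
        self.min_age = min_age

    def transform(self, df):
        return [row for row in df if row["age"] >= self.min_age]

t = MinAgeTransformer(18)
adults = t.transform([{"name": "John", "age": 35}, {"name": "Allyan", "age": 3}])
print([r["name"] for r in adults])  # ['John']
```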

Pipeline Execution

The package reads data from the specified source, performs transformations based on the configured SQL queries, and then writes the results to the specified sink. You can configure multiple sources and sinks within the same package.

How to Contribute

  1. There are plenty of ways to contribute; implementations of new Sources and Sinks top the list.
  2. Open a PR.
  3. Once the PR is reviewed and approved, the included GitHub Actions workflow deploys the new version directly to the PyPI repository.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pysparkify-0.26.2.tar.gz (8.4 kB)

Uploaded Source

Built Distribution

pysparkify-0.26.2-py3-none-any.whl (10.1 kB)

Uploaded Python 3

File details

Details for the file pysparkify-0.26.2.tar.gz.

File metadata

  • Download URL: pysparkify-0.26.2.tar.gz
  • Upload date:
  • Size: 8.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for pysparkify-0.26.2.tar.gz
  • SHA256: e0b4583d135bfd2998dedb4fce38abc79af006350a0bf2a0eec977b0894bb3d0
  • MD5: 68a9d5311316d2f14a344de249550712
  • BLAKE2b-256: f4a67c13b33b468a4cd86f165165a49efdb403fdebbe0f26fb3b6699911d90a3


File details

Details for the file pysparkify-0.26.2-py3-none-any.whl.

File metadata

  • Download URL: pysparkify-0.26.2-py3-none-any.whl
  • Upload date:
  • Size: 10.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for pysparkify-0.26.2-py3-none-any.whl
  • SHA256: 4c61144c606401c3d5a10dd4d98bf7fa136f735f4d5293ff3174cae43fb6de30
  • MD5: 196b53c0dfd0805c6b78d5ebd4f3ad77
  • BLAKE2b-256: eb61f1183dcb7e84c245edb0687ef77d6a44277dd76d777ea6af3659adb55c40

