
A lightweight framework to build and execute data processing pipelines in PySpark (Apache Spark's Python API)

Project description

sparklanes


sparklanes is a lightweight data processing framework for Apache Spark, written in Python. It was built to make complex Spark processing pipelines simpler to assemble, by shifting the focus towards writing data processing code without having to spend much time on the surrounding application architecture.

Data processing pipelines, or lanes, are built by stringing together encapsulated processor classes. This allows lanes to be defined with processors in an arbitrary order, and processors to be added, removed, or swapped with ease.
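The snippet below is a minimal sketch of that idea, loosely following the Task decorator, Lane class, and self.cache mechanism described in the documentation. The processor names and the plain-Python payload are illustrative only; a real lane would typically operate on Spark DataFrames.

from sparklanes import Lane, Task


# Any class can act as a processor once it is decorated with `Task`,
# which names the entry method the lane should call.
@Task('extract')
class ExtractNumbers(object):
    def extract(self):
        # Share the result with downstream processors via the framework's
        # cache; cached objects become attributes of subsequent tasks.
        self.cache('numbers', [1, 2, 3])


@Task('transform')
class SquareNumbers(object):
    def transform(self):
        self.cache('numbers', [n * n for n in self.numbers])


@Task('load')
class PrintNumbers(object):
    def load(self):
        print(self.numbers)


# String the encapsulated processors together and run the lane.
lane = (Lane(name='ExampleLane')
        .add(ExtractNumbers)
        .add(SquareNumbers)
        .add(PrintNumbers))
lane.run()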

Processing pipelines can also be defined in lane configuration YAML files, which can then be packaged and submitted to Spark with a single command. Alternatively, the same can be achieved programmatically through the framework's API, as sketched above.
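As a rough illustration, such a lane configuration file might look something like the following. The exact schema is defined in the documentation, so treat the key names and module paths below as a hypothetical sketch rather than a verbatim template.

lane:
  name: ExampleLane
  tasks:
    # Fully qualified paths to the processor classes, in execution order
    - class: mypackage.tasks.ExtractNumbers
    - class: mypackage.tasks.SquareNumbers
    - class: mypackage.tasks.PrintNumbers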

Documentation

Find the documentation on sparklanes.readthedocs.io

Download files


Source Distribution

sparklanes-0.2.1.tar.gz (14.9 kB, source)

Built Distributions

sparklanes-0.2.1-py3-none-any.whl (17.9 kB, py3)

sparklanes-0.2.1-py2-none-any.whl (17.9 kB, py2)
