Lighthouse for Python: a package facilitating the creation of data pipelines.

Project description


This is a port of Lighthouse, a Scala library that facilitates the creation of data pipelines based on Apache Spark. It also comes with related convenience functions, such as an integration with the AWS Parameter Store.

This port is targeted at Python and PySpark. It is not an exact port of the Scala code: we add what we need as we go along.


One of this library's main uses is to build a class-based data catalog that supports chaining of sources. For example, if you had a dataset in a text file that needed to be transformed (cleaned, some statistic derived, …), you could write it as follows:

from pyhouse.datalake.file_system_data_link import FileSystemDataLink

link = FileSystemDataLink(
    session=get_spark(),  # a helper of yours that returns a SparkSession
    path="s3://bucket-foo/file-bar.csv",
    partitioned_by=("some-key", "another-key"),
    options={"header": True, "sep": "\t"},
)

The advantage of such data links becomes clear when several of them are combined in a module (the "catalog"): there is a single source of truth that many scripts can refer to, and hardcoded paths scattered across scripts become a thing of the past.
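To illustrate the pattern (this is a minimal, self-contained sketch of the catalog idea, not the pyhouse API; the names CsvLink, CATALOG, and get_link are hypothetical), a catalog module could look like this:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for a data link. In pyhouse a link would also hold a
# Spark session and expose read/write logic; here we only model the catalog
# as the single source of truth for dataset locations and options.
@dataclass(frozen=True)
class CsvLink:
    path: str
    options: Optional[dict] = None

# catalog.py: every script imports its datasets from here
# instead of hardcoding S3 paths.
CATALOG = {
    "raw_events": CsvLink(
        path="s3://bucket-foo/file-bar.csv",
        options={"header": True, "sep": "\t"},
    ),
    "clean_events": CsvLink(path="s3://bucket-foo/clean/events.csv"),
}

def get_link(name: str) -> CsvLink:
    """Look up a dataset by its catalog name."""
    return CATALOG[name]
```

A downstream script would then call something like get_link("raw_events") rather than repeating the S3 path, so a relocated dataset only needs one edit in the catalog module.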

Download files

Source Distribution

pyhouse-0.0.13.tar.gz (6.4 kB)

Built Distribution

pyhouse-0.0.13-py3-none-any.whl (14.6 kB)
