
Typedspark: column-wise type annotations for pyspark DataFrames

We love Spark! But in production code we're wary when we see:

from pyspark.sql import DataFrame

def foo(df: DataFrame) -> DataFrame:
    # do stuff
    return df

Because… how do we know which columns are supposed to be in df?

Using typedspark, we can be more explicit about what these data should look like.

from typedspark import Column, DataSet, Schema
from pyspark.sql.types import LongType, StringType

class Person(Schema):
    id: Column[LongType]
    name: Column[StringType]
    age: Column[LongType]

def foo(df: DataSet[Person]) -> DataSet[Person]:
    # do stuff
    return df

The advantages include:

  • Improved readability of the code
  • Type checking, both at runtime and during linting
  • Auto-complete of column names
  • Easy refactoring of column names
  • Easier unit testing through the generation of empty DataSets based on their schemas
  • Improved documentation of tables

Documentation

Please see our documentation on readthedocs.

Installation

You can install typedspark from PyPI by running:

pip install typedspark

By default, typedspark does not list pyspark as a dependency, since many platforms (e.g. Databricks) come with pyspark preinstalled. If you want to install typedspark with pyspark, you can run:

pip install "typedspark[pyspark]"

Demo videos

IDE demo

https://github.com/kaiko-ai/typedspark/assets/47976799/e6f7fa9c-6d14-4f68-baba-fe3c22f75b67

You can find the corresponding code here.

Jupyter / Databricks notebooks demo

https://github.com/kaiko-ai/typedspark/assets/47976799/39e157c3-6db0-436a-9e72-44b2062df808

You can find the corresponding code here.

FAQ

I found a bug! What should I do?
Great! Please make an issue and we'll look into it.

I have a great idea to improve typedspark! How can we make this work?
Awesome, please make an issue and let us know!

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

typedspark-1.5.0.tar.gz (27.2 kB)

Built Distribution

typedspark-1.5.0-py3-none-any.whl (35.1 kB)

File details

Details for the file typedspark-1.5.0.tar.gz.

File metadata

  • Download URL: typedspark-1.5.0.tar.gz
  • Upload date:
  • Size: 27.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.4

File hashes

Hashes for typedspark-1.5.0.tar.gz

  • SHA256: 78170cb87c0b7ee21a0935e7240968878bf969332acfd2ee4a0ee05d4fd425c6
  • MD5: cedcfad96d47c08fd463db568c5c7fe1
  • BLAKE2b-256: 0000752ed241d4372b0cb2a52277a32517aa8a4f26f9e557ea77e5391b74a1b4

See more details on using hashes here.

File details

Details for the file typedspark-1.5.0-py3-none-any.whl.

File metadata

  • Download URL: typedspark-1.5.0-py3-none-any.whl
  • Upload date:
  • Size: 35.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.4

File hashes

Hashes for typedspark-1.5.0-py3-none-any.whl

  • SHA256: 2bcadc53f89a704fca31d477aeb098d7afc6537cb9748ddf2b1190e165b53de9
  • MD5: 79609bd78369745e571f682443fb8d9f
  • BLAKE2b-256: 055bf848c81b0508a68b80177affb2116b1949394012b0a107f29c5fc379703a

