Column-wise type annotations for pyspark DataFrames
Project description
We love Spark! But in production code we’re wary when we see:
```python
from pyspark.sql import DataFrame

def foo(df: DataFrame) -> DataFrame:
    # do stuff
    return df
```
Because… how do we know which columns are supposed to be in df?

Using typedspark, we can be more explicit about what this data should look like.
```python
from typedspark import Column, DataSet, Schema
from pyspark.sql.types import LongType, StringType

class Person(Schema):
    id: Column[LongType]
    name: Column[StringType]
    age: Column[LongType]

def foo(df: DataSet[Person]) -> DataSet[Person]:
    # do stuff
    return df
```
The advantages include:

- Improved readability of the code
- Type checking, both during runtime and linting
- Auto-complete of column names
- Easy refactoring of column names
- Easier unit testing through the generation of empty DataSets based on their schemas
- Improved documentation of tables
Installation
You can install typedspark from pypi by running:
```bash
pip install typedspark
```
By default, typedspark does not list pyspark as a dependency, since many platforms (e.g. Databricks) come with pyspark preinstalled. If you want to install typedspark with pyspark, you can run:
```bash
pip install "typedspark[pyspark]"
```
Documentation
Please see our documentation on readthedocs.