Column-wise type annotations for pyspark DataFrames
Project description
I love Spark! But in production code I’m always a bit wary when I see:
from pyspark.sql import DataFrame

def foo(df: DataFrame) -> DataFrame:
    # do stuff
    return df
Because… How do I know which columns are supposed to be in df?
Using typedspark, we can be more explicit about what these data should look like.
from typedspark import Column, DataSet, Schema
from pyspark.sql.types import LongType, StringType

class Person(Schema):
    id: Column[LongType]
    name: Column[StringType]
    age: Column[LongType]

def foo(df: DataSet[Person]) -> DataSet[Person]:
    # do stuff
    return df
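Conceptually, the runtime side of this check boils down to comparing a DataFrame's columns against the schema's type annotations. Below is a minimal pure-Python sketch of that idea — not typedspark's actual implementation; the `Column`, `Schema`, and type classes here are simplified stand-ins so the snippet runs without a Spark installation:

```python
from typing import Generic, TypeVar, get_type_hints

T = TypeVar("T")

class Column(Generic[T]):
    """Stand-in for typedspark.Column: an annotation carrying a column's type."""

class LongType: ...
class StringType: ...

class Schema:
    """Stand-in for typedspark.Schema."""
    @classmethod
    def column_names(cls) -> list[str]:
        # The annotated attributes of the subclass are the expected columns.
        return list(get_type_hints(cls))

class Person(Schema):
    id: Column[LongType]
    name: Column[StringType]
    age: Column[LongType]

def validate(df_columns: list[str], schema: type[Schema]) -> None:
    """Sketch of a runtime check: every schema column must be present."""
    missing = set(schema.column_names()) - set(df_columns)
    if missing:
        raise TypeError(f"DataFrame is missing columns: {sorted(missing)}")

validate(["id", "name", "age"], Person)   # passes silently
# validate(["id", "name"], Person)        # would raise TypeError for 'age'
```

The real library does considerably more (it also checks column types against the `Column[...]` annotations, and integrates with linters for static checks), but the schema-as-class pattern is the core idea.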
The advantages include:
Improved readability of the code
Typechecking, both at runtime and during linting
Auto-complete of column names
Easy refactoring of column names
Easier unit testing through the generation of empty DataSets based on their schemas
Improved documentation of tables
Installation
pip install typedspark
Documentation
Please see our documentation here.
FAQ
I found a bug! What should I do?
Great! Please make an issue and we’ll look into it.
I have a great idea to improve typedspark! How can we make this work?
Awesome, please make an issue and let us know!