A lightweight and flexible data validation and testing tool for statistical data objects.
A Statistical Data Testing Toolkit
A data validation library for scientists, engineers, and analysts seeking correctness.
pandera provides a flexible and expressive API for performing data validation on dataframe-like objects, making data processing pipelines more readable and robust. Dataframes contain information that pandera explicitly validates at runtime, which is useful in production-critical and reproducible research settings. With pandera, you can:
- Define a schema once and use it to validate different dataframe types including pandas, dask, modin, and pyspark.
- Check the types and properties of columns in a DataFrame or values in a Series.
- Perform more complex statistical validation like hypothesis testing.
- Seamlessly integrate with existing data analysis/processing pipelines via function decorators.
- Define schema models with the class-based API with pydantic-style syntax and validate dataframes using the typing syntax.
- Synthesize data from schema objects for property-based testing with pandas data structures.
- Lazily validate dataframes so that all validation checks are executed before raising an error.
- Integrate with a rich ecosystem of python tools like pydantic, fastapi, and mypy.
Documentation
The official documentation is hosted on ReadTheDocs: https://pandera.readthedocs.io
Install
Using pip:
pip install pandera
Using conda:
conda install -c conda-forge pandera
Extras
Installing additional functionality (a usage sketch of the hypotheses extra follows the install commands):
pip
pip install pandera[hypotheses] # hypothesis checks
pip install pandera[io] # yaml/script schema io utilities
pip install pandera[strategies] # data synthesis strategies
pip install pandera[mypy] # enable static type-linting of pandas
pip install pandera[fastapi] # fastapi integration
pip install pandera[dask] # validate dask dataframes
pip install pandera[pyspark] # validate pyspark dataframes
pip install pandera[modin] # validate modin dataframes
pip install pandera[modin-ray] # validate modin dataframes with ray
pip install pandera[modin-dask] # validate modin dataframes with dask
pip install pandera[geopandas] # validate geopandas geodataframes
conda
conda install -c conda-forge pandera-hypotheses # hypothesis checks
conda install -c conda-forge pandera-io # yaml/script schema io utilities
conda install -c conda-forge pandera-strategies # data synthesis strategies
conda install -c conda-forge pandera-mypy # enable static type-linting of pandas
conda install -c conda-forge pandera-fastapi # fastapi integration
conda install -c conda-forge pandera-dask # validate dask dataframes
conda install -c conda-forge pandera-pyspark # validate pyspark dataframes
conda install -c conda-forge pandera-modin # validate modin dataframes
conda install -c conda-forge pandera-modin-ray # validate modin dataframes with ray
conda install -c conda-forge pandera-modin-dask # validate modin dataframes with dask
conda install -c conda-forge pandera-geopandas # validate geopandas geodataframes
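The hypotheses extra pulls in the dependencies for the Hypothesis class. Here is a minimal sketch, assuming that extra is installed; the column names and data below are made up for illustration:
import pandas as pd
import pandera as pa

# hypothetical data: heights grouped by sex
heights = pd.DataFrame({
    "height_in_cm": [175.0, 180.0, 168.0, 162.0, 158.0, 165.0],
    "sex": ["M", "M", "M", "F", "F", "F"],
})

height_schema = pa.DataFrameSchema({
    "height_in_cm": pa.Column(float, checks=[
        # two-sample t-test asserting that "M" heights are greater
        # than "F" heights at the 0.05 significance level
        pa.Hypothesis.two_sample_ttest(
            sample1="M",
            sample2="F",
            groupby="sex",
            relationship="greater_than",
            alpha=0.05,
        ),
    ]),
    "sex": pa.Column(str),
})

height_schema.validate(heights)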
Quick Start
import pandas as pd
import pandera as pa
# data to validate
df = pd.DataFrame({
    "column1": [1, 4, 0, 10, 9],
    "column2": [-1.3, -1.4, -2.9, -10.1, -20.4],
    "column3": ["value_1", "value_2", "value_3", "value_2", "value_1"],
})

# define schema
schema = pa.DataFrameSchema({
    "column1": pa.Column(int, checks=pa.Check.le(10)),
    "column2": pa.Column(float, checks=pa.Check.lt(-1.2)),
    "column3": pa.Column(str, checks=[
        pa.Check.str_startswith("value_"),
        # define custom checks as functions that take a series as input and
        # output a boolean or boolean Series
        pa.Check(lambda s: s.str.split("_", expand=True).shape[1] == 2)
    ]),
})
validated_df = schema(df)
print(validated_df)
# column1 column2 column3
# 0 1 -1.3 value_1
# 1 4 -1.4 value_2
# 2 0 -2.9 value_3
# 3 10 -10.1 value_2
# 4 9 -20.4 value_1
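When data is invalid, validation raises an informative error. Continuing the example above, a minimal sketch (the invalid values are made up for illustration): passing lazy=True collects all failures before raising, per the lazy validation feature listed earlier.
# eager validation raises a SchemaError on the first failing check;
# lazy=True runs all checks and raises SchemaErrors at the end
invalid_df = pd.DataFrame({
    "column1": [11, 4],       # 11 violates Check.le(10)
    "column2": [-1.3, 0.5],   # 0.5 violates Check.lt(-1.2)
    "column3": ["value_1", "value_2"],
})

try:
    schema.validate(invalid_df, lazy=True)
except pa.errors.SchemaErrors as exc:
    print(exc.failure_cases)  # dataframe of all collected failure cases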
Schema Model
pandera also provides an alternative API for expressing schemas, inspired by dataclasses and pydantic. The equivalent SchemaModel for the above DataFrameSchema would be:
from pandera.typing import Series
class Schema(pa.SchemaModel):
    column1: Series[int] = pa.Field(le=10)
    column2: Series[float] = pa.Field(lt=-1.2)
    column3: Series[str] = pa.Field(str_startswith="value_")

    @pa.check("column3")
    def column_3_check(cls, series: Series[str]) -> bool:
        """Check that values have two elements after being split with '_'."""
        # returns a single boolean for the whole series
        return series.str.split("_", expand=True).shape[1] == 2
Schema.validate(df)
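Schema models also integrate with function type annotations. A minimal sketch, assuming the Schema and df defined above; the transform function and its clipping logic are illustrative, not part of the library:
from pandera.typing import DataFrame

@pa.check_types
def transform(data: DataFrame[Schema]) -> DataFrame[Schema]:
    # both the input and the returned dataframe are validated against Schema
    return data.assign(column1=data["column1"].clip(upper=10))

transform(df)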
Development Installation
git clone https://github.com/pandera-dev/pandera.git
cd pandera
pip install -r requirements-dev.txt
pip install -e .
Tests
pip install pytest
pytest tests
Contributing to pandera
All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
A detailed overview on how to contribute can be found in the contributing guide on GitHub.
Issues
Submit feature requests or bug reports on the GitHub issues page.
Need Help?
There are many ways of getting help with your questions. You can ask a question on the GitHub Discussions page, or reach out to the maintainers and the pandera community on Discord.
Why pandera?
- dataframe-centric data types, column nullability, and uniqueness are first-class concepts.
- Define schema models with the class-based API with pydantic-style syntax and validate dataframes using the typing syntax.
- check_input and check_output decorators enable seamless integration with existing code (see the sketch after this list).
- Checks provide flexibility and performance by giving you access to the pandas API by design, and offer built-in checks for common data tests.
- The Hypothesis class provides a tidy-first interface for statistical hypothesis testing.
- Checks and Hypothesis objects support both tidy and wide data validation.
- Use schemas as generative contracts to synthesize data for unit testing.
- Schema inference allows you to bootstrap schemas from data.
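As referenced in the list above, here is a minimal sketch of the check_input decorator and schema inference, reusing the schema and df objects from the Quick Start; the process function and column4 are made up for illustration:
@pa.check_input(schema)
def process(data: pd.DataFrame) -> pd.DataFrame:
    # `data` has already been validated by the time this body runs;
    # pa.check_output(schema) works analogously for return values
    return data.assign(column4=data["column1"] * 2)

process(df)

# bootstrap a schema from existing data, then refine it by hand
inferred_schema = pa.infer_schema(df)
print(inferred_schema)
With the strategies extra installed, schema objects can also synthesize valid data for tests, e.g. schema.example(size=3).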
Alternative Data Validation Libraries
Here are a few alternative libraries for validating Python data structures.
Generic Python object data validation
pandas-specific data validation
- opulent-pandas
- PandasSchema
- pandas-validator
- table_enforcer
- dataenforce
- strictly typed pandas
- marshmallow-dataframe
Other tools for data validation
How to Cite
If you use pandera
in the context of academic or industry research, please
consider citing the paper and/or software package.
Paper
@InProceedings{ niels_bantilan-proc-scipy-2020,
  author    = { {N}iels {B}antilan },
  title     = { pandera: {S}tatistical {D}ata {V}alidation of {P}andas {D}ataframes },
  booktitle = { {P}roceedings of the 19th {P}ython in {S}cience {C}onference },
  pages     = { 116 - 124 },
  year      = { 2020 },
  editor    = { {M}eghann {A}garwal and {C}hris {C}alloway and {D}illon {N}iederhut and {D}avid {S}hupe },
  doi       = { 10.25080/Majora-342d178e-010 }
}
Software Package
License and Credits
pandera
is licensed under the MIT license and is written and
maintained by Niels Bantilan (niels@pandera.ci)
BLAKE2b-256 | 0e65efaa02d43606a9b24a8fe1ffdf8c80d30b3ad67c9945d3a050f7b08d1540 |