
A lightweight and flexible data validation and testing tool for statistical data objects.



The Open-source Framework for Precision Data Testing

📊 🔎 ✅

Data validation for scientists, engineers, and analysts seeking correctness.



pandera is a Union.ai open source project that provides a flexible and expressive API for performing data validation on dataframe-like objects to make data processing pipelines more readable and robust.

Dataframes contain information that pandera explicitly validates at runtime. This is useful in production-critical or reproducible research settings. With pandera, you can:

  1. Define a schema once and use it to validate different dataframe types including pandas, polars, dask, modin, and pyspark.
  2. Check the types and properties of columns in a DataFrame or values in a Series.
  3. Perform more complex statistical validation like hypothesis testing (see the sketch after this list).
  4. Parse data to standardize the preprocessing steps needed to produce valid data.
  5. Seamlessly integrate with existing data analysis/processing pipelines via function decorators.
  6. Define dataframe models with the class-based API with pydantic-style syntax and validate dataframes using the typing syntax.
  7. Synthesize data from schema objects for property-based testing with pandas data structures.
  8. Lazily validate dataframes so that all validation checks are executed before raising an error.
  9. Integrate with a rich ecosystem of python tools like pydantic, fastapi, and mypy.
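
For item 3, here is a minimal sketch of a hypothesis check, assuming the hypotheses extra (which pulls in scipy) is installed; the data and column names below are illustrative:

import pandas as pd
import pandera as pa

# illustrative data: heights measured for two groups
df = pd.DataFrame({
    "height": [5.6, 6.4, 7.1, 6.6, 4.0, 5.0],
    "group": ["A", "A", "A", "A", "B", "B"],
})

schema = pa.DataFrameSchema({
    "height": pa.Column(float, checks=pa.Hypothesis.two_sample_ttest(
        sample1="A",
        sample2="B",
        groupby="group",
        relationship="greater_than",
        alpha=0.05,
    )),
    "group": pa.Column(str),
})

# raises a SchemaError if the hypothesis is rejected
schema.validate(df)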

Documentation

The official documentation is hosted here: https://pandera.readthedocs.io

Install

Using pip:

pip install pandera

Using conda:

conda install -c conda-forge pandera

Extras

Installing additional functionality:

pip
pip install 'pandera[hypotheses]' # hypothesis checks
pip install 'pandera[io]'         # yaml/script schema io utilities
pip install 'pandera[strategies]' # data synthesis strategies
pip install 'pandera[mypy]'       # enable static type-linting of pandas
pip install 'pandera[fastapi]'    # fastapi integration
pip install 'pandera[dask]'       # validate dask dataframes
pip install 'pandera[pyspark]'    # validate pyspark dataframes
pip install 'pandera[modin]'      # validate modin dataframes
pip install 'pandera[modin-ray]'  # validate modin dataframes with ray
pip install 'pandera[modin-dask]' # validate modin dataframes with dask
pip install 'pandera[geopandas]'  # validate geopandas geodataframes
pip install 'pandera[polars]'     # validate polars dataframes
conda
conda install -c conda-forge pandera-hypotheses  # hypothesis checks
conda install -c conda-forge pandera-io          # yaml/script schema io utilities
conda install -c conda-forge pandera-strategies  # data synthesis strategies
conda install -c conda-forge pandera-mypy        # enable static type-linting of pandas
conda install -c conda-forge pandera-fastapi     # fastapi integration
conda install -c conda-forge pandera-dask        # validate dask dataframes
conda install -c conda-forge pandera-pyspark     # validate pyspark dataframes
conda install -c conda-forge pandera-modin       # validate modin dataframes
conda install -c conda-forge pandera-modin-ray   # validate modin dataframes with ray
conda install -c conda-forge pandera-modin-dask  # validate modin dataframes with dask
conda install -c conda-forge pandera-geopandas   # validate geopandas geodataframes
conda install -c conda-forge pandera-polars      # validate polars dataframes
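
With the strategies extra installed, schema objects can also synthesize data for property-based testing (item 7 in the feature list). A minimal sketch with an illustrative schema:

import pandera as pa

schema = pa.DataFrameSchema({
    "column1": pa.Column(int, checks=pa.Check.ge(0)),
    "column2": pa.Column(float, checks=pa.Check.lt(100.0)),
})

# generate a small dataframe that satisfies the schema
sample_df = schema.example(size=3)
print(schema.validate(sample_df))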

Quick Start

import pandas as pd
import pandera as pa


# data to validate
df = pd.DataFrame({
    "column1": [1, 4, 0, 10, 9],
    "column2": [-1.3, -1.4, -2.9, -10.1, -20.4],
    "column3": ["value_1", "value_2", "value_3", "value_2", "value_1"]
})

# define schema
schema = pa.DataFrameSchema({
    "column1": pa.Column(int, checks=pa.Check.le(10)),
    "column2": pa.Column(float, checks=pa.Check.lt(-1.2)),
    "column3": pa.Column(str, checks=[
        pa.Check.str_startswith("value_"),
        # define custom checks as functions that take a series as input and
        # output a boolean or boolean Series
        pa.Check(lambda s: s.str.split("_", expand=True).shape[1] == 2)
    ]),
})

validated_df = schema(df)
print(validated_df)

#     column1  column2  column3
#  0        1     -1.3  value_1
#  1        4     -1.4  value_2
#  2        0     -2.9  value_3
#  3       10    -10.1  value_2
#  4        9    -20.4  value_1
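
By default, validation raises a pa.errors.SchemaError as soon as a check fails. To collect all failures before raising (item 8 in the feature list), validate lazily. A minimal sketch reusing the schema defined above, with illustrative invalid values:

invalid_df = pd.DataFrame({
    "column1": [11, 4],               # 11 violates Check.le(10)
    "column2": [-1.3, 0.5],           # 0.5 violates Check.lt(-1.2)
    "column3": ["value_1", "wrong"],  # "wrong" violates str_startswith("value_")
})

try:
    schema.validate(invalid_df, lazy=True)
except pa.errors.SchemaErrors as exc:
    # failure_cases summarizes every failed check as a dataframe
    print(exc.failure_cases)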

DataFrame Model

pandera also provides an alternative API for expressing schemas inspired by dataclasses and pydantic. The equivalent DataFrameModel for the above DataFrameSchema would be:

from pandera.typing import Series

class Schema(pa.DataFrameModel):

    column1: int = pa.Field(le=10)
    column2: float = pa.Field(lt=-1.2)
    column3: str = pa.Field(str_startswith="value_")

    @pa.check("column3")
    def column_3_check(cls, series: Series[str]) -> bool:
        """Check that values have two elements after being split with '_'"""
        return series.str.split("_", expand=True).shape[1] == 2

Schema.validate(df)
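
The class-based API composes with the function decorators mentioned in item 5 of the feature list: annotate a function with DataFrame[Schema] and decorate it with pa.check_types to validate inputs and outputs at call time. A minimal sketch reusing Schema and df from above:

from pandera.typing import DataFrame

@pa.check_types
def transform(data: DataFrame[Schema]) -> DataFrame[Schema]:
    # the input and the returned dataframe are both validated against Schema
    return data.assign(column1=data["column1"].clip(upper=10))

transform(df)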

Development Installation

git clone https://github.com/pandera-dev/pandera.git
cd pandera
export PYTHON_VERSION=...  # specify desired python version
pip install -r dev/requirements-${PYTHON_VERSION}.txt
pip install -e .

Tests

pip install pytest
pytest tests

Contributing to pandera

All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.

A detailed overview of how to contribute can be found in the contributing guide on GitHub.

Issues

Submit feature requests and bug reports on the GitHub issues page.

Need Help?

There are many ways to get help with your questions. You can ask a question on the GitHub Discussions page or reach out to the maintainers and the pandera community on Discord.


How to Cite

If you use pandera in the context of academic or industry research, please consider citing the paper and/or software package.

Paper

@InProceedings{ niels_bantilan-proc-scipy-2020,
  author    = { {N}iels {B}antilan },
  title     = { pandera: {S}tatistical {D}ata {V}alidation of {P}andas {D}ataframes },
  booktitle = { {P}roceedings of the 19th {P}ython in {S}cience {C}onference },
  pages     = { 116 - 124 },
  year      = { 2020 },
  editor    = { {M}eghann {A}garwal and {C}hris {C}alloway and {D}illon {N}iederhut and {D}avid {S}hupe },
  doi       = { 10.25080/Majora-342d178e-010 }
}

Software Package

[DOI badge for the archived software package release]

License and Credits

pandera is licensed under the MIT license and is written and maintained by Niels Bantilan (niels@union.ai).


