
Easy pipelines for pandas DataFrames.

>>> import pandas as pd
>>> import pdpipe as pdp
>>> df = pd.DataFrame(
...     data=[[4, 165, 'USA'], [2, 180, 'UK'], [2, 170, 'Greece']],
...     index=['Dana', 'Jane', 'Nick'],
...     columns=['Medals', 'Height', 'Born']
... )
>>> pipeline = pdp.ColDrop('Medals').OneHotEncode('Born')
>>> pipeline(df)
      Height  Born_UK  Born_USA
Dana     165        0         1
Jane     180        1         0
Nick     170        0         0

1 Installation

Install pdpipe with:

pip install pdpipe

Some pipeline stages require scikit-learn or nltk. If one of these packages is not found on your system, the corresponding stages are simply not loaded and pdpipe issues a warning; to use them, you must also install the missing package.
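
For example, to make all optional stages available, install both optional dependencies alongside pdpipe:

pip install pdpipe scikit-learn nltk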

2 Features

  • A simple interface.

  • Informative prints and errors on pipeline application.

  • Chainable pipeline-stage constructor calls for easy one-liner pipelines.

  • Pipeline arithmetic.

  • Easier handling of mixed data (numeric, categorical and others).

  • Fully tested.

  • Compatible with Python 3.5+.

  • Pure Python.

2.1 Design Decisions

  • Extra-informative naming: Meant to make pipelines very readable, so that their entire flow can be understood from the names of their stages; e.g. ColDrop vs. ValDrop instead of an all-encompassing Drop stage emulating the pandas.DataFrame.drop method.

  • Data science-oriented naming (rather than statistics).

  • A functional approach: Pipelines never change input DataFrames. Nothing is done “in place”.

  • Opinionated operations: Help novices avoid mistakes by applying good practices by default; e.g., one-hot-encoding (creating dummy variables) a column will drop one of the resulting columns by default, to avoid the dummy variable trap (perfect multicollinearity); see the sketch after this list.

  • Machine learning-oriented: The target use case is transforming tabular data into a vectorized dataset on which a machine learning model will be trained; e.g., column transformations will drop the source columns to avoid strong linear dependence.
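
As a sketch of the one-hot-encoding default mentioned above (the drop_first parameter is assumed here to mirror the matching pandas.get_dummies parameter):

pdp.OneHotEncode("Born")                    # drops one resulting dummy column by default
pdp.OneHotEncode("Born", drop_first=False)  # assumed way to keep all dummy columns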

3 Basic Use

3.1 Pipeline Stages

3.1.1 Creating Pipeline Stages

You can create stages with the following syntax:

import pdpipe as pdp
drop_name = pdp.ColDrop("Name")

All pipeline stages have a predefined precondition function that returns True for dataframes to which the stage can be applied. By default, pipeline stages raise an exception if a DataFrame not meeting their precondition is piped through. This behaviour can be set per stage by passing a bool as the exraise constructor parameter. If exraise is set to False, the input DataFrame is instead returned without change:

drop_name = pdp.ColDrop("Name", exraise=False)

3.1.2 Applying Pipeline Stages

You can apply a pipeline stage to a DataFrame using its apply method:

res_df = pdp.ColDrop("Name").apply(df)

Pipeline stages are also callables, making the following syntax equivalent:

drop_name = pdp.ColDrop("Name")
res_df = drop_name(df)

The initialized exception behaviour of a pipeline stage can be overridden on a per-application basis:

drop_name = pdp.ColDrop("Name", exraise=False)
res_df = drop_name(df, exraise=True)

Additionally, to have an explanation message printed after the precondition is checked but before the application of the pipeline stage, pass verbose=True:

res_df = drop_name(df, verbose=True)

All pipeline stages also adhere to the scikit-learn transformer API, and so have fit_transform and transform methods; these behave exactly like apply, and accept the input dataframe as parameter X. For the same reason, pipeline stages also have a fit method, which applies them but returns the input dataframe unchanged.
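
For example, the following calls are all equivalent for a non-fittable stage (a minimal sketch, assuming df has a Name column):

stage = pdp.ColDrop("Name")
res1 = stage.apply(df)           # the native pdpipe way
res2 = stage.fit_transform(df)   # scikit-learn style; the dataframe is passed as X
res3 = stage.transform(df)       # equivalent for non-fittable stages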

3.1.3 Fittable Pipeline Stages

Some pipeline stages can be fitted, meaning that some transformation parameters are set the first time a dataframe is piped through the stage, while later applications of the stage use these now-set parameters without changing them; the Encode scikit-learn-dependent stage is a good example.

For these types of stages, the first call to apply will both fit the stage and transform the input dataframe, while subsequent calls to apply will transform input dataframes according to the already-fitted transformation parameters.

Additionally, for fittable stages the scikit-learn transformer API methods behave as expected:

  • fit sets the transformation parameters of the stage but returns the input dataframe unchanged.

  • fit_transform both sets the transformation parameters of the stage and returns the input dataframe after transformation.

  • transform transforms input dataframes according to already-fitted transformation parameters; if the stage is not fitted, an UnfittedPipelineStageError is raised.

Again, apply, fit_transform and transform are all equivalent for non-fittable pipeline stages. In all cases, the y parameter of these methods is ignored.
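
For example, a minimal sketch of the fit-then-transform flow, assuming train_df and test_df both have a Label column:

encode = pdp.Encode("Label")
train_res = encode.fit_transform(train_df)  # fits the encoding and transforms
test_res = encode.transform(test_df)        # reuses the already-fitted encoding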

3.2 Pipelines

3.2.1 Creating Pipelines

Pipelines can be created by supplying a list of pipeline stages:

pipeline = pdp.PdPipeline([pdp.ColDrop("Name"), pdp.OneHotEncode("Label")])

Additionally, the make_pdpipeline function can be used to supply stages as positional arguments:

pipeline = pdp.make_pdpipeline(pdp.ColDrop("Name"), pdp.OneHotEncode("Label"))

3.2.2 Printing Pipelines

A pipeline's structure can be clearly displayed by printing the object:

>>> drop_name = pdp.ColDrop("Name")
>>> binar_label = pdp.OneHotEncode("Label")
>>> map_job = pdp.MapColVals("Job", {"Part": True, "Full": True, "No": False})
>>> pipeline = pdp.PdPipeline([drop_name, binar_label, map_job])
>>> print(pipeline)
A pdpipe pipeline:
[ 0]  Drop column Name
[ 1]  OneHotEncode Label
[ 2]  Map values of column Job with {'Part': True, 'Full': True, 'No': False}.

3.2.3 Pipeline Arithmetic

Alternatively, you can create pipelines by adding pipeline stages together:

pipeline = pdp.ColDrop("Name") + pdp.OneHotEncode("Label")

Or even by adding pipelines together or pipelines to pipeline stages:

pipeline = pdp.ColDrop("Name") + pdp.OneHotEncode("Label")
pipeline += pdp.MapColVals("Job", {"Part": True, "Full": True, "No": False})
pipeline += pdp.PdPipeline([pdp.ColRename({"Job": "Employed"})])

3.2.4 Pipeline Chaining

Pipeline stages can also be chained to other stages to create pipelines:

pipeline = pdp.ColDrop("Name").OneHotEncode("Label").ValDrop([-1], "Children")

3.2.5 Pipeline Slicing

Pipelines are Python Sequence objects, and as such can be sliced using Python’s slicing notation, just like lists:

>>> import math
>>> pipeline = pdp.ColDrop("Name").OneHotEncode("Label").ValDrop([-1], "Children").ApplyByCols("height", math.ceil)
>>> pipeline[0]
Drop column Name
>>> pipeline[1:2]
A pdpipe pipeline:
[ 0] OneHotEncode Label

3.2.6 Applying Pipelines

Pipelines are pipeline stages themselves, and can be applied to a DataFrame using the same syntax, applying each of their constituent stages in order:

pipeline = pdp.ColDrop("Name") + pdp.OneHotEncode("Label")
res_df = pipeline(df)

Assigning a bool to the exraise parameter of a pipeline's apply call sets or unsets exception raising on failed preconditions for all contained stages:

pipeline = pdp.ColDrop("Name") + pdp.OneHotEncode("Label")
res_df = pipeline.apply(df, exraise=False)

Additionally, passing verbose=True to a pipeline apply call will apply all pipeline stages verbosely:

res_df = pipeline.apply(df, verbose=True)

Finally, fit, transform and fit_transform all call the corresponding pipeline-stage methods of all the stages composing the pipeline.
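
For example, a minimal sketch of fitting a pipeline on training data and reusing it on test data, assuming train_df and test_df share the Name and Label columns:

pipeline = pdp.make_pdpipeline(pdp.ColDrop("Name"), pdp.Encode("Label"))
train_res = pipeline.fit_transform(train_df)
test_res = pipeline.transform(test_df)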

4 Types of Pipeline Stages

All built-in stages are thoroughly documented, including examples; if you find any documentation lacking please open an issue. A list of briefly described available built-in stages follows:

4.1 Basic Stages

  • AdHocStage - Define custom pipeline stages on the fly.

  • ColDrop - Drop columns by name.

  • ValDrop - Drop rows by their value in specific or all columns.

  • ValKeep - Keep rows by their value in specific or all columns.

  • ColRename - Rename columns.

  • DropNa - Drop null values. Supports all parameters of the pandas dropna function.

  • FreqDrop - Drop rows by value frequency threshold on a specific column.

  • ColReorder - Reorder columns.
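
For example, a short sketch chaining a few of these stages, assuming df has Children and Job columns:

pipeline = pdp.PdPipeline([
    pdp.ValDrop([-1], "Children"),       # drop rows where Children == -1
    pdp.DropNa(),                        # drop rows containing null values
    pdp.ColRename({"Job": "Employed"}),  # rename the Job column
])
res_df = pipeline(df)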

4.2 Column Generation

  • Bin - Convert a continuous-valued column to categorical data using binning.

  • OneHotEncode - Convert a categorical column to several corresponding binary (dummy) columns.

  • MapColVals - Replace column values by a map.

  • ApplyToRows - Generate columns by applying a function to each row.

  • ApplyByCols - Generate columns by applying an element-wise function to columns.

  • ColByFrameFunc - Add a column by applying a dataframe-wide function.

  • AggByCols - Generate columns by applying a series-wise function to columns.

  • Log - Log-transform numeric data, optionally shifting the data beforehand.
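
For example, a sketch combining two of these stages, assuming df has Job and height columns:

import math

pipeline = pdp.PdPipeline([
    pdp.MapColVals("Job", {"Part": True, "Full": True, "No": False}),
    pdp.ApplyByCols("height", math.ceil),  # element-wise ceiling of height values
])
res_df = pipeline(df)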

4.3 Scikit-learn-dependent Stages

  • Encode - Encode a categorical column to corresponding number values.

  • Scale - Scale data with any of the sklearn scalers.
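
For example, a sketch assuming scikit-learn is installed, df has a categorical Label column, and Scale accepts the sklearn scaler name as a string:

pipeline = pdp.Encode("Label") + pdp.Scale("StandardScaler")
res_df = pipeline(df)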

4.4 nltk-dependent Stages

  • TokenizeWords - Tokenize sentences into lists of tokens by whitespace.

  • UntokenizeWords - Join token lists into whitespace-separated strings.

  • RemoveStopwords - Remove stopwords from a tokenized list.

  • SnowballStem - Stem tokens in a list using the Snowball stemmer.

  • DropRareTokens - Drop rare tokens from token lists.
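
For example, a minimal sketch assuming nltk is installed and df has a string column named text:

pipeline = pdp.TokenizeWords("text") + pdp.UntokenizeWords("text")
res_df = pipeline(df)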

5 Creating additional stages

5.1 Extending PdPipelineStage

To create stages other than the built-in ones (see Types of Pipeline Stages) you can extend the PdPipelineStage class. The constructor must pass the exmsg, appmsg and desc keyword arguments on to the PdPipelineStage constructor, setting the exception message, application message and description for the pipeline stage, respectively. Additionally, the _prec and _transform abstract methods must be implemented to define the precondition and the effect of the new pipeline stage, respectively.

Fittable custom pipeline stages should implement, in addition to the _transform method, the _fit_transform method, which should both fit the pipeline stage on the input dataframe and transform the dataframe, while also setting self.is_fitted = True.
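
For example, a minimal sketch of a fittable custom stage that fills nulls in a column with that column's mean; the stage name and its parameters are hypothetical, and the _prec/_transform/_fit_transform signatures are assumed to receive the dataframe (plus a verbose flag for the latter two):

import pdpipe as pdp

class MeanFill(pdp.PdPipelineStage):
    """Fill nulls in a column with the mean seen on first application."""

    def __init__(self, column, **kwargs):
        self._column = column
        self._mean = None
        super_kwargs = {
            "exmsg": "Column {} missing from input dataframe.".format(column),
            "appmsg": "Mean-filling column {}...".format(column),
            "desc": "Fill nulls in column {} with its mean.".format(column),
        }
        super_kwargs.update(**kwargs)
        super().__init__(**super_kwargs)

    def _prec(self, df):
        return self._column in df.columns

    def _fit_transform(self, df, verbose):
        self._mean = df[self._column].mean()
        self.is_fitted = True
        return self._transform(df, verbose)

    def _transform(self, df, verbose):
        res = df.copy()  # never change the input dataframe
        res[self._column] = res[self._column].fillna(self._mean)
        return res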

5.2 Ad-Hoc Pipeline Stages

To create a custom pipeline stage without creating a proper new class, you can instantiate the AdHocStage class which takes a function in its transform constructor parameter to define the stage’s operation, and the optional prec parameter to define a precondition (an always-true function is the default).
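
For example, a minimal sketch of an ad-hoc stage that drops the Name column:

drop_name = pdp.AdHocStage(
    transform=lambda df: df.drop(["Name"], axis=1),
    prec=lambda df: "Name" in df.columns,
)
res_df = drop_name.apply(df)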

6 Contributing

Package author and current maintainer is Shay Palachy (shay.palachy@gmail.com); you are more than welcome to approach him for help. Contributions are very welcome, especially since this package is very much in its infancy and many other pipeline stages can be added.

6.1 Installing for development

Clone:

git clone git@github.com:shaypal5/pdpipe.git

Install in development mode with test dependencies:

cd pdpipe
pip install -e ".[test]"

6.2 Running the tests

To run the tests, use:

python -m pytest --cov=pdpipe

6.3 Adding documentation

This project is documented using the numpy docstring conventions, which were chosen as they are perhaps the most widespread conventions that are both supported by common tools such as Sphinx and result in human-readable docstrings (in my personal opinion, of course). When documenting code you add to this project, please follow these conventions.

Additionally, if you update this README.rst file, use python setup.py checkdocs to validate that it compiles.

7 Credits

Created by Shay Palachy (shay.palachy@gmail.com).
