Interactive cleaning for pandas DataFrames

Project description

Jupyter notebook extension and Python library for interactive cleaning of pandas DataFrames with a selection of techniques, from simple replacement of missing values to imputation with a Restricted Boltzmann Machine.

Installation

pip install sherlockml-dataclean
jupyter nbextension enable dataclean --py --sys-prefix

Usage

Use your Jupyter notebook as normal. When a pandas DataFrame is present in your Python kernel, you will see a notification on the Data Cleaner icon in the toolbar. DataFrames whose names begin with an underscore are ignored.

Data Cleaner toolbar icon.
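
For instance, with the following (purely illustrative) DataFrames defined in a notebook cell, sales would appear in the Data Cleaner while _scratch would be ignored:

import numpy as np
import pandas as pd

# A regular DataFrame: this will show up in the Data Cleaner.
sales = pd.DataFrame({
    "region": ["north", "south", None, "east"],
    "revenue": [1200.0, np.nan, 950.0, 1800.0],
})

# Names beginning with an underscore are ignored by the extension.
_scratch = sales.copy()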

Clicking the icon opens a floating window summarising the DataFrames in your kernel. Clicking the name of one of these DataFrames shows the Data Cleaner controls and summary statistics for its columns.

Data Cleaner window.

Clicking the name of one of these columns shows data cleaning tools specific to that column, along with a histogram or bar chart of the distribution of its values. As you create a cleaning step, a preview shows the effect it will have on that distribution.

Creating a data cleaning step on a column.
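
As a rough illustration (not the extension's own code), a simple step such as replacing missing values in a numerical column corresponds to a pandas operation along these lines:

import pandas as pd

df = pd.DataFrame({"age": [34.0, None, 29.0, None, 41.0]})

# Replace missing values in a single column with the column median,
# one example of the kind of step the Data Cleaner can create.
df["age"] = df["age"].fillna(df["age"].median())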

You can also choose to fill in missing and mistyped values in your DataFrame with a Restricted Boltzmann Machine. This uses the sherlockml-boltzmannclean package.

Creating a Restricted Boltzmann Machine cleaning step.
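
The same imputation can also be run directly from Python. The sketch below is based on the clean function documented by the boltzmannclean package; the import name, DataFrame and column names are illustrative assumptions rather than part of this extension:

import numpy as np
import pandas as pd

import boltzmannclean  # assumed import name for the sherlockml-boltzmannclean package

# An illustrative DataFrame with some missing values.
people = pd.DataFrame({
    "height": [1.72, np.nan, 1.81, 1.65],
    "weight": [68.0, 84.0, np.nan, 59.0],
    "colour": ["red", "blue", None, "green"],
})

# Fill in the missing values with a Restricted Boltzmann Machine.
cleaned = boltzmannclean.clean(
    dataframe=people,
    numerical_columns=["height", "weight"],
    categorical_columns=["colour"],
    tune_rbm=True,
)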

Once you create your steps, they are added to a processing pipeline, which can be viewed in the “Pipeline” widget.

A data cleaning pipeline.

The steps can be modified or deleted from the Pipeline widget, and when you are ready the pipeline can be executed on the DataFrame or exported to code. Executing the pipeline creates a new DataFrame in your kernel with the suffix “_cleaned”, while exporting creates a new code cell in your notebook defining a Python function which carries out the pipeline's cleaning steps.

An exported pipeline.
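
The exported cell defines an ordinary Python function. A hypothetical export, with steps and column names invented for illustration, might look roughly like this; the exact code depends on the steps in your pipeline:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [34.0, None, 29.0, 41.0],
    "region": ["N", "S", "N", None],
})

def clean_dataframe(df):
    # Hypothetical exported pipeline: each step created in the Data
    # Cleaner becomes one transformation, applied in order.
    df = df.copy()
    df["age"] = df["age"].fillna(df["age"].median())
    df["region"] = df["region"].replace({"N": "north", "S": "south"})
    return df

df_cleaned = clean_dataframe(df)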

Caveats

Duplicated or non-string column names are not supported.

For DataFrames with more than 1000 rows, a sample of 1000 rows is used for previewing and building your processing pipeline; the whole DataFrame is only operated on when the pipeline is executed.
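
The sampling behaviour is equivalent in effect to something like the following pandas call (a sketch of the behaviour, not the extension's code):

import numpy as np
import pandas as pd

df = pd.DataFrame({"x": np.random.randn(5000)})

# Previews use at most 1000 sampled rows; the full DataFrame is only
# processed when the pipeline is executed.
preview = df.sample(n=1000) if len(df) > 1000 else df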
