Interactive cleaning for pandas DataFrames

Project description

Jupyter notebook extension and Python library for interactive cleaning of pandas DataFrames, with a selection of techniques ranging from simple replacement of missing values to imputation with a Restricted Boltzmann Machine.

Installation

pip install ipydataclean
jupyter nbextension enable dataclean --py --sys-prefix

Usage

Use your Jupyter notebook as normal. When a pandas DataFrame is present in your Python kernel, you should see a new notification on the Data Cleaner icon in your toolbar. DataFrames with names beginning with an underscore will be ignored.
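For example, after running a notebook cell like the one below (the variable names and data are purely illustrative), sales would be listed by the Data Cleaner while _scratch would be ignored:

import numpy as np
import pandas as pd

# This DataFrame will show up in the Data Cleaner window.
sales = pd.DataFrame({
    "region": ["north", "south", None, "east"],
    "revenue": [1200.0, np.nan, 950.0, 1100.0],
})

# Names beginning with an underscore are ignored by the extension.
_scratch = pd.DataFrame({"temp": [1, 2, 3]})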

Data Cleaner toolbar icon.

Clicking on the icon will open a floating window containing a summary of the DataFrames in your kernel. Clicking on the name of one of these DataFrames will show some of the Data Cleaner controls, along with summary statistics for the DataFrame's columns.

Data Cleaner window.

Clicking on the name of one of these columns will show data cleaning tools specific to that column, along with a histogram or bar chart showing the distribution of its values. As you create a cleaning step, a preview shows the effect it will have on that distribution.

Creating a data cleaning step on a column.
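For instance, a simple step that fills missing values in a numeric column corresponds roughly to plain pandas code like the following (a sketch only, reusing the illustrative sales DataFrame from above; the extension builds the equivalent operation for you):

# Rough pandas equivalent of a "fill missing values" step on one column
# (the column name and fill strategy are illustrative).
sales["revenue"] = sales["revenue"].fillna(sales["revenue"].median())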

You can also choose to fill in missing and mistyped values in your DataFrame with a Restricted Boltzmann Machine. This uses the boltzmannclean package.

Creating a Restricted Boltzmann Machine cleaning step.
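For reference, boltzmannclean can also be called on a DataFrame directly; below is a minimal sketch assuming its clean() helper and keyword arguments (check the boltzmannclean documentation for the exact interface):

import boltzmannclean

# Impute missing values with a Restricted Boltzmann Machine via the
# boltzmannclean package (column names are illustrative, and the
# interface is assumed from the boltzmannclean project).
sales_imputed = boltzmannclean.clean(
    dataframe=sales,
    numerical_columns=["revenue"],
    categorical_columns=["region"],
    tune_rbm=True,  # tune the RBM hyperparameters on this data
)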

Once you create your steps, they are added to a processing pipeline, which can be viewed in the “Pipeline” widget.

A data cleaning pipeline.

These steps can be modified or deleted using the pipeline controls, and when ready the pipeline can be executed on the DataFrame or exported to code. Executing your pipeline will create a new DataFrame with the suffix “_cleaned” in your kernel, while exporting will create a new code cell in your notebook defining a Python function which carries out the pipeline's cleaning steps.

An exported pipeline.
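As an illustration of the kind of code the export produces (the function name and steps below are hypothetical, not the exact generated output), an exported pipeline is a plain Python function that applies each step in order and returns the cleaned DataFrame:

def clean_sales(df):
    """Apply the recorded cleaning steps and return a new DataFrame."""
    df = df.copy()
    # Step 1: fill missing revenue values with the column median.
    df["revenue"] = df["revenue"].fillna(df["revenue"].median())
    # Step 2: fill missing region labels with a placeholder category.
    df["region"] = df["region"].fillna("unknown")
    return df

sales_cleaned = clean_sales(sales)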

Caveats

Duplicated or non-string column names are not supported.

For DataFrames with more than 1000 rows, a sample of 1000 rows will be used for previewing and creating your processing pipeline; the full DataFrame is only operated on when the pipeline is executed.
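In pandas terms, the previewing behaviour amounts to something like the following (a sketch of the sampling only, not the extension's actual code):

# Previews are built from at most 1000 rows; the full DataFrame is
# only processed when the pipeline itself is executed.
preview_df = sales if len(sales) <= 1000 else sales.sample(n=1000)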

Download files

Download the file for your platform.

Source Distribution

ipydataclean-0.2.2.tar.gz (42.5 kB)

File details

Details for the file ipydataclean-0.2.2.tar.gz.

File metadata

  • Download URL: ipydataclean-0.2.2.tar.gz
  • Upload date:
  • Size: 42.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/42.0.1.post20191125 requests-toolbelt/0.9.1 tqdm/4.39.0 CPython/3.6.9

File hashes

Hashes for ipydataclean-0.2.2.tar.gz:

  • SHA256: 2f2b350b954fb6dbbe4356fc2b2496f917313e899937cd1e6436853ee9ebea1d
  • MD5: 258eb0686593a5d8ee1000e3d87e1dba
  • BLAKE2b-256: d9e4f8d904e5db061d6ca1d67b12d721bbb1a2a0e88b2279b233ea4b88c42447
