
ML-Impute

A Python package for synthetic data generation using single and multiple imputation.

ML-Impute is a library for generating synthetic data for null-value imputation, notably with the ability to handle mixed datatypes. The package is based on the research of Audigier, Husson, and Josse and their iterative factor analysis method for imputing missing data.
The goal of this package is (a) to provide, for the first time, an open-source Python implementation of this method, and (b) to provide an efficient parallelization of the algorithm when extending it to both single and multiple imputation.
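
For intuition, the core idea behind this family of methods can be sketched as an iterative low-rank (SVD) completion loop: fill the missing cells with a starting guess, reconstruct the matrix from its leading components, overwrite only the missing cells with the reconstruction, and repeat until the change falls below a tolerance. The sketch below is illustrative only, assumes a purely numeric matrix, and is not the ml-impute implementation (which also handles encoding of mixed datatypes):

import numpy as np

def iterative_svd_impute(X, explained_var=0.95, max_iter=1000, tol=1e-4):
    # Illustrative sketch only, not the ml-impute source; numeric data assumed.
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)                                  # remember where the holes are
    filled = np.where(mask, np.nanmean(X, axis=0), X)   # start from column means
    prev = filled.copy()
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        # keep just enough singular values to reach the target explained variance
        k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), explained_var)) + 1
        approx = (U[:, :k] * s[:k]) @ Vt[:k, :]
        filled[mask] = approx[mask]                      # only overwrite the missing cells
        # stop when the relative Frobenius-norm change drops below tol
        if np.linalg.norm(filled - prev) / max(np.linalg.norm(prev), 1e-12) < tol:
            break
        prev = filled.copy()
    return filled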

Note: I am currently a university student and may not be able to release updates and changes as quickly as other packages do. In the spirit of open-source software, please feel free to open a pull request or a new issue if you have bug fixes or improvements. Thank you for your understanding and for your contributions.


Table of Contents

Installation
Usage
License

Installation

ML-Impute is currently available on TestPyPI.

Unix/macOS

pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ ml-impute

Windows

py -m pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ ml-impute

Usage

Currently, ML-Impute can handle both single and multiple imputation.

We will demonstrate both methods using the titanic example dataset, fetched from OpenML via sklearn.datasets. The following subsections provide an overview of each method along with its usage.

To use the package after installing it via pip, instantiate a Generator object as follows:

from mpute import generator

gen = generator.Generator()

Generator.generate(self, dataframe, encode_cols, exclude_cols, max_iter, tol, explained_var, method, n_versions, noise)

Parameters

dataframe (required): Pandas DataFrame object.

encode_cols (optional, default=[]): Categorical columns to be encoded. By default, ml-impute encodes all columns with object or category dtypes; however, many datasets contain numeric categorical data (e.g. Likert scales, classification types) that should also be encoded.

exclude_cols (optional, default=[]): Categorical columns to be excluded from encoding and/or imputation. On occasion, datasets contain unique non-ordinal data (such as unique IDs) that, if encoded, would greatly increase memory usage and runtime. These columns should be excluded.

max_iter (optional, default=1000): The maximum number of imputation iterations before exit.

tol (optional, default=1e-4): Tolerance bound for convergence. If the relative Frobenius-norm error drops below tol before max_iter is reached, the algorithm exits.

explained_var (optional, default=0.95): Proportion of the total variance kept when reconstructing the dataframe after Singular Value Decomposition.

method (optional, default="single"): Whether to use the single or multiple imputation method. Possible values: ["single", "multiple"].

n_versions (optional, default=20): If performing multiple imputation, the number of generated dataframes. If performing single imputation, n_versions=1.

noise (optional, default="gaussian"): If performing multiple imputation, the type of noise added to each generated dataset to create variation. Gaussian noise is centered at 0 with a standard deviation of 0.1. If performing single imputation, noise=None.

Return value by method

"single": imputed_df, a copy of the dataframe argument with synthetic data imputed for all null values.

"multiple": df_dict, a dictionary containing each of the n_versions generated datasets with varying synthetic data. Keys: integers in [0, n_versions). Values: dataframes.
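
As an illustration of the parameters above, a fully spelled-out call might look like the following. Here dataframe is a placeholder for a pandas DataFrame you have already loaded, and the column names are only examples; every argument except dataframe is optional:

imputed_dfs = gen.generate(
    dataframe,
    encode_cols=['pclass'],      # numeric column that is really categorical
    exclude_cols=['name'],       # unique identifiers: skip encoding and imputation
    max_iter=1000,
    tol=1e-4,
    explained_var=0.95,
    method="multiple",
    n_versions=20,
    noise="gaussian",
)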

For the following single and multiple imputation examples, we build the titanic dataset as follows:

from sklearn import datasets

titanic, target = datasets.fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)
titanic['survived'] = target

Single Imputation

Single imputation works with the following line:

imputed_df = gen.generate(titanic, exclude_cols=['name'])

Note: 'name' is excluded because it is a non-sparse column of unique identifiers; it is unnecessary for imputation and, if encoded, would significantly increase memory usage.
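
As a quick sanity check (assuming, per the return-value description above, that imputed_df is a pandas DataFrame), you can confirm that no nulls remain outside the excluded columns:

print(imputed_df.isna().sum())   # excluded columns such as 'name' may still contain NaN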

Multiple Imputation

Multiple imputation is as simple as the following:

imputed_dfs = gen.generate(titanic, exclude_cols=['name'], method="multiple")
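
Because the return value for multiple imputation is a dictionary keyed by integers in [0, n_versions), each generated version can be examined in a loop. For example, comparing the mean of the (originally sparse) 'age' column across versions:

for version, df in imputed_dfs.items():
    print(version, df['age'].mean())   # the imputed 'age' values vary slightly between versions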

License

ML-Impute is published under the MIT License. Please see the LICENSE file for more information.
