MLimputer - Null Imputation Framework for Supervised Machine Learning

Framework Contextualization

The MLimputer project constitutes a complete and integrated pipeline for automating the handling of missing values in datasets through regression prediction, and it aims to reduce bias and increase the precision of imputation results compared to more classic imputation methods. This package provides multiple algorithm options to impute your data (shown below): every observed data column with existing missing values is fitted with a robust preprocessing approach and its missing entries are subsequently predicted.

The architecture design includes three main sections, organized in a pipeline structure: missing data analysis, data preprocessing, and predictive model imputation.
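The regression-based idea behind this pipeline can be illustrated with a minimal sketch (a simplified, hypothetical illustration built on scikit-learn, not MLimputer's internal implementation): for each column containing missing values, a regressor is fitted on the rows where that column is observed, using the remaining columns as features, and the fitted model then predicts the missing entries.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Illustrative sketch only - not MLimputer's internal code
def naive_regression_impute(df, target_col):
    # Impute missing values of target_col by regressing it on the other columns
    df = df.copy()
    features = [c for c in df.columns if c != target_col]
    X = df[features].fillna(df[features].mean())  # simple feature filling for this sketch
    observed = df[target_col].notna()
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[observed], df.loc[observed, target_col])
    df.loc[~observed, target_col] = model.predict(X[~observed])
    return df

# Toy numeric example
toy = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0, 5.0],
                    "b": [2.0, np.nan, 6.0, 8.0, np.nan]})
print(naive_regression_impute(toy, target_col="b"))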

This project aims at providing the following application capabilities:

  • General applicability on tabular datasets: The developed imputation procedures are applicable to any tabular dataset used in a supervised ML scope, whenever it contains missing data columns to be imputed.

  • Robustness and improvement of predictive results: The application of the MLimputer preprocessing aims to improve predictive performance through optimized imputation of the missing values in the input columns of the dataset.

Main Development Tools

Major frameworks used to build this project:

Where to get it

A binary installer for the latest released version is available at the Python Package Index (PyPI).

Installation

To install this package from the PyPI repository, run the following command:

pip install mlimputer
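
After installation, a quick import check confirms the package is available (imputer_parameters is the helper used later in the usage examples to inspect the default model configurations):

import mlimputer as mli

# Print the default imputer parameter settings to verify the installation
print(mli.imputer_parameters())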

Usage Examples

The first step after importing the package is to load a dataset, split it, and define your chosen imputation model in the fit_imput function. The imputation model options for handling the missing data in your dataset are the following:

  • RandomForest
  • ExtraTrees
  • GBR
  • KNN
  • XGBoost
  • Lightgbm
  • Catboost

After fitting your imputation model, you can pass the fitted imputer to the fit_configs parameter of the transform_imput function. From there you can impute future datasets (validation, test, ...) with the same data properties. Note that, as shown in the example below, you can also customize your imputation models by changing their parameter configurations and then passing them through the imputer_configs function parameter.

Through the cross_validation function you can also compare the predictive performance of multiple imputations, allowing you to validate which imputation model best fits your future predictions.

Important Notes:

  • The current version of this package does not impute categorical values; only the automatic handling of numeric missing values is implemented.
import mlimputer as mli
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings("ignore", category=Warning) #-> For a clean console

data = pd.read_csv('csv_directory_path') # Dataframe Loading Example

train, test = train_test_split(data, train_size=0.8)
train, test = train.reset_index(drop=True), test.reset_index(drop=True) # <- Required

# All model imputation options ->  "RandomForest","ExtraTrees","GBR","KNN","XGBoost","Lightgbm","Catboost"

# Model Imputer Customization
parameters_=mli.imputer_parameters()

# Customizing parameters settings
parameters_["RandomForest"]["n_estimators"]=40
parameters_["KNN"]["n_neighbors"]=5
print(parameters_)
    
# Imputation Example 1 : RandomForest

imputer_rf=mli.fit_imput(Dataset=train,imput_model="RandomForest",imputer_configs=parameters_)
train_rf=mli.transform_imput(Dataset=train,fit_configs=imputer_rf)
test_rf=mli.transform_imput(Dataset=test,fit_configs=imputer_rf)

# Imputation Example 2 : KNN

imputer_knn=mli.fit_imput(Dataset=train,imput_model="KNN",imputer_configs=parameters_)
train_knn=mli.transform_imput(Dataset=train,fit_configs=imputer_knn)
test_knn=mli.transform_imput(Dataset=test,fit_configs=imputer_knn)
    
#(...)
    
## Performance Evaluation Example - Imputation CrossValidation

from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from catboost import CatBoostRegressor
        
leaderboard_knn_imp=mli.cross_validation(Dataset=train_knn,
                                         target="Target_Name_Col", 
                                         test_size=0.2,
                                         n_splits=3,
                                         models=[LinearRegression(), RandomForestRegressor(), CatBoostRegressor()])

## Export Imputation Metadata

# KNN Imputation Metadata
import pickle 
with open("imputer_knn.pkl", 'wb') as output:
    pickle.dump(imputer_knn, output)
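
# To reuse a previously fitted imputer, the pickled object can be loaded back
# and applied to new data through transform_imput (sketch; "new_data.csv" is a
# hypothetical file with the same columns as the training data).
with open("imputer_knn.pkl", 'rb') as input_file:
    imputer_knn_loaded = pickle.load(input_file)

new_data = pd.read_csv("new_data.csv")
new_data_imputed = mli.transform_imput(Dataset=new_data, fit_configs=imputer_knn_loaded)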

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Luis Santos - LinkedIn

Feel free to contact me and share your feedback.
