Atlantic - Automated Data Preprocessing Framework for Supervised Machine Learning

Framework Contextualization

The Atlantic project constitutes a comprehensive and objective approach to simplifying and automating data processing through the integrated, objectively validated application of various preprocessing mechanisms, ranging from feature engineering and automated feature selection to multiple encoding versions and null imputation methods. The framework's optimization methodology follows an evaluation structured around tree-based model ensembles.

This project aims to provide the following application capabilities:

  • General applicability on tabular datasets: The developed preprocessing procedures are applicable across multiple domains associated with Supervised Machine Learning, regardless of the properties or specifications of the data.

  • Automated treatment of tabular data associated with predictive analysis: It implements global, carefully validated data processing based on the characteristics of the input columns.

  • Robustness and improvement of predictive results: The Atlantic automated data preprocessing pipeline aims to improve predictive performance through processing methods chosen according to the data's properties.

Main Development Tools

Major frameworks used to build this project:

Framework Architecture

Where to get it

Binary installers for the latest released version are available at the Python Package Index (PyPI).

The source code is currently hosted on GitHub at: https://github.com/TsLu1s/Atlantic

Installation

To install this package from the PyPI repository, run the following command:

pip install atlantic
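
Alternatively, assuming the repository's default branch is in an installable state, the development version can be installed directly from GitHub:

pip install git+https://github.com/TsLu1s/Atlantic.git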

Usage Examples

1. Atlantic - Automated Data Preprocessing Pipeline

To apply the automated Atlantic preprocessing pipeline, you first need to import the package. The next step is to load a dataset, split it, and assign the name of the column to be predicted to the variable target. You can customize the fit_processing method by altering the following pipeline parameters:

  • split_ratio: Division ratio (Train/Validation) in which the preprocessing methods will be evaluated within the loaded dataset.
  • relevance: Minimal value of the total sum of relative feature importance percentage selected in the H2O AutoML feature selection step.
  • h2o_fs_models: Number of models generated for competition in the H2O AutoML feature selection step to evaluate the relative importance of each feature (only the leaderboard model is selected for evaluation).
  • encoding_fs: Whether to encode categorical features in order to reduce loading time in the H2O AutoML feature selection step.
  • vif_ratio: Minimal threshold for Variance Inflation Factor filtering (default value=10).

Once the data fitting process is complete, you can automatically apply the optimized preprocessing transformations to all future dataframes with the same properties using the data_processing method.

Important Notes:

  • The default predictive evaluation metric is Mean Absolute Error for regression contexts and Accuracy for classification contexts.
  • Although functional, Atlantic data processing is not optimized for big data purposes yet.
import pandas as pd
from sklearn.model_selection import train_test_split
from atlantic.pipeline import Atlantic
import warnings
warnings.filterwarnings("ignore", category=Warning) # -> For a clean console
    
data = pd.read_csv('csv_directory_path', encoding='latin', delimiter=',') # Dataframe Loading Example

train,test = train_test_split(data, train_size = 0.8)
test,future_data = train_test_split(test, train_size = 0.6)

# Resetting Index is Required
train = train.reset_index(drop=True)
test = test.reset_index(drop=True)
future_data = future_data.reset_index(drop=True)

future_data.drop(columns=["Target_Column"], inplace=True) # Drop Target

### Fit Data Processing

atl = Atlantic(X = train,                # X:pd.DataFrame, target:str="Target_Column"
               target = "Target_Column")    

atl.fit_processing(split_ratio = 0.75,   # split_ratio:float=0.75, relevance:float=0.99 [0.5,1]
                   relevance = 0.99,     # h2o_fs_models:int [1,100], encoding_fs:bool=True/False
                   h2o_fs_models = 7,    # vif_ratio:float=10.0 [3,30]
                   encoding_fs = True,
                   vif_ratio = 10.0)

### Transform Data Processing

train = atl.data_processing(X = train)
test = atl.data_processing(X = test)
future_data = atl.data_processing(X = future_data)

### Export Atlantic Preprocessing Metadata

import dill as pickle
with open('atl.pkl', 'wb') as output:
    pickle.dump(atl, output)
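
### Load Atlantic Preprocessing Metadata

# The exported object can later be reloaded to reapply the same fitted
# transformations (a minimal sketch; new_df is a hypothetical raw dataframe
# with the same schema as the training data):
with open('atl.pkl', 'rb') as input_file:
    atl = pickle.load(input_file) # Restore the fitted pipeline

new_df = pd.read_csv('csv_directory_path', encoding='latin', delimiter=',')
new_df = atl.data_processing(X = new_df) # Apply the fitted transformations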
    

2. Atlantic - Preprocessing Data

2.1 Encoding Versions

There are multiple preprocessing methods available for direct use. This package provides upgraded LabelEncoder, OneHotEncoder and InverseFrequency (IDF based) encoding methods with automatic multicolumn application.

import pandas as pd
from sklearn.model_selection import train_test_split 
from atlantic.processing.encoders import AutoLabelEncoder, AutoIFrequencyEncoder, AutoOneHotEncoder

train,test = train_test_split(data, train_size=0.8)
train,test = train.reset_index(drop=True), test.reset_index(drop=True) # Required

target = "Target_Column" # -> target feature name
    
cat_cols = [col for col in data.select_dtypes(include=['object']).columns if col != target]

### Encoders (create only one - each assignment below overrides the previous)
## Create Label Encoder
encoder = AutoLabelEncoder()
## Create InverseFrequency Encoder
encoder = AutoIFrequencyEncoder()
## Create One-hot Encoder
encoder = AutoOneHotEncoder()

## Fit
encoder.fit(train[cat_cols])

# Transform the DataFrame using Label/IF/One-hot Encoding
train = encoder.transform(X = train)
test = encoder.transform(X = test)

# Perform an inverse transform to convert the encoded columns back to their original categorical values
train = encoder.inverse_transform(X = train)
test = encoder.inverse_transform(X = test)
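
Like the full pipeline object, a fitted encoder can be persisted and reloaded with dill, mirroring the metadata export shown above (a sketch; 'encoder.pkl' is an arbitrary file name):

import dill as pickle
with open('encoder.pkl', 'wb') as output:
    pickle.dump(encoder, output) # Persist the fitted encoder
with open('encoder.pkl', 'rb') as input_file:
    encoder = pickle.load(input_file) # Reload it for later transforms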
            

2.2 Feature Selection Methods

You can filter the most valuable features from your dataset via these two feature selection methods:

  • H2O AutoML Feature Selection - This method is based on variable importance evaluation and calculation for tree-based models in H2O's AutoML, and it can be customized via the following parameters:

    • relevance: Minimal value of the total sum of relative variable/feature importance percentage selected.
    • h2o_fs_models: Number of models generated for competition to evaluate the relative importance of each feature (only the leaderboard model is selected for evaluation).
    • encoding_fs: Whether to encode your features in order to reduce loading time. If set to True, label encoding is applied to categorical features.
  • VIF Feature Selection (Variance Inflation Factor) - The variance inflation factor measures the amount of multicollinearity in a set of multiple regression variables or features; therefore, for this filtering method to be applied, all input variables must be numeric. It can be customized by changing the column filtering threshold vif_threshold, which has a default value of 10 (a reference computation of the VIF itself is sketched after the usage example below).

from atlantic.feature_selection.selector import Selector

fs = Selector(X = train, target = "Target_Column")

cols_vif = fs.feature_selection_vif(vif_threshold = 10.0)   # X: Only numerical values allowed & No nans allowed in VIF

selected_cols, selected_importance = fs.feature_selection_h2o(relevance = 0.99,     # relevance:float [0.5,1], h2o_fs_models:int [1,100]
                                                              h2o_fs_models = 7,    # encoding_fs:bool=True/False
                                                              encoding_fs = True)
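
For reference, the VIF of a feature i is conventionally defined as VIF_i = 1 / (1 - R_i^2), where R_i^2 is the coefficient of determination obtained by regressing feature i on all remaining features. The sketch below shows this conventional computation using the external statsmodels package; it is illustrative only and not part of the Atlantic API:

import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Conventional VIF computation for reference (requires numeric, null-free data;
# statsmodels is an external dependency, not part of Atlantic)
X_num = add_constant(train.select_dtypes(include='number').dropna())
vif = pd.Series([variance_inflation_factor(X_num.values, i)
                 for i in range(X_num.shape[1])],
                index=X_num.columns).drop('const')
print(vif.sort_values(ascending=False))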

2.3 Auxiliary Null Imputation Methods

Simplified and automated multivariate null imputation methods based on Scikit-learn are also provided, as follows:

## Simplified Null Imputation (Only numeric features)
from atlantic.imputers.imputation import AutoSimpleImputer, AutoKNNImputer, AutoIterativeImputer

# Example usage of AutoSimpleImputer
simple_imputer = AutoSimpleImputer(strategy = 'mean')
simple_imputer.fit(train)  # Fit on the Train DataFrame
df_imputed = simple_imputer.transform(train.copy())  # Transform the Train DataFrame
df_imputed_test = simple_imputer.transform(test.copy()) # Transform the Test DataFrame

# Example usage of AutoKNNImputer
knn_imputer = AutoKNNImputer(n_neighbors = 3,
                             weights = "uniform")
knn_imputer.fit(train)  # Fit on the Train DataFrame
df_imputed = knn_imputer.transform(train.copy())  # Transform the Train DataFrame
df_imputed_test = knn_imputer.transform(test.copy()) # Transform the Test DataFrame

# Example usage of AutoIterativeImputer
iterative_imputer = AutoIterativeImputer(max_iter = 10, 
                                         random_state = 0, 
                                         initial_strategy = "mean", 
                                         imputation_order = "ascending")
iterative_imputer.fit(train)  # Fit on the Train DataFrame
df_imputed = iterative_imputer.transform(train.copy())  # Transform the Train DataFrame
df_imputed_test = iterative_imputer.transform(test.copy()) # Transform the Test DataFrame

Citation

Feel free to cite Atlantic as follows:


@article{SANTOS2023100532,
  author = {Luis Santos and Luis Ferreira},
  title = {Atlantic - Automated data preprocessing framework for supervised machine learning},
  journal = {Software Impacts},
  volume = {17},
  year = {2023},
  issn = {2665-9638},
  doi = {10.1016/j.simpa.2023.100532},
  url = {https://www.sciencedirect.com/science/article/pii/S2665963823000696}
}

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Luis Santos - LinkedIn
