
MultiTrain allows you to train multiple machine learning algorithms on a dataset all at once to determine which one works best for that particular use case

Project description


MultiTrain

MultiTrain is a Python module for machine learning, built with the aim of helping you find the machine learning model that works best on a particular dataset.

REQUIREMENTS

MultiTrain requires:

INSTALLATION

Install MultiTrain using:

pip install MultiTrain
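
To verify the installation, try importing the classifier shown in the usage section below:

python -c "from MultiTrain import MultiClassifier"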

USAGE

CLASSIFICATION

MultiClassifier

The MultiClassifier is a combination of many classifier estimators. Each estimator is fitted on the training data, and assessment metrics such as accuracy, balanced accuracy, r2 score, f1 score, precision, recall, and roc auc score are returned for each of the models.

# This is a code snippet showing how to import the MultiClassifier and the parameters contained in an instance

from MultiTrain import MultiClassifier
train = MultiClassifier(cores=-1,  # works exactly like setting n_jobs to -1; uses all CPU cores to make training faster
                        random_state=42,  # setting the random state here automatically sets a unified random state across function imports
                        verbose=True,  # set this to True to display the name of the estimator currently being fitted
                        target_class='binary',  # recommended: set this to 'binary' or 'multiclass' so the library can adjust to the type of classification problem
                        imbalanced=True,  # set this to True if you are working with an imbalanced dataset
                        sampling='SMOTE',  # set this to any over_sampling, under_sampling or over_under_sampling method if imbalanced is True
                        strategy='auto'  # not all samplers use this parameter; it is named sampling_strategy for the samplers that support it,
                                         # so read the imbalanced-learn documentation before using this parameter
                        )

Continuing from the code snippet above: if you're unsure which sampling techniques are available after setting imbalanced to True, the code snippet below returns a list of all available sampling techniques.

from MultiTrain import MultiClassifier
train = MultiClassifier()
print(train.strategies())  # returns all the under_sampling, over_sampling and over_under_sampling methods available for use

Classifier Model Names

To return a list of all models available for training:

from MultiTrain import MultiClassifier
train = MultiClassifier()
print(train.classifier_model_names())

Split

This function works just like scikit-learn's train_test_split function, but with some extra features. The code below demonstrates the split method.

import pandas as pd
from MultiTrain import MultiClassifier

train = MultiClassifier()
df = pd.read_csv("nameofFile.csv")

features = df.drop("nameOflabelcolum", axis = 1)
labels = df["nameOflabelcolum"]

split = train.split(X=features, 
                    y=labels, 
                    sizeOfTest=0.3, 
                    randomState=42)
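
The split method returns the train and test sets. A minimal sketch of unpacking them, assuming the values come back in scikit-learn's usual (X_train, X_test, y_train, y_test) order, which should be confirmed against the docs:

# Assumption: split returns four values in scikit-learn's usual order
X_train, X_test, y_train, y_test = split
print(X_train.shape, X_test.shape)  # quick sanity check of the split sizes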

If you want to run Principal Component Analysis on your dataset to reduce its dimensionality, you can achieve this with the split function. See the code excerpt below.

import pandas as pd
from MultiTrain import MultiClassifier #import the module

train = MultiClassifier()
df = pd.read_csv('NameOfFile.csv')

features = df.drop("nameOfLabelColumn", axis=1)
labels = df['nameOfLabelColumn']
pretend_columns = ['columnA', 'columnB', 'columnC']

# Note that the split function must be assigned to a variable, as it returns values.
split = train.split(X=features,  # the features of the dataset
                    y=labels,  # the labels of the dataset
                    sizeOfTest=0.2,  # same as the test_size parameter in train_test_split
                    randomState=42,  # initializes the value of the random state parameter
                    dimensionality_reduction=True,  # setting this to True performs PCA on both X_train and X_test automatically after splitting
                    normalize='StandardScaler',  # when using dimensionality_reduction, set this to one of StandardScaler, MinMaxScaler or RobustScaler if the feature columns aren't scaled before the split
                    n_components=2,  # when using dimensionality_reduction, this must be set to define the number of components to keep
                    columns_to_scale=pretend_columns  # a list of the columns in your dataset that you wish to scale
                    )
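
For context, the scaling-plus-PCA step above corresponds roughly to the plain scikit-learn code below. This is a conceptual sketch of the transformation, not MultiTrain's internal implementation, and a production workflow would fit the scaler and PCA on the training split only:

from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# scale the chosen columns, then project the feature matrix onto 2 principal components
features[pretend_columns] = StandardScaler().fit_transform(features[pretend_columns])
reduced = PCA(n_components=2).fit_transform(features)  # shape: (n_samples, 2)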

Fit

Now that the dataset has been split with the split method, it's time to train on it with the fit method. Instead of the single-model training you'd do in scikit-learn, catboost, or xgboost, this fit method brings together almost all available machine learning algorithms and trains them all on the dataset. It then returns a pandas dataframe with information such as which algorithm is overfitting and which algorithm has the greatest accuracy. A basic code example for using the fit method is shown below.

import pandas as pd
from MultiTrain import MultiClassifier

train = MultiClassifier()
df = pd.read_csv('file.csv')

features = df.drop("nameOflabelcolumn", axis = 1)
labels = df["nameOflabelcolumn"]

split = train.split(X=features, 
                    y=labels, 
                    sizeOfTest=0.3, 
                    randomState=42,
                    strat=True,
                    shuffle_data=True)

fit = train.fit(X=features,
                y=labels,
                splitting=True,
                split_data=split)
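
Because fit returns a pandas dataframe, the usual pandas tools apply. For example, assuming the results include an 'Accuracy' column (column names may vary by version, so inspect fit.columns first), you can sort the models by score:

print(fit.columns)  # inspect the available metric columns
best_first = fit.sort_values(by='Accuracy', ascending=False)  # 'Accuracy' is an assumed column name
print(best_first.head())  # best-performing models first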

Now let's look at the various ways the fit method can be used.

If you used the traditional train_test_split method available in scikit-learn

import pandas as pd
from sklearn.model_selection import train_test_split
from MultiTrain import MultiClassifier
train = MultiClassifier()

df = pd.read_csv('filename.csv')

features = df.drop('labelName', axis=1)
labels = df['labelName']

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2, random_state=42)
fit = train.fit(X_train=X_train,
                X_test=X_test,
                y_train=y_train,
                y_test=y_test,
                split_self=True,  # always set this to True if you used the traditional train_test_split
                show_train_score=True,  # set this to True to see the train equivalent of all the metrics shown on the dataframe
                return_best_model=True,  # setting this to True returns a dataframe containing only the best performing model
                excel=True  # when this is set to True, a spreadsheet report of the training is stored in your current working directory
                )

If you used the split method provided by the MultiClassifier

import pandas as pd
from MultiTrain import MultiClassifier

train = MultiClassifier()
df = pd.read_csv('filename.csv')

features = df.drop('labelName', axis=1)
labels = df['labelName']

split = train.split(X=features,
                    y=labels,
                    sizeOfTest=0.2,
                    randomState=42,
                    shuffle_data=True)

fit = train.fit(splitting=True,
                split_data=split,
                show_train_score=True,
                excel=True)     

If you want to train on your dataset with KFold

import pandas as pd
from MultiTrain import MultiClassifier

train = MultiClassifier()
df = pd.read_csv('filename.csv')

features = df.drop('labelName', axis=1)
labels = df['labelName']

fit = train.fit(X=features,
                y=labels,
                kf=True,  # set this to True to train on your dataset with KFold
                fold=5,  # adjust this to use any number of folds; higher numbers lead to longer training times
                show_train_score=True,
                excel=True)     
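
For reference, fold=5 means the data is split into five parts, and each model is trained five times with a different part held out for evaluation. A minimal illustration of the idea with scikit-learn's KFold (a conceptual sketch, not MultiTrain's internals):

from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for i, (train_idx, test_idx) in enumerate(kf.split(features)):
    # each iteration trains on 4/5 of the rows and evaluates on the remaining 1/5
    print(f"fold {i}: {len(train_idx)} train rows, {len(test_idx)} test rows")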

After training on your dataset, you'll naturally want to use the best algorithm based on a specific metric, and a method is provided to do this easily. Using the code snippet above, after training, to use the best algorithm based on its name:
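
A sketch of what that call might look like, assuming a use_best_model method that takes the results dataframe and a model name (both the method name and its parameters are assumptions, so verify them against the MultiTrain documentation):

# Assumption: use_best_model and its parameters are hypothetical; check the library docs
model = train.use_best_model(df=fit, model='LogisticRegression')  # select the fitted model by name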


Or, if you want to automatically select the best algorithm based on a particular metric of your choice:
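
A sketch under the same assumption, this time selecting by metric:

# Assumption: 'best' picks the top model by the named metric; the exact parameter may differ
model = train.use_best_model(df=fit, best='Balanced Accuracy')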


REGRESSION
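
MultiTrain also covers regression. Assuming the library exposes a MultiRegressor class that mirrors the MultiClassifier interface (verify the class name and parameters in the project documentation), a minimal sketch would be:

import pandas as pd
from MultiTrain import MultiRegressor  # assumption: regression counterpart of MultiClassifier

train = MultiRegressor(cores=-1, random_state=42, verbose=True)
df = pd.read_csv('filename.csv')

features = df.drop('labelName', axis=1)
labels = df['labelName']

split = train.split(X=features, y=labels, sizeOfTest=0.2, randomState=42)
fit = train.fit(splitting=True, split_data=split)  # dataframe of regression metrics per model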


