Bridging gaps between machine learning and deep learning tools and making robust post-mortem analysis possible.
Project description
GapLearn
GapLearn bridges gaps between machine learning and deep learning tools. All models can be passed to the functions below regardless of the framework they were built with (scikit-learn, tensorflow, xgboost, or even good ole numpy).
The package's first objective is to bring transparency to the often black-box model training process by making robust post-mortem analysis of the hyperparameter and feature selection processes possible. The functions below also further automate some processes while still giving the user full control of the results.
Many features are on their way. Unit tests and full documentation will be added shortly.
The source code is available on GitHub
Installation
pip install gaplearn
See the latest version on PyPI
Submodules
cv
The `cv` submodule will have these classes (`sfs` has been released):
sfs
Description:
- This is a sequential feature selector that enables you to perform backwards elimination with any model (not just a linear regression).
- At each step, the feature that has the lowest permutation importance is selected for removal. The permutation importance is measured as the decrease in accuracy by default, but the user can pass any custom scoring function.
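To illustrate the scoring step described above, here is a minimal permutation-importance sketch. It is not GapLearn's actual implementation (the function name and signature are hypothetical); it just shows the general idea of measuring each feature's importance as the drop in score when that feature's values are shuffled:

```python
import numpy as np

def permutation_importance(model, X, y, score_function, n_repeats=5, seed=0):
    """Importance of each column = drop in score when that column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = score_function(y, model.predict(X))
    importances = {}
    for col in X.columns:
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[col] = rng.permutation(X_perm[col].values)
            scores.append(score_function(y, model.predict(X_perm)))
        # positive value: shuffling this feature hurts the score
        importances[col] = baseline - float(np.mean(scores))
    return importances
```

In a backwards elimination, the feature with the smallest importance under this measure would be the one removed at each step.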
Improvements on their way:
- Add forward selection and all-subsets testing
- Add more built-in scoring functions for assessing feature permutation importance
- Create a custom permutation scoring method to remove `eli5` as a dependency
- Enable a user to get the feature set that matches certain criteria ("has n features", "has score > x")
Methods
backwards_elimination(X, y, model = 'logit', params = {}, fit_function = None, predict_function = None, score_function = None, score_name = 'accuracy', cols = [], verbose = 0)
- Run the backwards elimination
- Params:
- X: (DataFrame or matrix) with independent variables
- y: (iterable) corresponding dependent variable values
- model: (str or custom model type) if str ('logit' or 'rfc'), a corresponding scikit-learn model will be used; if a custom model type, the model you pass will be used
- params: (dict) parameter set for the model
- fit_function: (function) the function that will be used to train the model; the function must accept the parameters `model`, `X`, and `y`; if this value is not set, `backwards_elimination` will attempt to use your model's `fit` method
- predict_function: (function) the function that will be used to make predictions with the model; the function must accept the parameters `model` and `X`; if this value is not set, `backwards_elimination` will attempt to use your model's `predict` method
- score_function: (function) the function that will be used to score the model and determine the feature permutation importance; the function must accept the parameters `y` and `preds`; if this value is not set, accuracy will be used
- score_name: (str) name of the score calculated by the `score_function`; 'accuracy' by default
Example:
import pandas as pd
from gaplearn.cv import sfs
#### Perform a backwards elimination with scikit-learn's random forest model ####
X = pd.read_csv('X_classification.csv')
y = pd.read_csv('y_classification.csv')
fs_rfc = sfs()
fs_rfc.backwards_elimination(X, y, model = 'rfc', params = {'n_jobs': -1})
print('The backwards elimination has been run: {}'.format(fs_rfc.be_complete))
# Get the step-by-step summary
summary_rfc = fs_rfc.get_summary_be() # Alternatively, `summary_rfc = fs_rfc.summary_be`
# Get the predictions and true values for each observation
results_rfc = fs_rfc.get_results_be() # Alternatively, `results_rfc = fs_rfc.results_be`
# Get the features used in the analysis
features_rfc = fs_rfc.features_be # Alternatively, `sorted(list(results_rfc['feature to remove']))`
# Identify which feature set can achieve at least 85% accuracy with the smallest number of features
summary_rfc[summary_rfc['overall accuracy'] > .85]
#### Perform a more complex backwards elimination with scikit-learn's SGDRegressor ####
from sklearn.linear_model import SGDRegressor
model_sgd = SGDRegressor(loss = 'huber', penalty = 'elasticnet')
X = pd.read_csv('X_regression.csv')
y = pd.read_csv('y_regression.csv')
fs_sgd = sfs()
def mse(y, preds):
    # mean squared error over all observations
    score = sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)
    return score
def arbitrary_prediction(model, X):
    preds = model.predict(X) + 1 # arbitrarily deciding to add 1 to the prediction (realistically, this would be a wrapper for models that don't have a `predict` method)
    return preds
fs_sgd.backwards_elimination(X, y, model = model_sgd, predict_function = arbitrary_prediction, score_function = mse, score_name = 'mse')
# Get the step-by-step summary
summary_sgd = fs_sgd.get_summary_be() # Alternatively, `summary_sgd = fs_sgd.summary_be`
# Get the predictions and true values for each observation
results_sgd = fs_sgd.get_results_be() # Alternatively, `results_sgd = fs_sgd.results_be`
# Get the features used in the analysis
features_sgd = fs_sgd.features_be # Alternatively, `sorted(list(results_sgd['feature to remove']))`
param_search_cluster (in development)
Description:
- This is a hyperparameter grid/random search for clustering algorithms
- Unlike other grid/random search algorithms, this one enables you to get the observation-by-observation results from each parameter set so that you can do deep post-mortem analysis of the grid/random search.
param_search (in development)
Description:
- This is a hyperparameter grid/random search for both regression algorithms and classification algorithms
- Unlike other grid/random search algorithms, this one enables you to get the observation-by-observation results from each parameter set so that you can do deep post-mortem analysis of the grid/random search.
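Since `param_search` hasn't been released yet, the core idea of keeping observation-by-observation results for each parameter set can only be sketched. The function and column names below are hypothetical, not GapLearn's API; the point is the long-form results table that makes post-mortem analysis possible:

```python
from itertools import product
import pandas as pd

def grid_search_with_results(model_class, param_grid, X, y):
    """Fit every parameter combination; keep one row per observation per fit."""
    rows = []
    keys = list(param_grid)
    for values in product(*param_grid.values()):
        params = dict(zip(keys, values))
        model = model_class(**params)
        model.fit(X, y)
        for i, (truth, pred) in enumerate(zip(y, model.predict(X))):
            rows.append({**params, 'observation': i, 'y_true': truth, 'y_pred': pred})
    # long-form table: slice by parameter set or by observation for post-mortem analysis
    return pd.DataFrame(rows)
```

With results in this shape, questions like "which observations does every parameter set get wrong?" become a simple `groupby`.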
data_eng
The `data_eng` submodule will have these classes:
distributed_sql (in development)
Description:
- This enables users to chunk multi-parameter sql queries and process them on multiple threads.
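As `distributed_sql` is still in development, the sketch below only illustrates the underlying pattern: split one parameterized query into chunks and fan them out over a thread pool. The function names are hypothetical, and sqlite3 stands in for whatever database the released class will target:

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor
from contextlib import closing

def run_chunk(db_path, query, params):
    # open a fresh connection per call: sqlite3 connections can't be shared across threads
    with closing(sqlite3.connect(db_path)) as conn:
        return conn.execute(query, params).fetchall()

def distributed_query(db_path, query, param_sets, max_workers=4):
    """Run one parameterized query per chunk on a thread pool and merge the rows."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        chunks = pool.map(lambda p: run_chunk(db_path, query, p), param_sets)
        return [row for chunk in chunks for row in chunk]
```

Each parameter set covers a disjoint slice of the data (for example, ranges of an id column), so the merged rows equal the result of the single unchunked query.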