Toolbox for imbalanced datasets in machine learning.

## imbalanced-learn

imbalanced-learn is a Python package offering a number of re-sampling techniques commonly used in datasets showing strong between-class imbalance. It is compatible with scikit-learn and is part of the scikit-learn-contrib projects.
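As a quick illustration of the scikit-learn-style API, here is a minimal sketch on synthetic data (the dataset and parameters are invented for the example; depending on the release, the resampling method is named `fit_sample` or, in later versions, `fit_resample`):

```python
from sklearn.datasets import make_classification
from imblearn.under_sampling import RandomUnderSampler

# A toy imbalanced problem: roughly a 90%/10% class split.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                           random_state=0)

# Samplers follow the familiar scikit-learn estimator pattern.
rus = RandomUnderSampler(random_state=0)
X_resampled, y_resampled = rus.fit_sample(X, y)  # ``fit_resample`` in newer releases
```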

### Documentation

Installation instructions, the API reference, and examples can be found in the documentation.

### Installation

#### Dependencies

imbalanced-learn is tested to work under Python 2.7, 3.5, and 3.6. The dependency requirements are based on the latest scikit-learn release:

- scipy (>=0.13.3)
- numpy (>=1.8.2)
- scikit-learn (>=0.19.0)

Additionally, to run the examples, you need matplotlib (>=2.0.0).

#### Installation

imbalanced-learn is currently available on PyPI and you can install it via pip:

```bash
pip install -U imbalanced-learn
```

The package is also released on the Anaconda Cloud platform:

```bash
conda install -c glemaitre imbalanced-learn
```

If you prefer, you can clone the repository and install it from source. Use the following commands to get a copy from GitHub and install all dependencies:

```bash
git clone https://github.com/scikit-learn-contrib/imbalanced-learn.git
cd imbalanced-learn
pip install .
```

Or install it directly from GitHub using pip:

```bash
pip install -U git+https://github.com/scikit-learn-contrib/imbalanced-learn.git
```
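Whichever installation route you take, a quick sanity check is to import the package and print its version (a minimal sketch, assuming a standard install):

```python
import imblearn
print(imblearn.__version__)
```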

#### Testing

After installation, you can use nose to run the test suite:

```bash
make coverage
```

### Development

The development of this scikit-learn-contrib project is in line with that of the scikit-learn community. Therefore, you can refer to their Development Guide.

If you use imbalanced-learn in a scientific publication, we would appreciate citations to the following paper:

```bibtex
@article{JMLR:v18:16-365,
  author  = {Guillaume Lema{{\^i}}tre and Fernando Nogueira and Christos K. Aridas},
  title   = {Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning},
  journal = {Journal of Machine Learning Research},
  year    = {2017},
  volume  = {18},
  number  = {17},
  pages   = {1-5},
  url     = {http://jmlr.org/papers/v18/16-365}
}
```

Most classification algorithms perform optimally only when the number of samples in each class is roughly the same. Highly skewed datasets, in which the minority class is heavily outnumbered by one or more other classes, have proven to be a challenge while at the same time becoming more and more common.

One way of addressing this issue is by re-sampling the dataset so as to offset the imbalance, in the hope of arriving at a more robust and fair decision boundary than you would otherwise.
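To make the effect of re-sampling concrete, here is a small sketch on a synthetic skewed dataset (the sample counts and class weights are invented for illustration; the method is `fit_sample` on older releases and `fit_resample` on newer ones):

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# A deliberately skewed two-class problem: roughly a 95%/5% split.
X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.95, 0.05], random_state=42)
print(Counter(y))      # e.g. Counter({0: 4741, 1: 259})

# SMOTE synthesises new minority samples until the classes are balanced.
X_res, y_res = SMOTE(random_state=42).fit_sample(X, y)
print(Counter(y_res))  # both classes now have the same count
```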

Re-sampling techniques are divided into four categories:

1. Under-sampling the majority class(es).
2. Over-sampling the minority class.
3. Combining over- and under-sampling.
4. Creating ensemble balanced sets.

Below is a list of the methods currently implemented in this module; a short usage sketch follows the list.

- Under-sampling
  1. Random majority under-sampling with replacement
  2. Extraction of majority-minority Tomek links [1]
  3. Under-sampling with Cluster Centroids
  4. NearMiss-(1 & 2 & 3) [2]
  5. Condensed Nearest Neighbour [3]
  6. One-Sided Selection [4]
  7. Neighbourhood Cleaning Rule [5]
  8. Edited Nearest Neighbours [6]
  9. Instance Hardness Threshold [7]
  10. Repeated Edited Nearest Neighbours [14]
  11. AllKNN [14]

- Over-sampling
  1. Random minority over-sampling with replacement
  2. SMOTE - Synthetic Minority Over-sampling Technique [8]
  3. bSMOTE(1 & 2) - Borderline SMOTE of types 1 and 2 [9]
  4. SVM SMOTE - Support Vectors SMOTE [10]

- Over-sampling followed by under-sampling
  1. SMOTE + Tomek links [12]
  2. SMOTE + ENN [11]

- Ensemble sampling
  1. EasyEnsemble [13]
The different algorithms are presented in the sphinx-gallery.
