Clustering-based over-sampling.
cluster-over-sampling
Implementation of a general interface for clustering based over-sampling algorithms as described in [1], [2]. It is compatible with scikit-learn and imbalanced-learn.
Instructions
Installation instructions, API documentation, and examples can be found in the documentation.
Dependencies
cluster-over-sampling is tested to work under Python 3.6+. The dependencies are the following:
numpy(>=1.1)
scikit-learn(>=0.21)
imbalanced-learn(>=0.6.2)
Optional dependencies for SOMO and Geometric SOMO are the following:
som-learn(>=0.1.1)
geometric-smote(>=0.1.3)
Additionally, to run the examples, you need matplotlib(>=2.0.0) and pandas(>=0.22).
Installation
cluster-over-sampling is currently available on PyPI and you can install it via pip:
pip install -U cluster-over-sampling
The package is also released on the Anaconda Cloud platform:
conda install -c algowit cluster-over-sampling
If you prefer, you can clone it and run the setup.py file. Use the following commands to get a copy from GitHub and install all dependencies:
git clone https://github.com/AlgoWit/cluster-over-sampling.git
cd cluster-over-sampling
pip install .
Or install using pip and GitHub:
pip install -U git+https://github.com/AlgoWit/cluster-over-sampling.git
Testing
After installation, you can use pytest to run the test suite:
make test
About
If you use cluster-over-sampling in a scientific publication, we would appreciate citations to any of the following papers:
@article{Douzas2017,
  doi = {10.1016/j.eswa.2017.03.073},
  url = {https://doi.org/10.1016/j.eswa.2017.03.073},
  year = {2017},
  month = oct,
  publisher = {Elsevier {BV}},
  volume = {82},
  pages = {40--52},
  author = {Georgios Douzas and Fernando Bacao},
  title = {Self-Organizing Map Oversampling ({SOMO}) for imbalanced data set learning},
  journal = {Expert Systems with Applications}
}

@article{Douzas2018,
  doi = {10.1016/j.ins.2018.06.056},
  url = {https://doi.org/10.1016/j.ins.2018.06.056},
  year = {2018},
  month = oct,
  publisher = {Elsevier {BV}},
  volume = {465},
  pages = {1--20},
  author = {Georgios Douzas and Fernando Bacao and Felix Last},
  title = {Improving imbalanced learning through a heuristic oversampling method based on k-means and {SMOTE}},
  journal = {Information Sciences}
}
Learning from class-imbalanced data continues to be a common and challenging problem in supervised learning, as standard classification algorithms are designed to handle balanced class distributions. While different strategies exist to tackle this problem, methods that generate artificial data to achieve a balanced class distribution are more versatile than modifications to the classification algorithm. The SMOTE algorithm [3], as well as any other over-sampling method based on the SMOTE mechanism, generates synthetic samples along line segments that join minority class instances. SMOTE addresses only the issue of between-class imbalance. On the other hand, by clustering the input space and applying an over-sampling algorithm within each resulting cluster with an appropriate resampling ratio, the within-class imbalance issue can also be addressed. SOMO [1] and KMeans-SMOTE [2] are specific realizations of this approach.
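The idea above can be illustrated with a minimal sketch: cluster the input space with k-means, then apply the SMOTE interpolation mechanism inside each cluster that contains enough minority samples. Note that this is a simplified illustration of the concept using scikit-learn and NumPy only; the function name `cluster_smote` and its parameters are hypothetical and do not reflect the package's actual API.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_smote(X, y, minority_label, n_clusters=3, random_state=0):
    """Illustrative clustering-based over-sampling (not the package API)."""
    rng = np.random.default_rng(random_state)
    # Step 1: cluster the whole input space.
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=random_state).fit_predict(X)
    new_samples = []
    for c in np.unique(clusters):
        # Minority class instances that fall into this cluster.
        X_min = X[(clusters == c) & (y == minority_label)]
        if len(X_min) < 2:  # need at least two points to interpolate
            continue
        # Step 2: SMOTE mechanism within the cluster -- synthetic points
        # on line segments joining pairs of minority instances.
        for _ in range(len(X_min)):
            i, j = rng.choice(len(X_min), size=2, replace=False)
            gap = rng.random()
            new_samples.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    if new_samples:
        X = np.vstack([X, np.asarray(new_samples)])
    y = np.concatenate([y, np.full(len(new_samples), minority_label)])
    return X, y

# Toy imbalanced data set: 90 majority vs. 10 minority samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.array([0] * 90 + [1] * 10)
X_res, y_res = cluster_smote(X, y, minority_label=1)
```

Because the resampling ratio is applied per cluster, sparse minority regions receive their own synthetic samples instead of being ignored in favor of dense ones, which is how this family of methods tackles within-class imbalance.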
References:

[1] G. Douzas and F. Bacao, "Self-Organizing Map Oversampling (SOMO) for imbalanced data set learning", Expert Systems with Applications, vol. 82, pp. 40-52, 2017.

[2] G. Douzas, F. Bacao, and F. Last, "Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE", Information Sciences, vol. 465, pp. 1-20, 2018.

[3] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: Synthetic Minority Over-sampling Technique", Journal of Artificial Intelligence Research, vol. 16, pp. 321-357, 2002.
Download files
Download the file for your platform.
Source Distribution
Built Distribution
Hashes for cluster-over-sampling-0.2.3.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | f529654b23989e2539ce6b9fa877dc4dacb028a1fbbb0541855e52fefc3cc975 |
| MD5 | 8aa0b6c24c7fd6b8b250aa55f95fe049 |
| BLAKE2b-256 | 62686b6f8f7254408f527dc14f6ef28821ba49420a56f807736e4d8f5b8275ec |
Hashes for cluster_over_sampling-0.2.3-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | bef84fc83057fa69f0436269e86a228dac3a10759c1168c68e9f2b2c4ef1d696 |
| MD5 | cf9549378f2d7db721608e7a3f5edd84 |
| BLAKE2b-256 | 69bf01dc0a494da7706711b6368eed3a54216dffd8962bb7001912d922d6e4bd |