Generalized Multiclass Support Vector Machines
GenSVM Python Package
This is the Python package for the GenSVM multiclass classifier by Gerrit J.J. van den Burg and Patrick J.F. Groenen.
Useful links:
 PyGenSVM on GitHub
 PyGenSVM on PyPI
 Package documentation
 Journal paper: GenSVM: A Generalized Multiclass Support Vector Machine, JMLR, 17(225):1–42, 2016.
 There is also an R package
 Or you can directly use the C library
Installation
Before GenSVM can be installed, a working NumPy installation is required, so GenSVM can be installed using the following command:
$ pip install numpy && pip install gensvm
If you encounter any errors, please open an issue on GitHub. Don't hesitate, you're helping to make this project better!
Citing
If you use this package in your research, please cite the paper, for instance using the following BibTeX entry:
@article{JMLR:v17:14526,
author = {{van den Burg}, G. J. J. and Groenen, P. J. F.},
title = {{GenSVM}: A Generalized Multiclass Support Vector Machine},
journal = {Journal of Machine Learning Research},
year = {2016},
volume = {17},
number = {225},
pages = {1--42},
url = {http://jmlr.org/papers/v17/14526.html}
}
Usage
The package contains two classes to fit the GenSVM model: GenSVM and GenSVMGridSearchCV. These classes respectively fit a single GenSVM model or fit a series of models for a parameter grid search. The interface to these classes is the same as that of classifiers in scikit-learn, so users familiar with scikit-learn should have no trouble using this package. Below we will show some examples of using the GenSVM classifier and the GenSVMGridSearchCV class in practice.
In the examples we assume that we have loaded the iris dataset from scikit-learn as follows:
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.preprocessing import MaxAbsScaler
>>> X, y = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y)
>>> scaler = MaxAbsScaler().fit(X_train)
>>> X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)
Note that we scale the data using the MaxAbsScaler function. This scales the columns of the data matrix to [-1, 1] without breaking sparsity. Scaling the dataset can have a significant effect on the computation time of GenSVM and is generally recommended for SVMs.
Example 1: Fitting a single GenSVM model
Let's start by fitting the most basic GenSVM model on the training data:
>>> from gensvm import GenSVM
>>> clf = GenSVM()
>>> clf.fit(X_train, y_train)
GenSVM(coef=0.0, degree=2.0, epsilon=1e-06, gamma='auto', kappa=0.0,
    kernel='linear', kernel_eigen_cutoff=1e-08, lmd=1e-05,
    max_iter=100000000.0, p=1.0, random_state=None, verbose=0,
    weights='unit')
With the model fitted, we can predict the test dataset:
>>> y_pred = clf.predict(X_test)
Next, we can compute a score for the predictions. The GenSVM class has a score method which computes the accuracy_score for the predictions. In the GenSVM paper, the adjusted Rand index is often used to compare performance. We illustrate both options below (your results may be different depending on the exact train/test split):
>>> clf.score(X_test, y_test)
1.0
>>> from sklearn.metrics import adjusted_rand_score
>>> adjusted_rand_score(clf.predict(X_test), y_test)
1.0
We can try this again by changing the model parameters. For instance, we can turn on verbosity and use the Euclidean norm in the GenSVM model by setting p = 2:
>>> clf2 = GenSVM(verbose=True, p=2)
>>> clf2.fit(X_train, y_train)
Starting main loop.
Dataset:
n = 112
m = 4
K = 3
Parameters:
kappa = 0.000000
p = 2.000000
lambda = 0.0000100000000000
epsilon = 1e-06
iter = 0, L = 3.4499531579689533, Lbar = 7.3369415851139745, reldiff = 1.1266786095824437
...
Optimization finished, iter = 4046, loss = 0.0230726364692517, rel. diff. = 0.0000009998645783
Number of support vectors: 9
GenSVM(coef=0.0, degree=2.0, epsilon=1e-06, gamma='auto', kappa=0.0,
    kernel='linear', kernel_eigen_cutoff=1e-08, lmd=1e-05,
    max_iter=100000000.0, p=2, random_state=None, verbose=True,
    weights='unit')
For other parameters that can be tuned in the GenSVM model, see GenSVM.
Example 2: Fitting a GenSVM model with a "warm start"
One of the key features of the GenSVM classifier is that training can be accelerated by using so-called "warm starts". This way the optimization can be started in a location that is closer to the final solution than a random starting position would be. To support this, the fit method of the GenSVM class has an optional seed_V parameter. We'll illustrate how this can be used below.
We start with a relatively large value for the epsilon parameter in the model. This is the stopping parameter that determines how long the optimization continues (and therefore how exact the fit is).
>>> clf1 = GenSVM(epsilon=1e-3)
>>> clf1.fit(X_train, y_train)
...
>>> clf1.n_iter_
163
The n_iter_ attribute tells us how many iterations the model did. Now, we can use the solution of this model to start the training for the next model:
>>> clf2 = GenSVM(epsilon=1e-8)
>>> clf2.fit(X_train, y_train, seed_V=clf1.combined_coef_)
...
>>> clf2.n_iter_
3196
Compare this to a model with the same stopping parameter, but without the warm start:
>>> clf2.fit(X_train, y_train)
...
>>> clf2.n_iter_
3699
So we saved about 500 iterations! This effect will be especially significant with large datasets and when you try out many parameter configurations. Therefore this technique is built into the GenSVMGridSearchCV class that can be used to do a grid search of parameters.
Example 3: Running a GenSVM grid search
Often when we're fitting a machine learning model such as GenSVM, we have to try several parameter configurations to figure out which one performs best on our given dataset. This is usually combined with cross-validation to avoid overfitting. To do this efficiently and to make use of warm starts, the GenSVMGridSearchCV class is available. This class works in the same way as the GridSearchCV class of scikit-learn, but uses the GenSVM C library for speed.
To do a grid search, we first have to define the parameters that we want to vary and what values we want to try:
>>> from gensvm import GenSVMGridSearchCV
>>> param_grid = {'p': [1.0, 2.0], 'lmd': [1e-8, 1e-6, 1e-4, 1e-2, 1.0], 'kappa': [-0.9, 0.0]}
For the values that are not varied in the parameter grid, the default values will be used. This means that if you want to change a specific value (such as epsilon, for instance), you can add it to the parameter grid as a parameter with a single value to try (e.g. 'epsilon': [1e-8]).
Running the grid search is now straightforward:
>>> gg = GenSVMGridSearchCV(param_grid)
>>> gg.fit(X_train, y_train)
GenSVMGridSearchCV(cv=None, iid=True,
    param_grid={'p': [1.0, 2.0], 'lmd': [1e-08, 1e-06, 0.0001, 0.01, 1.0], 'kappa': [-0.9, 0.0]},
    refit=True, return_train_score=True, scoring=None, verbose=0)
Note that if we have set refit=True (the default), then we can use the GenSVMGridSearchCV instance to predict or score using the best estimator found in the grid search:
>>> y_pred = gg.predict(X_test)
>>> gg.score(X_test, y_test)
1.0
A nice feature borrowed from scikit-learn is that the results from the grid search can be represented as a pandas DataFrame:
>>> from pandas import DataFrame
>>> df = DataFrame(gg.cv_results_)
This can make it easier to explore the results of the grid search.
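For instance (using a made-up stand-in for gg.cv_results_, since its exact contents depend on the grid and the data), the DataFrame can be sorted so the best-ranked configurations come first:

```python
from pandas import DataFrame

# Illustrative stand-in for gg.cv_results_: a dict of equal-length lists,
# one entry per parameter configuration (the scores below are made up).
cv_results = {
    'param_p': [1.0, 2.0, 1.0],
    'param_lmd': [1e-6, 1e-6, 1e-4],
    'mean_test_score': [0.95, 0.97, 0.93],
    'rank_test_score': [2, 1, 3],
}

df = DataFrame(cv_results)
# Sort so that the best-ranked configuration is listed first.
best_first = df.sort_values('rank_test_score')
print(best_first[['param_p', 'param_lmd', 'mean_test_score']])
```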
Known Limitations
The following are known limitations that are on the roadmap for a future release of the package. If you need any of these features, please vote on them on the linked GitHub issues (this can make us add them sooner!).
 Support for sparse matrices. SciPy supports sparse matrices, as does the GenSVM C library. Getting them to work together requires some additional effort. In the meantime, if you really want to use sparse data with GenSVM (this can lead to significant speedups!), check out the GenSVM C library.
 Specification of class misclassification weights. Currently, misclassifying an object from class A as class C is considered as bad as misclassifying an object from class B as class C. Depending on the application, this may not be the desired effect. Adding class misclassification weights can solve this issue.
Questions and Issues
If you have any questions or encounter any issues with using this package, please ask them on GitHub.
License
This package is licensed under the GNU General Public License version 3.
Copyright (c) G.J.J. van den Burg, excluding the sections of the code that are explicitly marked to come from scikit-learn.