
Oblique decision tree with SVM nodes

Project description


STree

Oblique Tree classifier based on SVM nodes. The nodes are built and split with sklearn SVC models. STree is a sklearn estimator and can be integrated in pipelines, grid searches, etc.
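For example, since it exposes the scikit-learn estimator API, it can be placed in a Pipeline and tuned with GridSearchCV. A minimal sketch, assuming the estimator is importable as Stree from the stree package:

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from stree import Stree  # assumed import path

X, y = load_iris(return_X_y=True)
pipe = Pipeline([("scale", StandardScaler()), ("tree", Stree(random_state=0))])
param_grid = {"tree__C": [0.1, 1.0, 10.0], "tree__max_depth": [3, 5, None]}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)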

Installation

pip install git+https://github.com/doctorado-ml/stree

Documentation

The documentation can be found at stree.readthedocs.io

Examples

Jupyter notebooks

  • Benchmark

  • Some features

  • Gridsearch

  • Ensembles

Hyperparameters

Hyperparameter (Type/Values, default): Meaning

* C (<float>, default 1.0): Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive.
* kernel ({"liblinear", "linear", "poly", "rbf", "sigmoid"}, default "linear"): Specifies the kernel type to be used in the algorithm. It must be one of "liblinear", "linear", "poly", "rbf" or "sigmoid". "liblinear" uses the liblinear library; the rest use the libsvm library through scikit-learn.
* max_iter (<int>, default 1e5): Hard limit on iterations within the solver, or -1 for no limit.
* random_state (<int>, default None): Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False. Pass an int for reproducible output across multiple function calls.
max_depth (<int>, default None): Specifies the maximum depth of the tree.
* tol (<float>, default 1e-4): Tolerance for the stopping criterion.
* degree (<int>, default 3): Degree of the polynomial kernel function ("poly"). Ignored by all other kernels.
* gamma ({"scale", "auto"} or <float>, default "scale"): Kernel coefficient for "rbf", "poly" and "sigmoid". If gamma="scale" (default) is passed, 1 / (n_features * X.var()) is used as the value of gamma; if "auto", 1 / n_features is used.
split_criteria ({"impurity", "max_samples"}, default "impurity"): Decides (only in multiclass classification) which column (class) to use to split the dataset in a node**. "max_samples" is incompatible with the "ovo" multiclass_strategy.
criterion ({"gini", "entropy"}, default "entropy"): The function to measure the quality of a split (only used if max_features != num_features). Supported criteria are "gini" for the Gini impurity and "entropy" for the information gain.
min_samples_split (<int>, default 0): The minimum number of samples required to split an internal node; 0 (default) means any number.
max_features (<int>, <float> or {"auto", "sqrt", "log2"}, default None): The number of features to consider when looking for the split. If int, consider max_features features at each split; if float, max_features is a fraction and int(max_features * n_features) features are considered at each split; if "auto" or "sqrt", max_features=sqrt(n_features); if "log2", max_features=log2(n_features); if None, max_features=n_features.
splitter ({"best", "random", "trandom", "mutual", "cfs", "fcbf", "iwss"}, default "random"): The strategy used to choose the feature set at each node (only used if max_features < num_features). Supported strategies are: "best": the sklearn SelectKBest algorithm is used in every node to choose the max_features best features; "random": the algorithm generates 5 candidates and chooses the one with the maximum information gain; "trandom": the algorithm generates only one random combination; "mutual": chooses the best features w.r.t. their mutual information with the label; "cfs": applies Correlation-based Feature Selection; "fcbf": applies the Fast Correlation-Based Filter; "iwss": applies the IWSS-based algorithm.
normalize (<bool>, default False): Whether to standardize the features at each node, using the samples that reach it.
* multiclass_strategy ({"ovo", "ovr"}, default "ovo"): Strategy to use with multiclass datasets; "ovo": one versus one, "ovr": one versus rest.

* Hyperparameter used by the support vector classifier of every node
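As a quick sketch of how these hyperparameters are passed to the estimator (the import path stree.Stree and the chosen values are illustrative assumptions, not recommendations):

from sklearn.datasets import load_wine
from stree import Stree  # assumed import path

X, y = load_wine(return_X_y=True)
clf = Stree(
    kernel="rbf",          # each node's SVC uses an RBF kernel
    C=7.0,                 # regularization strength of every node's SVC
    max_depth=5,           # limit the depth of the tree
    splitter="best",       # pick features with SelectKBest at each node
    max_features="sqrt",   # consider sqrt(n_features) features per split
    random_state=0,
).fit(X, y)
print(clf.predict(X[:5]))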

** Splitting in a STree node

The decision function is applied to the dataset and the distances from the samples to the hyperplanes are computed into a matrix. This matrix has as many columns as there are classes (if more than two, i.e. multiclass classification) or a single column if it is a binary dataset. In binary classification only one hyperplane is computed, so only one column is needed to store the distances of the samples to it. If three or more classes are present in the dataset, as many hyperplanes as there are classes are needed, and therefore one column per hyperplane.

In multiclass classification we have to decide which column to take into account to make the split, which depends on the hyperparameter split_criteria: if "impurity" is chosen, STree computes the information gain of every split candidate using each column and chooses the one that maximizes it; otherwise ("max_samples"), STree chooses the column with the most samples of a predicted class (the column with the most positive numbers in it).

Once the column to use for the split has been chosen, the algorithm splits the samples with positive distances to the hyperplane from the rest.
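A minimal sketch of that procedure in terms of a scikit-learn SVC; the helper names (entropy, information_gain, split_node) and the SVC configuration are illustrative assumptions, not STree's actual code:

import numpy as np
from sklearn.svm import SVC


def entropy(y):
    # Shannon entropy of a label vector
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())


def information_gain(y, mask):
    # Information gain of splitting y into y[mask] and y[~mask]
    n = len(y)
    left, right = y[mask], y[~mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    return entropy(y) - len(left) / n * entropy(left) - len(right) / n * entropy(right)


def split_node(X, y, split_criteria="impurity"):
    # Fit an SVC on the samples that reach the node and split them in two
    clf = SVC(kernel="linear", decision_function_shape="ovr").fit(X, y)
    distances = clf.decision_function(X)      # (n_samples,) or (n_samples, n_classes)
    if distances.ndim == 1:                   # binary problem: a single hyperplane
        distances = distances.reshape(-1, 1)
    if distances.shape[1] == 1:
        col = 0
    elif split_criteria == "max_samples":     # column with the most positive distances
        col = int(np.argmax((distances > 0).sum(axis=0)))
    else:                                     # "impurity": column with the best information gain
        gains = [information_gain(y, distances[:, c] > 0) for c in range(distances.shape[1])]
        col = int(np.argmax(gains))
    mask = distances[:, col] > 0              # positive side of the chosen hyperplane
    return (X[mask], y[mask]), (X[~mask], y[~mask])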

Tests

python -m unittest -v stree.tests

License

STree is MIT licensed

Reference

R. Montañana, J. A. Gámez, J. M. Puerta, "STree: a single multi-class oblique decision tree based on support vector machines", LNAI 12882, pp. 54-64, 2021.

