morfist: mixed-output-rf
Multi-target Random Forest implementation that can mix both classification and regression tasks.
Morfist implements the Random Forest algorithm (Breiman, 2001) with support for mixed-task multi-task learning, i.e., it is possible to train the model on any number of classification tasks and regression tasks, simultaneously. Morfist's mixed multi-task learning implementation follows that proposed by Linusson (2013).
- Breiman, L. (2001). Random forests. Machine learning, 45(1), 5-32.
- Linusson, H. (2013). Multi-output random forests.
Installation
With pip:
pip install decision-tree-morfist
With conda:
conda install -c systemallica decision-tree-morfist
Usage
Initialising the model
- Similarly to a scikit-learn RandomForestClassifier, a MixedRandomForest can be initialised in this way:

```python
from morfist import MixedRandomForest

mrf = MixedRandomForest(
    n_estimators=n_trees,
    min_samples_leaf=1,
    classification_targets=[0]
)
```
- The available parameters are:
  - n_estimators (int): the number of trees in the forest. Optional. Default value: 10.
  - max_features (int | float | str): the number of features to consider when looking for the best split. Optional. Default value: 'sqrt'.
    - If int, then consider max_features features at each split.
    - If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.
    - If 'sqrt', then max_features=sqrt(n_features) (same as 'auto').
    - If 'log2', then max_features=log2(n_features).
    - If None, then max_features=n_features.

    Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.
  - min_samples_leaf (int): the minimum number of samples required to be at a leaf node. Optional. Default value: 5.

    Note: a split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
  - choose_split (str): the method used to find the best split. Optional. Default value: 'mean'.
    - Possible values:
      - 'mean': the mean information gain is used.
      - 'max': the maximum information gain is used.
  - classification_targets (int[]): the indices of the target variables that are part of the classification task. Optional. Default value: None. If no classification_targets are specified, the random forest will treat all target variables as regression variables.
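The max_features rules above can be sketched as a small helper. This is an illustrative reimplementation of the rules as stated, not morfist's internal code; the function name is hypothetical:

```python
import math

def resolve_max_features(max_features, n_features):
    """Illustrative resolution of max_features, following the rules above."""
    if max_features is None:
        return n_features                       # use all features
    if isinstance(max_features, int):
        return max_features                     # fixed count
    if isinstance(max_features, float):
        return int(max_features * n_features)   # fraction of the features
    if max_features in ("sqrt", "auto"):
        return int(math.sqrt(n_features))
    if max_features == "log2":
        return int(math.log2(n_features))
    raise ValueError(f"unknown max_features: {max_features!r}")

# e.g. with 16 features:
resolve_max_features("sqrt", 16)   # 4
resolve_max_features(0.5, 16)      # 8
resolve_max_features("log2", 16)   # 4
```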
Training the model
- Once the model is initialised, it can be fitted like this:

```python
mrf.fit(X, y)
```

where X are the training examples and y are their respective labels (if they are categorical) or values (if they are numerical).
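As a sketch of the data layout this implies, here is a hypothetical mixed dataset: X is a 2-D feature array, y has one column per target, and classification_targets lists the column indices of the categorical targets. The shapes and values are illustrative assumptions, not requirements documented by morfist:

```python
import numpy as np

# Hypothetical dataset: 100 samples, 5 features, 2 targets.
# Target column 0 is categorical (a class label); column 1 is numeric.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.column_stack([
    rng.integers(0, 3, size=100),  # classification target: classes 0, 1, 2
    rng.normal(size=100),          # regression target: continuous values
])

try:
    from morfist import MixedRandomForest
    mrf = MixedRandomForest(n_estimators=10, classification_targets=[0])
    mrf.fit(X, y)  # train on both tasks simultaneously
except ImportError:
    pass  # morfist not installed; the point here is the data layout
```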
Prediction
- The model can now be used to predict new instances.
- Class/value:

```python
mrf.predict(x)
```

- Probability:

```python
mrf.predict_proba(x)
```
TODO:
- Speed up the learning algorithm implementation (morfist is currently much slower than the Random Forest implementation available in scikit-learn).