
Scikit-learn Wrapper for Regularized Greedy Forest

Project Description



A Python wrapper for the machine learning algorithm Regularized Greedy Forest (RGF) [1].


Features: scikit-learn interface and support for multiclass classification problems.

rgf_python contains both vanilla RGF from the paper [1] and FastRGF [2] implementations.

Note that FastRGF is developed to be used with large (and sparse) datasets, so on small datasets it often shows poorer performance compared to vanilla RGF.

The original RGF implementations are available only for regression and binary classification, but rgf_python also supports multiclass classification via the "One-vs-Rest" method.


from sklearn import datasets
from sklearn.utils.validation import check_random_state
from sklearn.model_selection import StratifiedKFold, cross_val_score
from rgf.sklearn import RGFClassifier

iris = datasets.load_iris()
rng = check_random_state(0)
perm = rng.permutation(iris.target.size)
iris.data = iris.data[perm]
iris.target = iris.target[perm]

rgf = RGFClassifier(max_leaf=400,
                    algorithm="RGF_Sib",
                    test_interval=100,
                    verbose=True)

n_folds = 3

rgf_scores = cross_val_score(rgf,
                             iris.data,
                             iris.target,
                             cv=StratifiedKFold(n_folds))

rgf_score = sum(rgf_scores) / n_folds
print('RGF Classifier score: {0:.5f}'.format(rgf_score))

More examples of using RGF estimators can be found here.

Examples of using FastRGF estimators can be found here.

Software Requirements

  • Python (2.7 or >= 3.4)
  • scikit-learn (>= 0.18)


Installation

From PyPI using pip:

pip install rgf_python

or from GitHub:

git clone --recursive https://github.com/fukatani/rgf_python.git
cd rgf_python
python setup.py install

If you have any problems while installing by the methods listed above, you should build the RGF and FastRGF executable files from sources on your own and place the compiled executables either into a directory included in the PATH environment variable or into the directory with the installed package. Alternatively, you may specify the actual locations of the executable files and the directory for temp files with the corresponding flags in the configuration file .rgfrc, which you should create in your home directory. The default values are platform dependent: on Windows exe_location=$HOME/rgf.exe, fastrgf_location=$HOME, temp_location=$HOME/temp/rgf; on other systems exe_location=$HOME/rgf, fastrgf_location=$HOME, temp_location=/tmp/rgf. Here is an example of a .rgfrc file:

exe_location=C:/Program Files/RGF/bin/rgf.exe
fastrgf_location=C:/Program Files/FastRGF/bin
temp_location=C:/Program Files/RGF/temp

Note that while exe_location should point to a concrete RGF executable file, fastrgf_location should point to a folder in which forest_train.exe and forest_predict.exe FastRGF executable files are located.

Also, you may directly specify installation without automatic compilation:

pip install rgf_python --install-option=--nocompilation


git clone --recursive https://github.com/fukatani/rgf_python.git
cd rgf_python
python setup.py install --nocompilation

sudo (or administrator privileges on Windows) may be needed to perform these commands.

Here is a guide to building the executable files from sources. The file for RGF will be placed in the rgf_python/include/rgf/bin folder and the files for FastRGF will appear in the rgf_python/include/fast_rgf/bin folder.

RGF Compilation

Precompiled file

The easiest way. Just take precompiled file from rgf_python/include/rgf/bin. For Windows 32-bit rename rgf32.exe to rgf.exe and take it.

Visual Studio (existing solution)
  1. Open directory rgf_python/include/rgf/Windows/rgf.
  2. Open rgf.sln file with Visual Studio and choose BUILD -> Build Solution (Ctrl+Shift+B). If you are asked to upgrade solution file after opening it click OK. If you have errors about Platform Toolset go to PROJECT -> Properties -> Configuration Properties -> General and select the toolset installed on your machine.
MinGW (existing makefile)

Build executable file with MinGW g++ from existing makefile (you may want to customize this file for your environment).

cd rgf_python/include/rgf/build
mingw32-make
CMake and Visual Studio

Create solution file with CMake and then compile with Visual Studio.

cd rgf_python/include/rgf/build
cmake ../ -G "Visual Studio 10 2010"
cmake --build . --config Release

If you are compiling on a 64-bit machine, add Win64 to the end of the generator's name: Visual Studio 10 2010 Win64. We tested the following versions of Visual Studio:

  • Visual Studio 10 2010 [Win64]
  • Visual Studio 11 2012 [Win64]
  • Visual Studio 12 2013 [Win64]
  • Visual Studio 14 2015 [Win64]
  • Visual Studio 15 2017 [Win64]

Other versions may work but are untested.

CMake and MinGW

Create makefile with CMake and then compile with MinGW.

cd rgf_python/include/rgf/build
cmake ../ -G "MinGW Makefiles"
cmake --build . --config Release
g++ (existing makefile)

Build executable file with g++ from existing makefile (you may want to customize this file for your environment).

cd rgf_python/include/rgf/build
make

CMake

Create makefile with CMake and then compile.

cd rgf_python/include/rgf/build
cmake ../
cmake --build . --config Release

FastRGF Compilation

Note that compilation is possible only with g++-5 and newer versions. Other compilers are unsupported, and older versions produce corrupted files.

CMake and MinGW-w64

On Windows, compilation is supported only with MinGW-w64, because only this version provides POSIX threads.

cd rgf_python/include/fast_rgf/build
cmake .. -G "MinGW Makefiles"
mingw32-make install
On Unix-like systems:

cd rgf_python/include/fast_rgf/build
cmake ..
make install
Docker image

We provide a docker image with rgf_python installed.

# Run docker image
docker run -it fukatani/rgf_python /bin/bash
# Run RGF example
python ./rgf_python/examples/RGF/
# Run FastRGF example
python ./rgf_python/examples/FastRGF/

Tuning Hyper-parameters


You can tune hyper-parameters as follows.

  • max_leaf: Appropriate values are data-dependent and usually vary from 1000 to 10000.
  • test_interval: For efficiency, it must be either a multiple or a divisor of 100 (the default value of the optimization interval).
  • algorithm: You can select "RGF", "RGF_Opt" or "RGF_Sib".
  • loss: You can select "LS", "Log", "Expo" or "Abs".
  • reg_depth: Must be no smaller than 1. Meant to be used with algorithm="RGF_Opt" or "RGF_Sib".
  • l2: Either 1, 0.1, or 0.01 often produces good results, though with exponential loss (loss="Expo") and logistic loss (loss="Log") some data requires smaller values such as 1e-10 or 1e-20.
  • sl2: Default value is equal to l2. On some data, l2/100 works well.
  • normalize: If turned on, training targets are normalized so that the average becomes zero.
  • min_samples_leaf: Smaller values may slow down training. Too large values may degrade model accuracy.
  • n_iter: Number of iterations of coordinate descent to optimize weights.
  • n_tree_search: Number of trees to be searched for the nodes to split. The most recently grown trees are searched first.
  • opt_interval: Weight optimization interval in terms of the number of leaf nodes.
  • learning_rate: Step size of Newton updates used in coordinate descent to optimize weights.

Detailed instructions for tuning hyper-parameters are here.


FastRGF estimators accept the following hyper-parameters:

  • n_estimators: Typical range is [100, 10000], and a typical value is 1000.
  • max_depth: Controls the tree depth.
  • max_leaf: Controls the tree size.
  • tree_gain_ratio: Controls when to start a new tree.
  • min_samples_leaf: Controls the tree growth process.
  • loss: You can select "LS", "MODLS" or "LOGISTIC".
  • l1: Typical range is [0, 1000], and a large value induces sparsity.
  • l2: Use a relatively large value such as 1000 or 10000. The larger the value is, the larger n_estimators you need to use; the resulting accuracy is often better, at the cost of longer training time.
  • opt_algorithm: You can select "rgf" or "epsilon-greedy".
  • learning_rate: Step size of epsilon-greedy boosting. Meant to be used with opt_algorithm="epsilon-greedy".
  • max_bin: Typical range for dense data is [10, 65000] and for sparse data is [10, 250].
  • min_child_weight: Controls the process of discretization (creating bins).
  • data_l2: Controls the degree of L2 regularization for discretization (creating bins).
  • sparse_max_features: Typical range is [1000, 10000000]. Meant to be used with sparse data.
  • sparse_min_occurences: Controls which features will be selected. Meant to be used with sparse data.

Using at Kaggle Kernels

Kaggle Kernels support rgf_python. Please see this page.


Troubleshooting

If you meet any error, please try to run the tests to confirm successful package installation.

Then feel free to open a new issue.

Known Issues

  • FastRGF crashes if training dataset is too small (#data < 2^8). (rgf_python#92)
  • rgf_python does not provide any built-in method to calculate feature importances. (rgf_python#109)


FAQ

  • Q: Temporary files use too much space on my hard drive (Kaggle Kernels disk space is exhausted while fitting a rgf_python model).

    A: Please see rgf_python#75.

  • Q: GridSearchCV/RandomizedSearchCV/RFECV or another scikit-learn tool with an n_jobs parameter hangs/freezes/crashes when run with a rgf_python estimator.

    A: This is a known general problem of multiprocessing in Python. You should set the n_jobs=1 parameter of either the estimator or the scikit-learn tool.


License

rgf_python is distributed under the GNU General Public License v3 (GPLv3). Please read the LICENSE file for more information.

rgf_python includes RGF version 1.2, which is distributed under the GPLv3. The original CLI implementation of RGF can be downloaded here.

rgf_python includes FastRGF version 0.5, which is distributed under the MIT license. The original CLI implementation of FastRGF can be downloaded here.

Many thanks to Rie Johnson and Tong Zhang (the authors of RGF).


Shamelessly, some part of the implementation is based on the following code. Thanks!
