
A novel ensemble method for hard, axis-aligned decision trees learned end-to-end with gradient descent.

Project description

🌳 GRANDE: Gradient-Based Decision Tree Ensembles 🌳

🌳 GRANDE is a novel gradient-based decision tree ensemble method for tabular data!

🔍 What's new?

  • End-to-end gradient descent for tree ensembles.
  • Combines the inductive bias of hard, axis-aligned splits with the flexibility of gradient-based optimization.
  • Advanced instance-wise weighting to learn representations for both simple & complex relations in one model.

📝 Details on the method can be found in our paper, available at: https://openreview.net/forum?id=XEFWBxi075

Cite us

@inproceedings{marton2024grande,
  title={{GRANDE}: Gradient-Based Decision Tree Ensembles},
  author={Sascha Marton and Stefan L{\"u}dtke and Christian Bartelt and Heiner Stuckenschmidt},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=XEFWBxi075}
}

Usage

Please note that a GPU is required to achieve competitive runtimes. Also, set 'objective' to 'binary', 'classification', or 'regression' according to your task.

Enable and specify GPU

import os

# Make only the first GPU visible and let TensorFlow allocate GPU memory
# on demand instead of reserving it all up front.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'
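
GRANDE builds on TensorFlow (TF_FORCE_GPU_ALLOW_GROWTH above is a TensorFlow setting), so a quick sanity check that a GPU is actually visible can save debugging time. This optional check is not part of the GRANDE API:

import tensorflow as tf

# Optional sanity check: this should list at least one GPU; otherwise
# training falls back to CPU and runtimes will not be competitive.
print(tf.config.list_physical_devices('GPU'))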

Load Data

from sklearn.model_selection import train_test_split
import openml

# Fetch an example dataset from OpenML and extract the features, target,
# and the indicator marking which columns are categorical.
dataset = openml.datasets.get_dataset(40536)
X, y, categorical_indicator, attribute_names = dataset.get_data(target=dataset.default_target_attribute)
categorical_feature_indices = [idx for idx, is_categorical in enumerate(categorical_indicator) if is_categorical]

# Hold out 20% for testing, then split the remainder into train and validation sets.
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

X_train, X_valid, y_train, y_valid = train_test_split(X_temp, y_temp, test_size=0.2, random_state=42)

Preprocessing, Hyperparameters and Training

GRANDE requires categorical features to be encoded appropriately. The best results are achieved using Leave-One-Out Encoding for high-cardinality categorical features and One-Hot Encoding for low-cardinality categorical features. Furthermore, all features should be normalized using a quantile transformation. Passing the categorical indices to the model will automatically preprocess the data accordingly, as the sketch below illustrates.
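
For reference, here is a rough manual sketch of the preprocessing scheme described above. It is an illustration only, not GRANDE's internal code: the cardinality cutoff, the category_encoders dependency, and the column handling are assumptions.

import pandas as pd
from sklearn.preprocessing import QuantileTransformer
from category_encoders import LeaveOneOutEncoder  # assumed dependency: pip install category_encoders

CARDINALITY_CUTOFF = 10  # hypothetical threshold separating low- and high-cardinality features

categorical_cols = [X_train.columns[idx] for idx in categorical_feature_indices]
high_card = [col for col in categorical_cols if X_train[col].nunique() > CARDINALITY_CUTOFF]
low_card = [col for col in categorical_cols if col not in high_card]

# Leave-One-Out Encoding for high-cardinality categoricals (fit on train only).
# Note: LeaveOneOutEncoder expects a numeric target, so encode y first if it is categorical.
loo = LeaveOneOutEncoder(cols=high_card)
X_train_enc = loo.fit_transform(X_train, y_train)
X_valid_enc = loo.transform(X_valid)

# One-Hot Encoding for low-cardinality categoricals; align the columns so
# categories that appear in only one split do not break the feature matrix.
X_train_enc = pd.get_dummies(X_train_enc, columns=low_card)
X_valid_enc = pd.get_dummies(X_valid_enc, columns=low_card)
X_train_enc, X_valid_enc = X_train_enc.align(X_valid_enc, join='left', axis=1, fill_value=0)

# Quantile transformation to normalize all features (fit on train only).
qt = QuantileTransformer(output_distribution='normal')
X_train_enc = qt.fit_transform(X_train_enc)
X_valid_enc = qt.transform(X_valid_enc)

In practice, simply passing the categorical indices via 'cat_idx' (see the args below) delegates all of this to the model.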

In the following, we train the model using the default parameters. GRANDE already achieves strong results with its default parameters, but hyperparameter optimization (HPO) can increase performance even further. An appropriate grid is specified in the model class; a small search sketch follows the evaluation code below.

from GRANDE import GRANDE

params = {
        'depth': 5, # tree depth
        'n_estimators': 2048, # number of estimators / trees

        'learning_rate_weights': 0.005, # learning rate for leaf weights
        'learning_rate_index': 0.01, # learning rate for split indices
        'learning_rate_values': 0.01, # learning rate for split values
        'learning_rate_leaf': 0.01, # learning rate for leaf values (logits)

        'optimizer': 'adam', # optimizer
        'cosine_decay_steps': 0, # decay steps for lr schedule (CosineDecayRestarts)

        'loss': 'crossentropy', # loss function (default 'crossentropy' for binary & multi-class classification and 'mse' for regression)
        'focal_loss': False, # use focal loss {True, False}
        'temperature': 0.0, # temperature for stochastic re-weighted GD (0.0, 1.0)

        'from_logits': True, # use logits for weighting {True, False}
        'use_class_weights': True, # use class weights for training {True, False}

        'dropout': 0.0, # dropout rate (here, dropout randomly disables individual estimators of the ensemble during training)

        'selected_variables': 0.8, # feature subset percentage (0.0, 1.0)
        'data_subset_fraction': 1.0, # data subset percentage (0.0, 1.0)
}

args = {
    'epochs': 1_000, # number of epochs for training
    'early_stopping_epochs': 25, # patience for early stopping (best weights are restored)
    'batch_size': 64,  # batch size for training

    'cat_idx': categorical_feature_indices, # put list of categorical indices
    'objective': 'binary', # objective / task {'binary', 'classification', 'regression'}
    
    'random_seed': 42,
    'verbose': 1,       
}

model_grande = GRANDE(params=params, args=args)

model_grande.fit(X_train=X_train,
                 y_train=y_train,
                 X_val=X_valid,
                 y_val=y_valid)

preds_grande = model_grande.predict(X_test)

Evaluate Model

# Imports needed for the metrics below (predictions were already computed above).
import numpy as np
import sklearn.metrics

if args['objective'] == 'binary':
    accuracy = sklearn.metrics.accuracy_score(y_test, np.round(preds_grande[:,1]))
    f1_score = sklearn.metrics.f1_score(y_test, np.round(preds_grande[:,1]), average='macro')
    roc_auc = sklearn.metrics.roc_auc_score(y_test, preds_grande[:,1], average='macro')
    
    print('Accuracy:', accuracy)
    print('F1 Score:', f1_score)
    print('ROC AUC:', roc_auc)
elif args['objective'] == 'classification':
    accuracy = sklearn.metrics.accuracy_score(y_test, np.argmax(preds_grande, axis=1))
    f1_score = sklearn.metrics.f1_score(y_test, np.argmax(preds_grande, axis=1), average='macro')
    roc_auc = sklearn.metrics.roc_auc_score(y_test, preds_grande, average='macro', multi_class='ovo', labels=[i for i in range(preds_grande.shape[1])])

    print('Accuracy GRANDE:', accuracy)
    print('F1 Score GRANDE:', f1_score)
    print('ROC AUC GRANDE:', roc_auc)
else:
    mean_absolute_error = sklearn.metrics.mean_absolute_error(y_test, preds_grande)
    r2_score = sklearn.metrics.r2_score(y_test, preds_grande)

    print('MAE GRANDE:', mean_absolute_error)
    print('R2 Score GRANDE:', r2_score)
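
As noted above, the defaults are strong, but a hyperparameter search can improve results further. Below is a minimal random-search sketch over a hypothetical search space; the authoritative grid is the one specified in the model class, and the ROC-AUC scoring assumes the binary objective used in this example.

from sklearn.model_selection import ParameterSampler

# Hypothetical search space for illustration only; consult the grid in the model class.
search_space = {
    'depth': [4, 5, 6],
    'n_estimators': [512, 1024, 2048],
    'learning_rate_weights': [0.001, 0.005, 0.01],
    'dropout': [0.0, 0.25],
}

best_score, best_params = float('-inf'), None
for sampled in ParameterSampler(search_space, n_iter=8, random_state=42):
    trial_params = {**params, **sampled}  # override the defaults defined above
    model = GRANDE(params=trial_params, args=args)
    model.fit(X_train=X_train, y_train=y_train, X_val=X_valid, y_val=y_valid)
    preds_valid = model.predict(X_valid)
    score = sklearn.metrics.roc_auc_score(y_valid, preds_valid[:, 1])
    if score > best_score:
        best_score, best_params = score, sampled

print('Best validation ROC AUC:', best_score, 'with', best_params)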

More

Please note that this is an experimental implementation that has not been fully tested yet. If you encounter any errors or observe unexpected behavior, please let me know.

Download files

Download the file for your platform.

Source Distribution

GRANDE-0.1.6.tar.gz (13.3 kB)

Uploaded Source

Built Distribution

GRANDE-0.1.6-py3-none-any.whl (13.6 kB)

Uploaded Python 3

File details

Details for the file GRANDE-0.1.6.tar.gz.

File metadata

  • Download URL: GRANDE-0.1.6.tar.gz
  • Upload date:
  • Size: 13.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for GRANDE-0.1.6.tar.gz
  • SHA256: 1bcead2d8143c038e43cdc2d36cae66f813da610318c7c9a8daca78f6e3da497
  • MD5: f0ab3831541c63069d8811db2bee02bf
  • BLAKE2b-256: 36cd119408d6dadf53c4af3d26577c874e4276fd3e1c3ff8953532ef6e2af86e


File details

Details for the file GRANDE-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: GRANDE-0.1.6-py3-none-any.whl
  • Upload date:
  • Size: 13.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for GRANDE-0.1.6-py3-none-any.whl
  • SHA256: 95e326f265332317d7946a9e1fa4a0b0e7f51d316fe7911cd2f4f97b4c2c32b4
  • MD5: 7f7904a37e9249113bddd89629be799f
  • BLAKE2b-256: 6d2e8bdc27a0e7a52151d2e44329af985cd4d9499ee13871a57a49b319bc672f

