AlloyGBM

Rust-first gradient boosting for regression, classification, and ranking with time-aware validation and Python bindings

AlloyGBM is a Rust-first gradient boosting library with Python bindings, supporting regression, binary and multi-class classification, and learning-to-rank. It is built for fast native execution, deterministic training, and time-aware tabular workflows.

AlloyGBM is strongest on panel and finance-style problems where leakage-aware validation and practical iteration speed matter. It also performs competitively on general tabular benchmarks and includes native artifact prediction, TreeSHAP explanations, and purged time-series split helpers.

When To Use AlloyGBM

AlloyGBM is a good fit when you want:

  • a native Rust-backed gradient boosting library with regression, classification, and ranking
  • deterministic CPU training and inference
  • sklearn-compatible estimators (GBMRegressor, GBMClassifier, GBMRanker)
  • time-aware validation helpers for forecasting or panel-style workflows
  • native prediction from serialized artifacts
  • TreeSHAP explanations and global feature importances
  • NaN/missing value support out of the box
  • model persistence via pickle, save/load, or artifact export

Installation

PyPI:

pip install alloygbm

From source:

python -m pip install --upgrade maturin
maturin develop --manifest-path bindings/python/Cargo.toml --release

AlloyGBM targets Python 3.11+ and uses a native Rust extension module.

Wheel targets for 0.5.1:

  • macOS arm64
  • Linux x86_64 (manylinux)
  • source distribution for other platforms
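
To sanity-check an install, a one-liner like the following works (this assumes the package exposes the conventional __version__ attribute, which is not confirmed by the docs above):

python -c "import alloygbm; print(alloygbm.__version__)"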

Quick Examples

Regression

from alloygbm import GBMRegressor, rmse

model = GBMRegressor(
    learning_rate=0.05,
    max_depth=6,
    n_estimators=1200,
    deterministic=True,
    seed=7,
)
model.fit(X_train, y_train, eval_set=(X_valid, y_valid))
print(rmse(y_test, model.predict(X_test)))

Binary Classification

from alloygbm import GBMClassifier, accuracy, log_loss

model = GBMClassifier(
    learning_rate=0.05,
    max_depth=6,
    n_estimators=500,
    deterministic=True,
    seed=7,
)
model.fit(X_train, y_train)

labels = model.predict(X_test)            # [0, 1, 1, 0, ...]
probas = model.predict_proba(X_test)      # [[P(0), P(1)], ...]

print("accuracy:", accuracy(y_test, labels))
print("log_loss:", log_loss(y_test, probas[:, 1]))

Learning-to-Rank

from alloygbm import GBMRanker, ndcg

model = GBMRanker(
    ranking_objective="rank:ndcg",
    learning_rate=0.05,
    max_depth=6,
    n_estimators=300,
    deterministic=True,
    seed=7,
)
model.fit(X_train, y_train, group=query_ids_train)

scores = model.predict(X_test)
print("NDCG@10:", ndcg(y_test, scores, group=query_ids_test, k=10))

MorphBoost (Adaptive Split Criterion)

MorphBoost is an opt-in training mode that blends the standard gradient gain with a normalized information-theoretic term. Across rounds, the blend ramps in via a tanh(iter/20) warmup; an EMA over per-class gradient statistics shapes split selection; and leaf magnitudes are scaled by a depth penalty and per-iteration shrinkage. See the MorphBoost paper for the formulation.
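
For intuition, the warmup means the information term contributes almost nothing in the first few rounds and approaches its full blend weight after a few dozen. A minimal sketch of the ramp as described above (illustrative only, not AlloyGBM's internal code; the exact blend form is in the MorphBoost paper):

import math

def morph_blend_weight(iteration, info_score_weight=0.3):
    # tanh(iter/20) warmup: ~0 at round 0, ~0.46 at round 10,
    # ~0.96 at round 40; approaches info_score_weight asymptotically
    return info_score_weight * math.tanh(iteration / 20)

# Schematic split gain under morph (schematic, not library internals):
#   gain = (1 - w) * gradient_gain + w * normalized_info_gain
# where w = morph_blend_weight(current_round)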

from alloygbm import GBMRegressor

# Constant LR (default) with morph adaptive split criterion
model = GBMRegressor(
    n_estimators=1200,
    max_depth=6,
    learning_rate=0.05,
    training_mode="morph",      # opt in
    morph_rate=0.1,             # per-round leaf shrinkage
    info_score_weight=0.3,      # blend weight for info-theoretic term
    depth_penalty_base=0.9,     # multiplier per depth level
    balance_penalty=True,       # penalize highly imbalanced splits
    seed=7,
)
model.fit(X_train, y_train)

# With warmup-cosine LR schedule (good fit for very-low-LR runs)
model = GBMRegressor(
    n_estimators=5000,
    learning_rate=0.01,
    training_mode="morph",
    lr_schedule="warmup_cosine",
    lr_warmup_frac=0.1,         # fraction of n_estimators spent in warmup
    seed=7,
)

training_mode="morph" works with GBMClassifier and GBMRanker too, with identical parameter semantics.

Piecewise-Linear Leaves

Set leaf_model="linear" on any estimator to replace scalar leaves with small closed-form linear models (f_s(x) = b_s + Σ α_j x_j). Weights are solved via ridge regression, α* = -(XᵀHX + λI)⁻¹ Xᵀg, regularized by lambda_l2. This typically converges in fewer rounds on data with linear within-node residual structure (e.g. California Housing), at a 2–8× per-round training overhead.

from alloygbm import GBMRegressor

model = GBMRegressor(
    n_estimators=300,
    max_depth=6,
    learning_rate=0.05,
    leaf_model="linear",
    lambda_l2=0.01,    # recommended >= 0.01 with linear leaves
    seed=7,
)
model.fit(X_train, y_train)

leaf_model="linear" works with GBMClassifier and GBMRanker, and composes with training_mode="morph". SHAP currently requires leaf_model="constant".

Time-Aware Validation

from alloygbm import GBMRegressor, purged_time_series_splits, rmse

splits = purged_time_series_splits(time_index, n_splits=5, purge_gap=1, embargo=0)

for train_idx, test_idx in splits:
    model = GBMRegressor(deterministic=True, seed=7)
    model.fit(
        [rows[i] for i in train_idx],
        [targets[i] for i in train_idx],
    )
    score = rmse(
        [targets[i] for i in test_idx],
        model.predict([rows[i] for i in test_idx]),
    )
    print("fold RMSE:", score)

For panel data, use purged_panel_splits(...).
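
Conceptually, purging drops training indices within purge_gap steps of each test block and an embargo drops indices immediately after it, so labels that overlap the test window never leak into training. A toy index-level illustration of that idea (not the library's actual partitioning logic):

def toy_purged_splits(n, n_splits=5, purge_gap=1, embargo=0):
    # Contiguous test blocks over [0, n); train on everything else,
    # dropping purge_gap indices before each block (label overlap)
    # and embargo indices immediately after it.
    fold = n // n_splits
    for k in range(n_splits):
        test_start, test_end = k * fold, min(n, (k + 1) * fold)
        train_idx = [i for i in range(n)
                     if i < test_start - purge_gap or i >= test_end + embargo]
        test_idx = list(range(test_start, test_end))
        yield train_idx, test_idx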

Model Persistence

import pickle

# Pickle round-trip
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

# Native save/load
model.save_model("model.agbm")
loaded = GBMRegressor.load_model("model.agbm")

# Artifact export for deployment
artifact_bytes = model.artifact_bytes
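
The exported bytes pair with the artifact-backed predictor listed under Inference and Explanations below; the import path and call signature here are assumptions for illustration:

from alloygbm import predict_from_artifact

# Hypothetical call shape: predict straight from serialized artifact
# bytes without reconstructing the estimator (signature assumed)
preds = predict_from_artifact(artifact_bytes, X_test)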

Feature Summary

Estimators

  • GBMRegressor -- squared-error regression with dataset-aware training_policy
  • GBMClassifier -- binary classification with log-loss objective, predict_proba, sklearn ClassifierMixin
  • GBMRanker -- learning-to-rank with 5 objectives: rank:pairwise, rank:ndcg, rank:xendcg, queryrmse, yetirank
  • All estimators are sklearn-compatible (get_params, set_params, score, pipeline integration)

Training Features

  • NaN/missing value support with learned split direction
  • Sample weights via fit(..., sample_weight=...) (combined sketch after this list)
  • Monotone constraints via monotone_constraints
  • Feature importance weighting via feature_weights
  • Leaf-wise (best-first) tree growth via tree_growth="leaf"
  • Warm-starting / incremental training via warm_start=True
  • Up to 65,535 bins per feature (continuous_binning_max_bins)
  • Multiple categorical column support via categorical_feature_indices
  • Early stopping with best_iteration_, best_score_, evals_result_
  • Objective-aware training metric tracking (RMSE, log-loss, accuracy, NDCG)
  • Adaptive split criterion via training_mode="morph" (MorphBoost)
  • Per-iteration learning-rate schedules: lr_schedule="constant" (default) or "warmup_cosine"
  • Piecewise-linear leaves via leaf_model="linear" (closed-form ridge solve, faster convergence on linear-trend data)
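
A combined sketch of several of the options above; the +1/0/-1 per-feature encoding for monotone_constraints and the sklearn-style warm-start flow are assumptions based on common GBM conventions:

import numpy as np
from alloygbm import GBMRegressor

model = GBMRegressor(
    n_estimators=200,
    monotone_constraints=[1, 0, -1],   # assumed +1/0/-1 per feature
    tree_growth="leaf",                # leaf-wise (best-first) growth
    warm_start=True,                   # keep trees across fit() calls
    seed=7,
)
model.fit(X_train, y_train, sample_weight=np.ones(len(y_train)))

# Warm start (sklearn convention assumed): raise n_estimators and
# refit to append rounds to the existing ensemble
model.set_params(n_estimators=400)
model.fit(X_train, y_train)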

Inference and Explanations

  • Zero-copy numpy prediction from native artifacts
  • TreeSHAP explanations via shap_values(...) (polynomial-time, no feature limit; sketch after this list)
  • Global feature importance via feature_importances(...)
  • Artifact-backed prediction via predict_from_artifact(...)
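
A short sketch of the explanation calls named above, assuming a fitted model; optional arguments are omitted because their exact names are not documented here:

# Requires leaf_model="constant" (see Current Limitations)
shap = model.shap_values(X_test)            # per-row, per-feature attributions
importances = model.feature_importances()   # argument-free call assumed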

Validation Helpers

  • purged_time_series_splits(...) -- leakage-aware time-series cross-validation
  • purged_panel_splits(...) -- panel-data cross-validation

Metrics

  • Regression: rmse, mae, r2_score
  • Classification: accuracy, log_loss
  • Ranking: ndcg
  • Finance: pearson_correlation, rank_ic, hit_rate, icir
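
The regression, classification, and ranking metrics follow the (y_true, y_pred) convention shown in Quick Examples; the finance metrics are assumed here to take the same shape:

from alloygbm import rmse, pearson_correlation, rank_ic

preds = model.predict(X_test)
print("RMSE:", rmse(y_test, preds))
# (y_true, y_pred) signature assumed for the finance metrics
print("IC:", pearson_correlation(y_test, preds))
print("Rank IC:", rank_ic(y_test, preds))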

Benchmark Snapshot

The benchmark suite compares AlloyGBM against XGBoost, LightGBM, and CatBoost across regression, classification, and ranking tasks.

Regression:

  • AlloyGBM is strongest on panel_time_series
  • AlloyGBM is strong on dow_jones_financial
  • AlloyGBM is competitive on dense_numeric, trails on california_housing and bike_sharing

Classification:

  • AlloyGBM is competitive with established libraries on breast_cancer and synthetic_classification

Ranking:

  • AlloyGBM competes on synthetic_ranking using its native LambdaMART implementation

Benchmark tooling and methodology live in benchmarks/README.md.

Current Limitations

  • CPU-only runtime (GPU backend is architecturally planned but not implemented)
  • No interaction constraints
  • No dart/goss boosting modes
  • SHAP not yet supported with leaf_model="linear" (use "constant" for now)

License

MIT. See LICENSE.

Download files

Source distribution:

  • alloygbm-0.5.1.tar.gz (263.0 kB)

Built distributions:

  • alloygbm-0.5.1-cp311-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.0 MB; CPython 3.11+, manylinux glibc 2.17+, x86-64)
  • alloygbm-0.5.1-cp311-abi3-macosx_11_0_arm64.whl (899.6 kB; CPython 3.11+, macOS 11.0+, ARM64)

SHA256 hashes:

  • alloygbm-0.5.1.tar.gz: 94be2160d48eea0f3c18686470d264a960c6b9850799bb410d94cb4eb31f99c1
  • alloygbm-0.5.1-cp311-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl: 90115f7189e206a629f4783036bc7bec4eb498a03e6b1988ce3e48143ec7032d
  • alloygbm-0.5.1-cp311-abi3-macosx_11_0_arm64.whl: 53e43157048938dcd2996449db38312cacbce60d0d652a7d70c4e2e42ea26107

All 0.5.1 files were uploaded via Trusted Publishing, with attestation bundles from publish.yml on LGA-Personal/AlloyGBM. Attestation values reflect the state when the release was signed and may no longer be current.
