Monotonic composite quantile gradient boost regressor
Project description
MQBoost estimates multiple quantiles simultaneously while guaranteeing the non-crossing condition (monotone quantile condition). It builds on LightGBM and XGBoost, two leading gradient boosting frameworks.
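Concretely, for target quantile levels τ₁ < τ₂ < … < τₖ, the fitted quantile estimates are constrained to satisfy

\hat{q}_{\tau_1}(x) \le \hat{q}_{\tau_2}(x) \le \cdots \le \hat{q}_{\tau_k}(x) \quad \text{for all inputs } x,

so that, for example, the predicted 0.7 quantile never falls below the predicted 0.3 quantile.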
Hyperparameter tuning is delegated to Optuna, whose search algorithms fine-tune the model without manual trial and error.
Installation
Install using pip:
pip install mqboost
Usage
Features
- MQRegressor: quantile regressor
Parameters
x # Explanatory data (e.g. pd.DataFrame)
        # A column named '_tau' must not be included
y # Response data (e.g. np.ndarray)
alphas # Target quantiles
# It must be in ascending order and not contain duplicates
objective # [Optional] Objective to minimize: "check" (default) or "huber"; see the loss sketch below
model # [Optional] Boosting library to use: "lightgbm" (default) or "xgboost"
delta # [Optional] Smoothing parameter for the "huber" objective; used only when objective == "huber"
        # It must be smaller than 0.1
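For intuition, the "check" objective is the standard pinball (quantile) loss, and "huber" is a variant smoothed by delta near zero residuals. Below is a minimal NumPy sketch of the check loss for a single quantile level; it is illustrative only, not mqboost's internal implementation:

import numpy as np

def check_loss(y_true: np.ndarray, y_pred: np.ndarray, alpha: float) -> float:
    # Pinball (check) loss: alpha * r when r >= 0, (alpha - 1) * r when r < 0,
    # where r = y_true - y_pred; averaged over all observations
    residual = y_true - y_pred
    return float(np.mean(np.maximum(alpha * residual, (alpha - 1) * residual)))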
Methods
train # Train the quantile model
        # Any parameter of the underlying model can be passed except "objective"
predict # Predict with input data
optimize_params # Optimize hyperparameters using Optuna
Example
import numpy as np
from mqboost import MQRegressor
# Generate sample data
sample_size = 500
x = np.linspace(-10, 10, sample_size)
y = np.sin(x) + np.random.uniform(-0.4, 0.4, sample_size)
x_test = np.linspace(-10, 10, sample_size)
y_test = np.sin(x_test) + np.random.uniform(-0.4, 0.4, sample_size)
# Define target quantiles
alphas = [0.3, 0.4, 0.5, 0.6, 0.7]
# Specify model type
model = "lightgbm" # Options: "lightgbm" or "xgboost"
# Set objective function
objective = "huber" # Options: "huber" or "check"
delta = 0.01 # Set when objective is "huber", default is 0.05
# Initialize the LightGBM-based quantile regressor
mq_lgb = MQRegressor(
x=x,
    y=y,
alphas=alphas,
objective=objective,
model=model,
delta=delta,
)
# Train the model with fixed parameters
lgb_params = {
"max_depth": 4,
"num_leaves": 15,
"learning_rate": 0.1,
"boosting_type": "gbdt",
}
mq_lgb.train(params=lgb_params)
# Train the model with Optuna hyperparameter optimization
mq_lgb.train(n_trials=10)
# Alternatively, you can optimize parameters first and then train
# best_params = mq_lgb.optimize_params(n_trials=10)
# mq_lgb.train(params=best_params)
# Predict using the trained model
preds_lgb = mq_lgb.predict(x=x_test, alphas=alphas)
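Because the monotone quantile condition is enforced, predictions for higher quantiles should never fall below those for lower ones. A quick sanity check follows; this is a sketch that assumes predict returns an array with one row per quantile level, which may differ from the actual return shape:

# Verify the non-crossing property across quantile levels
preds = np.asarray(preds_lgb)  # assumed shape: (len(alphas), len(x_test))
assert np.all(np.diff(preds, axis=0) >= 0), "quantile predictions crossed"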
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution: mqboost-0.1.1.tar.gz
Built Distribution: mqboost-0.1.1-py3-none-any.whl
File details
Details for the file mqboost-0.1.1.tar.gz.
File metadata
- Download URL: mqboost-0.1.1.tar.gz
- Upload date:
- Size: 8.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.12.1 Linux/6.5.0-1023-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 587f621b41bee5aa2d766508c23f5680a98ee59922bad0bcc37de6df261762f1
MD5 | bd31a7186b553bf7b76542e835ce75a8
BLAKE2b-256 | b5daed106888313e7aca703cecb09799ea7e0b3bbe9dae59f4bd5a3fc847d503
File details
Details for the file mqboost-0.1.1-py3-none-any.whl.
File metadata
- Download URL: mqboost-0.1.1-py3-none-any.whl
- Upload date:
- Size: 9.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.12.1 Linux/6.5.0-1023-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 2f5512e74c6242a0b2a49388eb037581d4856bb0cea33f6e25b64673565dda24
MD5 | bc0b04c96ed527b6b906e87286e39f45
BLAKE2b-256 | 74cebc01969f6861b9a96fc271dae3895de3e8d887addcac9f3139d4d3d0486c