gbm_autosplit

LightGBM/XGBoost scikit-learn interfaces that perform "early stopping" with a single data set during `fit`: `n_estimators` is tuned on an internal random split, and the model is then refit on the entire data.
"Early stopping" is great practice to tune the number of estimators for gradient boosting models. However it is not difficult to use it in tuning module in scikit-learn such as RandomizedSearchCV / GridSearchCV because to use early stopping module requires two data sets but scikit learn does not have such interface.
To solve this, the estimators in this package perform the following steps within `fit`:
- Randomly split the original input data into two sets
- Estimate `n_estimators` using the split data with early stopping
- Perform `fit` on the entire data set with the estimated `n_estimators` (a minimal sketch of this procedure follows)
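The sketch below shows what such a fit could look like if written by hand. It is illustrative only, assuming standard LightGBM and scikit-learn APIs; the split ratio, stopping rounds, and function name are assumptions, not gbm_autosplit's actual parameters or implementation.

```python
import lightgbm as lgb
from sklearn.model_selection import train_test_split

def autosplit_fit(x, y, train_ratio=0.8, stopping_rounds=50):
    # 1. Randomly split the original input data into two sets.
    x_tr, x_val, y_tr, y_val = train_test_split(x, y, train_size=train_ratio)

    # 2. Estimate n_estimators with early stopping on the split data.
    probe = lgb.LGBMClassifier(n_estimators=10_000)
    probe.fit(
        x_tr, y_tr,
        eval_set=[(x_val, y_val)],
        callbacks=[lgb.early_stopping(stopping_rounds)],
    )

    # 3. Refit on the entire data set with the estimated n_estimators.
    final = lgb.LGBMClassifier(n_estimators=probe.best_iteration_)
    final.fit(x, y)
    return final
```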
Install
```
pip install gbm_autosplit
```
Usage
```python
import gbm_autosplit

# x: feature matrix, y: target labels
estimator = gbm_autosplit.LGBMClassifier()
estimator.fit(x, y)
```
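Because `n_estimators` is tuned inside `fit`, the estimator can be dropped directly into scikit-learn search utilities. A hedged sketch (the search space below uses ordinary LightGBM hyper-parameters and a synthetic data set, nothing specific to this package):

```python
import gbm_autosplit
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV

x, y = make_classification(n_samples=1000, random_state=0)

# n_estimators is absent from the search space: it is estimated
# internally on every fit via the auto-split early stopping.
search = RandomizedSearchCV(
    gbm_autosplit.LGBMClassifier(),
    param_distributions={
        "num_leaves": [15, 31, 63],
        "learning_rate": [0.01, 0.05, 0.1],
    },
    n_iter=5,
    random_state=0,
)
search.fit(x, y)
```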