A Python package for efficient pickling of ML models.
Slim Trees
slim-trees is a Python package for saving and loading compressed sklearn tree-based and lightgbm models. The compression is performed by modifying how the model is pickled by Python's pickle module.
We presented this library at PyData Berlin 2023; check out the slides!
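To give a feel for the mechanism, here is a toy illustration of custom pickling in general, not slim-trees' actual implementation (the helper names are made up for this example): pickle lets you register a per-type reducer that decides what gets written out, so large arrays can be stored in a more compact form and restored on load.
import io
import pickle
import numpy as np

def _restore_int64(raw):
    # reconstructor that runs at unpickling time
    return np.frombuffer(raw, dtype=np.int32).astype(np.int64)

def _reduce_ndarray(arr):
    # store int64 arrays whose values fit into int32 more compactly
    if arr.dtype == np.int64 and arr.size and np.abs(arr).max() < 2**31:
        return _restore_int64, (arr.astype(np.int32).tobytes(),)
    return arr.__reduce__()  # fall back to numpy's default pickling

buffer = io.BytesIO()
pickler = pickle.Pickler(buffer)
pickler.dispatch_table = {np.ndarray: _reduce_ndarray}
pickler.dump(np.arange(1_000_000, dtype=np.int64))  # roughly half the size of a plain pickle

restored = pickle.loads(buffer.getvalue())  # a plain pickle load still works
assert restored.dtype == np.int64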
Installation
pip install slim-trees
# or
micromamba install slim-trees -c conda-forge
# or
pixi add slim-trees
Usage
Using slim-trees does not affect your training pipeline. Simply call dump_sklearn_compressed or dump_lgbm_compressed to save your model.
[!WARNING]
slim-trees does not save all the data that would be saved by sklearn: only the parameters that are relevant for inference are saved. If you want to save the full model including impurity etc. for analytic purposes, we suggest saving both the original using pickle.dump for analytics and the slimmed-down version using slim-trees for production.
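For example, the two artifacts can be written side by side (a minimal sketch in the style of the examples below; the file names are arbitrary):
import pickle
from sklearn.ensemble import RandomForestClassifier
from slim_trees import dump_sklearn_compressed
# load training data
X, y = ...
model = RandomForestClassifier()
model.fit(X, y)
# full model with all attributes, e.g. for analytics
with open("model_full.pkl", "wb") as f:
    pickle.dump(model, f)
# slimmed-down, inference-only model for production
dump_sklearn_compressed(model, "model.pkl")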
Example for a RandomForestClassifier:
# example, you can also use other Tree-based models
from sklearn.ensemble import RandomForestClassifier
from slim_trees import dump_sklearn_compressed
# load training data
X, y = ...
model = RandomForestClassifier()
model.fit(X, y)
dump_sklearn_compressed(model, "model.pkl")
# or alternatively with compression
dump_sklearn_compressed(model, "model.pkl.lzma")
Example for an LGBMRegressor:
from lightgbm import LGBMRegressor
from slim_trees import dump_lgbm_compressed
# load training data
X, y = ...
model = LGBMRegressor()
model.fit(X, y)
dump_lgbm_compressed(model, "model.pkl")
# or alternatively with compression
dump_lgbm_compressed(model, "model.pkl.lzma")
Later, you can load the model using load_compressed or pickle.load.
import pickle
from slim_trees import load_compressed
model = load_compressed("model.pkl")
# or alternatively with pickle.load
with open("model.pkl", "rb") as f:
model = pickle.load(f)
Save your model as bytes
You can also save the model as bytes instead of to a file, similar to the pickle.dumps method.
from sklearn.ensemble import RandomForestClassifier
from slim_trees import dumps_sklearn_compressed, loads_compressed
# load training data
X, y = ...
model = RandomForestClassifier()
model.fit(X, y)
data = dumps_sklearn_compressed(model, compression="lzma")
...
model_loaded = loads_compressed(data, compression="lzma")
Drop-in replacement for pickle
You can also use the slim_trees.sklearn_tree.dump or slim_trees.lgbm_booster.dump functions as drop-in replacements for pickle.dump.
from slim_trees import sklearn_tree, lgbm_booster
# for sklearn models
with open("model.pkl", "wb") as f:
sklearn_tree.dump(model, f) # instead of pickle.dump(...)
# for lightgbm models
with open("model.pkl", "wb") as f:
lgbm_booster.dump(model, f) # instead of pickle.dump(...)
Development Installation
You can install the package in development mode using the conda package manager pixi:
❯ git clone https://github.com/quantco/slim-trees.git
❯ cd slim-trees
❯ pixi install
❯ pixi run postinstall
❯ pixi run test
[...]
❯ pixi run py312 python
>>> import slim_trees
[...]
Benchmark
As a general overview of what you can expect in terms of savings: for a 1.2 GB sklearn RandomForestRegressor, the file written by slim-trees is 9x smaller than the original pickle file.
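To measure the savings on your own model, you can compare the resulting file sizes directly (a minimal sketch; model stands for any fitted tree-based model, e.g. the RandomForestClassifier from the usage example above):
import os
import pickle
from slim_trees import dump_sklearn_compressed
model = ...  # a fitted tree-based model
# regular pickle
with open("model_full.pkl", "wb") as f:
    pickle.dump(model, f)
# slim-trees
dump_sklearn_compressed(model, "model.pkl")
ratio = os.path.getsize("model_full.pkl") / os.path.getsize("model.pkl")
print(f"slim-trees file is {ratio:.1f}x smaller")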