
Config-driven ML analysis library for regression and classification


LizyML

Python 3.10+ | License: MIT

Config-driven ML library that unifies tune / fit / predict / evaluate / export for regression, binary classification, and multiclass classification.

Key Features

  • One config, full pipeline -- A single dict/YAML/JSON drives splitting, training, tuning, evaluation, and export. No boilerplate orchestration code.
  • Reproducibility by default -- Seed, split indices, params, library versions, and data fingerprint are captured automatically in every run.
  • Leakage-aware CV and calibration -- Each row's OOF prediction comes from a fold model that never trained on that row. Calibration is cross-fit on the same outer splits. Time and group constraints propagate to inner validation.
  • 8 CV strategies -- KFold, Stratified, Group, StratifiedGroup, TimeSeries, Purged TimeSeries, Group TimeSeries, and 2-axis Blocked Group KFold.
  • Stable result contracts -- FitResult, PredictionResult, and artifact formats have fixed schemas. Downstream code never breaks on shape changes.
  • Codegen export -- Generate standalone train.py + predict.py that run without LizyML installed.
  • Optional extras -- Tuning (Optuna), SHAP explanations, Plotly visualizations, and Beta calibration (scipy) are all opt-in.
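
The leakage rule above -- each row's out-of-fold prediction comes from a model that never trained on that row -- can be sketched with plain scikit-learn (a generic illustration of the OOF scheme, not LizyML's internal code):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

oof_pred = np.full(len(y), np.nan)
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # The fold model is fit without the validation rows, so each row's
    # OOF prediction comes from a model that never saw that row.
    fold_model = LinearRegression().fit(X[train_idx], y[train_idx])
    oof_pred[val_idx] = fold_model.predict(X[val_idx])

# Every row received exactly one OOF prediction (full coverage).
assert not np.isnan(oof_pred).any()
```

The same pattern generalizes to the grouped and time-ordered splitters: only the index generator changes, never the "predict only on held-out rows" invariant.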

Installation

pip install lizyml

Extras

pip install 'lizyml[tuning]'         # Optuna hyperparameter search
pip install 'lizyml[explain]'        # SHAP explanations
pip install 'lizyml[plots]'          # Plotly visualizations
pip install 'lizyml[calibration]'    # Beta calibrator (scipy)
pip install 'lizyml[tuning,explain,plots,calibration]'  # all extras

Development install

git clone https://github.com/nbx-liz/LizyML.git
cd LizyML
uv sync --group dev

Quick Start

import numpy as np
import pandas as pd
from lizyml import Model

# --- Synthetic data ---
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "feat_a": rng.normal(size=n),
    "feat_b": rng.normal(size=n),
    "cat_col": rng.choice(["x", "y", "z"], size=n),
    "target": rng.normal(size=n),
})

# --- Config ---
config = {
    "config_version": 1,
    "task": "regression",
    "data": {"target": "target"},
    "features": {"categorical": ["cat_col"]},
    "split": {"method": "kfold", "n_splits": 5},
    "model": {"name": "lgbm"},
    "evaluation": {"metrics": ["rmse", "mae"]},
}

# --- Train, evaluate, predict ---
model = Model(config=config)
fit_result = model.fit(data=df)
metrics = model.evaluate()
print(metrics)  # {"raw": {"oof": {"rmse": ..., "mae": ...}, ...}}

pred = model.predict(df.drop(columns=["target"]))
print(pred.pred[:5])

# --- Save and reload ---
model.export("my_model")
loaded = Model.load("my_model")
loaded.predict(df.drop(columns=["target"]))

Configuration

LizyML accepts configs as Python dicts, JSON files, or YAML files. Environment variables override any key using the LIZYML__ prefix (e.g., LIZYML__training__seed=99).
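
The double-underscore convention maps segments of the variable name onto nested config keys. A minimal sketch of that mapping (my own illustration, not LizyML's actual loader, which would also coerce value types):

```python
import os

def apply_env_overrides(config: dict, prefix: str = "LIZYML__") -> dict:
    """Overlay LIZYML__a__b=value environment variables onto nested config keys."""
    for name, value in os.environ.items():
        if not name.startswith(prefix):
            continue
        *parents, leaf = name[len(prefix):].split("__")
        node = config
        for key in parents:
            node = node.setdefault(key, {})  # create intermediate dicts as needed
        node[leaf] = value  # a real loader would also coerce "99" -> 99
    return config

os.environ["LIZYML__training__seed"] = "99"
cfg = apply_env_overrides({"training": {"seed": 42}})
print(cfg["training"]["seed"])  # "99" (as a string; type coercion omitted here)
```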

See Config Reference for all keys, defaults, split method guides, and tuning space definitions.

Codegen Export

Generate LizyML-free scripts for production deployment:

model.export_code("deploy/my_model")

Output:

  • train.py -- retrain on new data with python train.py data.csv
  • predict.py -- run inference with python predict.py test.csv -o out.csv
  • config.json -- all hyperparameters and feature definitions
  • test_equivalence.py -- verify codegen matches Model.predict()
  • artifacts/ -- model files in human-readable formats

The generated scripts depend only on lightgbm, numpy, pandas, and scikit-learn.

Architecture

LizyML uses a 5-layer architecture where dependencies flow strictly downward:

Layer 4  Facade       Model (orchestration only, no logic)
           |
Layer 3  Optional     explain / plots / persistence / codegen
           |
Layer 2  Composition  training / evaluation / tuning
           |
Layer 1  Leaf         config / data / splitters / features / estimators / metrics / calibration
           |
Layer 0  Foundation   types (FitResult, PredictionResult, ...) / exceptions / logging

Key rules:

  • Downward-only dependencies (no circular imports)
  • Layer 2 references Layer 1 through abstract interfaces only
  • Only the Facade (Layer 4) assembles concrete classes
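
The second and third rules together look roughly like this: composition code programs against an abstraction, and only the facade picks the implementation (illustrative names, not LizyML's actual classes):

```python
from typing import Iterator, Protocol
import numpy as np

class Splitter(Protocol):  # Layer 1 abstraction
    def split(self, n_rows: int) -> Iterator[tuple[np.ndarray, np.ndarray]]: ...

def run_cv(splitter: Splitter, n_rows: int) -> int:
    # Layer 2: depends only on the Splitter protocol, never on a concrete class.
    return sum(1 for _ in splitter.split(n_rows))

class SimpleKFold:  # concrete Layer 1 implementation
    def __init__(self, n_splits: int) -> None:
        self.n_splits = n_splits

    def split(self, n_rows: int) -> Iterator[tuple[np.ndarray, np.ndarray]]:
        idx = np.arange(n_rows)
        for fold in np.array_split(idx, self.n_splits):
            yield np.setdiff1d(idx, fold), fold

# Layer 4 facade: the only place where a concrete class is assembled.
n_folds = run_cv(SimpleKFold(n_splits=5), n_rows=100)
```

Because `run_cv` sees only the protocol, swapping in a grouped or time-ordered splitter requires no change to the composition layer.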

See ARCHITECTURE.md for full diagrams and module layout.

Design Priorities

Reproducibility -- Same config + seed = same splits, same OOF predictions, same metrics. Every run captures seed, split indices, params, library versions, and a data fingerprint.
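
A data fingerprint of the kind mentioned here can be built by hashing the frame's contents and schema together (one possible scheme, shown for illustration; LizyML's exact recipe may differ):

```python
import hashlib
import pandas as pd

def data_fingerprint(df: pd.DataFrame) -> str:
    """SHA-256 over per-row content hashes plus the column schema."""
    row_hashes = pd.util.hash_pandas_object(df, index=False).values
    schema = ",".join(f"{col}:{dtype}" for col, dtype in df.dtypes.items())
    return hashlib.sha256(row_hashes.tobytes() + schema.encode()).hexdigest()

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
fp1 = data_fingerprint(df)
fp2 = data_fingerprint(df.copy())
assert fp1 == fp2  # identical data -> identical fingerprint
```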

Leakage prevention -- OOF rows are never seen during training. Calibration cross-fit reuses outer CV splits. Time and group constraints propagate to inner validation (early stopping) and calibration.

Contract stability -- FitResult, PredictionResult, and artifact formats have fixed schemas. Breaking changes require a format_version bump and migration path.

Result Objects

  Object            Key fields
  FitResult         oof_pred, if_pred_per_fold, metrics, models, splits, run_meta
  PredictionResult  pred, proba (binary), shap_values (optional), warnings
  Model artifact    Trained models, pipeline state, calibrator, config, format_version

model.evaluate() returns structured metrics:

{
    "raw": {
        "oof": {"rmse": 0.42, "mae": 0.33},
        "if_mean": {"rmse": 0.40, "mae": 0.31},
        "if_per_fold": [...],
        "oof_coverage": 1.0,
    },
    "calibrated": {"oof": {"logloss": 0.35}},  # binary only
}

See BLUEPRINT.md for full schemas and invariants.

Roadmap

  • Broader scikit-learn estimator support
  • DNN backend (PyTorch)
  • Multiclass calibration
  • Ranking tasks
  • Additional export formats (ONNX, TorchScript)

Documentation

Contributing

  1. Fork the repo and create a branch from develop
  2. Run quality gates: uv run ruff check . && uv run mypy lizyml/ && uv run pytest
  3. Open a PR against develop

License

MIT



Download files


Source Distribution

lizyml-0.8.0.tar.gz (589.8 kB)


Built Distribution


lizyml-0.8.0-py3-none-any.whl (145.9 kB)


File details

Details for the file lizyml-0.8.0.tar.gz.

File metadata

  • Download URL: lizyml-0.8.0.tar.gz
  • Size: 589.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for lizyml-0.8.0.tar.gz

  Algorithm    Hash digest
  SHA256       a2b9569c109eee948e91b1b3a0a3212dbdf26da5360927e28c6fbea32740b30c
  MD5          247c2c461fef99282f472844f3f3b591
  BLAKE2b-256  febf4fa1c7c0bc1bec8dfee15f72b142b34f7e1fbe138d5992b207f40f72eddd


Provenance

The following attestation bundles were made for lizyml-0.8.0.tar.gz:

Publisher: release.yml on nbx-liz/LizyML

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file lizyml-0.8.0-py3-none-any.whl.

File metadata

  • Download URL: lizyml-0.8.0-py3-none-any.whl
  • Size: 145.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for lizyml-0.8.0-py3-none-any.whl

  Algorithm    Hash digest
  SHA256       151c9c096d55dafc4e9c79f1632efec03d6e08ddfeeb0426f9b7bf29e2b60668
  MD5          4b15665d2ab1359ddd7f29d868f81df0
  BLAKE2b-256  620b65e037b47e1d9c65ba16220ab79de289dc10c68e7cd691b4891e87380dd1


Provenance

The following attestation bundles were made for lizyml-0.8.0-py3-none-any.whl:

Publisher: release.yml on nbx-liz/LizyML

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
