MLflavors: A collection of custom MLflow flavors.

Project description

The MLflavors package adds MLflow support for some popular machine learning frameworks currently not considered for inclusion as MLflow built-in flavors. Just like built-in flavors, you can use this package to save your model as an MLflow artifact, load your model from MLflow for batch inference, and deploy your model to a serving endpoint using MLflow deployment tools.

The following open-source libraries are currently supported:

Framework      Tutorials              Category
-------------  ---------------------  --------------------------
Orbit          MLflow-Orbit           Time Series Forecasting
Sktime         MLflow-Sktime          Time Series Forecasting
StatsForecast  MLflow-StatsForecast   Time Series Forecasting
PyOD           MLflow-PyOD            Anomaly Detection
SDV            MLflow-SDV             Synthetic Data Generation

The MLflow interface for the supported frameworks closely follows the design of the built-in flavors. In particular, a custom model loaded as a pyfunc flavor generates predictions from a single-row Pandas DataFrame configuration argument that exposes the parameters of the flavor's inference API.
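
For illustration, here is a minimal sketch of this configuration DataFrame pattern. The column names below mirror the PyOD flavor used in the Quickstart; they are not shared by every flavor, since each flavor exposes its own inference parameters:

import numpy as np
import pandas as pd

# Dummy input data, for illustration only
X_new = np.random.randn(5, 2)

# Single-row configuration DataFrame: each column is one inference parameter.
# "X" holds the input data and "predict_method" selects the inference method
# (these names follow the PyOD flavor shown in the Quickstart below).
predict_conf = pd.DataFrame(
    [
        {
            "X": X_new,
            "predict_method": "decision_function",
        }
    ]
)

# A model loaded with the flavor's pyfunc loader would then be scored via:
# loaded_pyfunc.predict(predict_conf)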

Documentation

Usage examples for all flavors and the API reference can be found in the package documentation.

Installation

Installing from PyPI:

$ pip install mlflavors

Quickstart

This example trains a PyOD KNN model on a synthetic dataset, where normal data points are drawn from a multivariate Gaussian distribution and outliers from a uniform distribution. A new MLflow experiment is created to log the evaluation metrics and the trained model as an artifact, and anomaly scores are then computed by loading the trained model both in the native flavor and as a pyfunc flavor:

import json

import mlflow
import pandas as pd
from pyod.models.knn import KNN
from pyod.utils.data import generate_data
from sklearn.metrics import roc_auc_score

import mlflavors

ARTIFACT_PATH = "model"

with mlflow.start_run() as run:
    contamination = 0.1  # percentage of outliers
    n_train = 200  # number of training points
    n_test = 100  # number of testing points

    X_train, X_test, _, y_test = generate_data(
        n_train=n_train, n_test=n_test, contamination=contamination
    )

    # Train kNN detector
    clf = KNN()
    clf.fit(X_train)

    # Evaluate model
    y_test_scores = clf.decision_function(X_test)

    metrics = {
        "roc": roc_auc_score(y_test, y_test_scores),
    }

    print(f"Metrics: \n{json.dumps(metrics, indent=2)}")

    # Log metrics
    mlflow.log_metrics(metrics)

    # Log model using pickle serialization (default).
    mlflavors.pyod.log_model(
        pyod_model=clf,
        artifact_path=ARTIFACT_PATH,
        serialization_format="pickle",
    )
    model_uri = mlflow.get_artifact_uri(ARTIFACT_PATH)

# Print the run id, which is used below for serving the model to a local REST API endpoint
print(f"\nMLflow run id:\n{run.info.run_id}")

Make a prediction loading the model from MLflow in native format:

loaded_model = mlflavors.pyod.load_model(model_uri=model_uri)
print(loaded_model.decision_function(X_test))

Make a prediction loading the model from MLflow in pyfunc format:

loaded_pyfunc = mlflavors.pyod.pyfunc.load_model(model_uri=model_uri)

# Create configuration DataFrame
predict_conf = pd.DataFrame(
    [
        {
            "X": X_test,
            "predict_method": "decision_function",
        }
    ]
)

print(loaded_pyfunc.predict(predict_conf)[0])

To serve the model to a local REST API endpoint, run the command below, substituting the run id printed above:

mlflow models serve -m runs:/<run_id>/model --env-manager local --host 127.0.0.1

Open a new terminal and run the model scoring script below to request a prediction from the served model:

import pandas as pd
import requests
from pyod.utils.data import generate_data

contamination = 0.1  # percentage of outliers
n_train = 200  # number of training points
n_test = 100  # number of testing points

_, X_test, _, _ = generate_data(
    n_train=n_train, n_test=n_test, contamination=contamination
)

# Define local host and endpoint url
host = "127.0.0.1"
url = f"http://{host}:5000/invocations"

# Convert to list for JSON serialization
X_test_list = X_test.tolist()

# Create configuration DataFrame
predict_conf = pd.DataFrame(
    [
        {
            "X": X_test_list,
            "predict_method": "decision_function",
        }
    ]
)

# Create dictionary with pandas DataFrame in the split orientation
json_data = {"dataframe_split": predict_conf.to_dict(orient="split")}

# Score model
response = requests.post(url, json=json_data)
print(response.json())

Contributing

Contributions from the community are welcome; I will be happy to support the inclusion and development of new features and flavors. To report a bug or request a new feature, please open a GitHub issue.

Versioning

Versions and changes are documented in the changelog.

Development

To set up your local development environment, create a virtual environment, for example:

$ conda create -n mlflavors-dev python=3.9
$ source activate mlflavors-dev

Install the project locally:

$ python -m pip install --upgrade pip
$ pip install -e ".[dev]"

Install pre-commit hooks:

$ pre-commit install

Run tests:

$ pytest

Build Sphinx docs:

$ cd docs
$ make html

