Version and deploy your models following GitOps principles


MLEM helps you package and deploy machine learning models. It saves ML models in a standard format that can be used in a variety of production scenarios such as real-time REST serving or batch processing.

  • Run your ML models anywhere: Wrap models as a Python package or Docker Image, or deploy them to Heroku (SageMaker, Kubernetes, and more platforms coming soon). Switch between platforms transparently, with a single command.

  • Model metadata captured in YAML automatically: Python requirements and input data schema are written automatically to a human-readable, deployment-ready format. Use the same metafile with any ML framework.

  • Stick to your training workflow: MLEM doesn't ask you to rewrite model training code. Add just two lines around your Python code: one to import the library and one to save the model.

  • Developer-first experience: Use the CLI when you feel like DevOps, or the API if you feel like a developer.

Why is MLEM special?

The main reason to use MLEM instead of other tools is to adopt a GitOps approach to manage model lifecycles.

  • Git as a single source of truth: MLEM writes model metadata to a plain text file that can be versioned in Git along with code. This enables GitFlow and other software engineering best practices.

  • Unify model and software deployment: Release models using the same processes used for software updates (branching, pull requests, etc.).

  • Reuse existing Git infrastructure: Use familiar hosting like GitHub or GitLab for model management, instead of maintaining separate services.

  • UNIX philosophy: MLEM is a modular tool that solves one problem very well. It integrates well into a larger toolset from Iterative.ai, such as DVC and CML.

Usage

This is a quick walkthrough showcasing the deployment and export functionality of MLEM.

Please read the Get Started guide for the full version.

Installation

MLEM requires Python 3.

$ python -m pip install mlem

To install the pre-release version:

$ python -m pip install git+https://github.com/iterative/mlem
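
To quickly check that the installation worked, you can print the CLI's help text:

$ mlem --help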

Saving the model

# train.py
from mlem.api import save
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

def main():
    data, y = load_iris(return_X_y=True, as_frame=True)
    rf = RandomForestClassifier(
        n_jobs=2,
        random_state=42,
    )
    rf.fit(data, y)

    # save the model; sample_data lets MLEM infer the input data schema
    save(
        rf,
        "rf",
        sample_data=data,
        labels=["random-forest", "classifier"],
        description="Random Forest Classifier",
    )

if __name__ == "__main__":
    main()

Check out what we have:

$ ls
rf
rf.mlem
$ cat rf.mlem
artifacts:
  data:
    hash: ea4f1bf769414fdacc2075ef9de73be5
    size: 163651
    uri: rf
description: Random Forest Classifier
labels:
- random-forest
- classifier
model_type:
  methods:
    predict:
      args:
      - name: data
        type_:
          columns:
          - sepal length (cm)
          - sepal width (cm)
          - petal length (cm)
          - petal width (cm)
          dtypes:
          - float64
          - float64
          - float64
          - float64
          index_cols: []
          type: dataframe
      name: predict
      returns:
        dtype: int64
        shape:
        - null
        type: ndarray
    predict_proba:
      args:
      - name: data
        type_:
          columns:
          - sepal length (cm)
          - sepal width (cm)
          - petal length (cm)
          - petal width (cm)
          dtypes:
          - float64
          - float64
          - float64
          - float64
          index_cols: []
          type: dataframe
      name: predict_proba
      returns:
        dtype: float64
        shape:
        - null
        - 3
        type: ndarray
    sklearn_predict:
      args:
      - name: X
        type_:
          columns:
          - sepal length (cm)
          - sepal width (cm)
          - petal length (cm)
          - petal width (cm)
          dtypes:
          - float64
          - float64
          - float64
          - float64
          index_cols: []
          type: dataframe
      name: predict
      returns:
        dtype: int64
        shape:
        - null
        type: ndarray
    sklearn_predict_proba:
      args:
      - name: X
        type_:
          columns:
          - sepal length (cm)
          - sepal width (cm)
          - petal length (cm)
          - petal width (cm)
          dtypes:
          - float64
          - float64
          - float64
          - float64
          index_cols: []
          type: dataframe
      name: predict_proba
      returns:
        dtype: float64
        shape:
        - null
        - 3
        type: ndarray
  type: sklearn
object_type: model
requirements:
- module: sklearn
  version: 1.0.2
- module: pandas
  version: 1.4.1
- module: numpy
  version: 1.22.3
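
Loading the model back

The saved model can be loaded back through the Python API and used like the original object. A minimal sketch using mlem.api.load, which returns the underlying scikit-learn model:

# predict.py
import pandas as pd
from mlem.api import load

def main():
    rf = load("rf")  # the RandomForestClassifier saved by train.py
    sample = pd.DataFrame(
        [[5.1, 3.5, 1.4, 0.2]],
        columns=[
            "sepal length (cm)",
            "sepal width (cm)",
            "petal length (cm)",
            "petal width (cm)",
        ],
    )
    print(rf.predict(sample))

if __name__ == "__main__":
    main()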

Deploying the model

If you want to follow this quick start, you'll need to sign up on https://heroku.com, create an API key, and populate the HEROKU_API_KEY environment variable.

First, create an environment to deploy your model:

$ mlem declare env heroku staging
💾 Saving env to staging.mlem

Now we can deploy the model with mlem deployment run (you need to use a different app_name, since it's going to be published on https://herokuapp.com):

$ mlem deployment run mydeploy -m rf -t staging -c app_name=mlem-quick-start
⏳️ Loading deployment from .mlem/deployment/myservice.mlem
🔗 Loading link to .mlem/env/staging.mlem
🔗 Loading link to .mlem/model/rf.mlem
💾 Updating deployment at .mlem/deployment/myservice.mlem
🏛 Creating Heroku App example-mlem-get-started
💾 Updating deployment at .mlem/deployment/myservice.mlem
🛠 Creating docker image for heroku
  💼 Adding model files...
  🛠 Generating dockerfile...
  💼 Adding sources...
  💼 Generating requirements file...
  🛠 Building docker image registry.heroku.com/example-mlem-get-started/web...
    Built docker image registry.heroku.com/example-mlem-get-started/web
  🔼 Pushed image registry.heroku.com/example-mlem-get-started/web to remote registry at host registry.heroku.com
💾 Updating deployment at .mlem/deployment/myservice.mlem
🛠 Releasing app my-mlem-service formation
💾 Updating deployment at .mlem/deployment/myservice.mlem
✅  Service example-mlem-get-started is up. You can check it out at https://mlem-quick-start.herokuapp.com/
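
The deployed app wraps the model in a FastAPI server, so each model method becomes an HTTP endpoint (interactive docs are served at the app's /docs path). A hedged sketch of calling predict with requests; the payload below assumes the default dataframe request format, so verify it against the generated OpenAPI spec for your app:

import requests

# hypothetical URL taken from the deployment output above
url = "https://mlem-quick-start.herokuapp.com/predict"
payload = {
    "data": {
        "values": [
            {
                "sepal length (cm)": 5.1,
                "sepal width (cm)": 3.5,
                "petal length (cm)": 1.4,
                "petal width (cm)": 0.2,
            }
        ]
    }
}

response = requests.post(url, json=payload)
print(response.json())  # a list of predicted class indices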

Exporting the model

You can easily export the model to a different format using mlem build:

$ mlem build rf docker -c server.type=fastapi -c image.name=sklearn-model
⏳️ Loading model from rf.mlem
🛠 Building MLEM wheel file...
💼 Adding model files...
🛠 Generating dockerfile...
💼 Adding sources...
💼 Generating requirements file...
🛠 Building docker image sklearn-model:latest...
✅  Built docker image sklearn-model:latest
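
The resulting image bundles the model with a FastAPI server, so it can run anywhere Docker is available. For example (assuming the server listens on 8080, the MLEM FastAPI default; adjust the port mapping if your configuration differs):

$ docker run -p 8080:8080 sklearn-model:latest

Once it is up, you can query it the same way as the Heroku deployment above, just against http://localhost:8080.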

Contributing

Contributions are welcome! Please see our Contributing Guide for more details. Thanks to all our contributors!

Copyright

This project is distributed under the Apache license version 2.0 (see the LICENSE file in the project root).

By submitting a pull request to this project, you agree to license your contribution under the Apache license version 2.0 to this project.

