# mlctl

mlctl is the control plane for MLOps. It provides a Command Line Interface (CLI) and a Python Software Development Kit (SDK) that allow ML lifecycle operations, such as training and deployment, to be controlled via a simple-to-use command line. The SDK can also be used in a notebook environment, and an extensible plugin mechanism supports various back-end providers, such as SageMaker.
The following ML lifecycle operations are currently supported via mlctl:

- `train` - operations related to model training
- `host` - operations related to hosting a model for online inference
- `batch inference` - operations for running model inference in batch mode
## Getting Started

### Installation

- (Optional) Create a new virtual environment for mlctl:

  ```
  pip install virtualenv
  virtualenv ~/envs/mlctl
  source ~/envs/mlctl/bin/activate
  ```

- Install mlctl:

  ```
  pip install mlctl
  ```

- Upgrade an existing version:

  ```
  pip install --upgrade mlctl
  ```
## Usage

### Optional Setup

mlctl requires users to specify a plugin and a profile/credentials file for authenticating operations. These values can either be stored as environment variables, as shown below, or passed as command line options. Use `--help` for more details.

```
export PLUGIN=
export PROFILE=
```
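For example, one might point mlctl at the SageMaker plugin with a local credentials profile. The values below are illustrative assumptions, not documented defaults; substitute your own plugin name and profile:

```
# Hypothetical example values for illustration only
export PLUGIN=sagemaker
export PROFILE=default
```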
### Commands

mlctl CLI commands have the following structure:

```
mlctl <command> <subcommand> [OPTIONS]
```

To view help documentation, run the following:

```
mlctl --help
mlctl <command> --help
mlctl <command> <subcommand> --help
```
#### Initialize ML Model

```
mlctl init [OPTIONS]
```

| Option | Description |
|---|---|
| `--template` or `-t` | (optional) GitHub location of the project template. |
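As a sketch, a project might be initialized from a template repository like so (the URL is a placeholder, not a real template):

```
# Initialize a new ML project from a template repository (placeholder URL)
mlctl init -t https://github.com/example-org/mlctl-template
```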
#### Training Commands

```
mlctl train <subcommand> [OPTIONS]
```

| Subcommand | Description |
|---|---|
| `start` | train a model |
| `stop` | stop an ongoing training job |
| `info` | get training job information |
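A typical training workflow, assuming `PLUGIN` and `PROFILE` are already set as described above, might look like this sketch:

```
mlctl train start   # launch a training job on the configured back end
mlctl train info    # check the job's status
mlctl train stop    # stop the job early if needed
```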
#### Hosting Commands

```
mlctl hosting <subcommand> [OPTIONS]
```

| Subcommand | Description |
|---|---|
| `create` | create a model from a trained model artifact |
| `deploy` | deploy a model to create an endpoint for inference |
| `undeploy` | undeploy a model |
| `info` | get endpoint information |
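Putting these together, hosting a trained model could proceed as in the following sketch, using the subcommands from the table above:

```
mlctl hosting create    # register a model from the trained artifact
mlctl hosting deploy    # stand up an endpoint for online inference
mlctl hosting info      # get endpoint information
mlctl hosting undeploy  # tear the endpoint down when finished
```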
#### Batch Inference Commands

```
mlctl batch <subcommand> [OPTIONS]
```

| Subcommand | Description |
|---|---|
| `start` | perform batch inference |
| `stop` | stop an ongoing batch inference |
| `info` | get batch inference information |
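A batch inference run follows the same start/info/stop pattern, for example:

```
mlctl batch start  # kick off a batch inference job
mlctl batch info   # monitor its progress
mlctl batch stop   # stop an in-flight job if necessary
```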
## Examples
## Contributing

For information on how to contribute to mlctl, please read through the contributing guidelines.