# mlctl

mlctl is the control plane for MLOps. It provides a Command Line Interface (CLI) and a Python Software Development Kit (SDK) for key ML lifecycle operations, allowing operations such as training and deployment to be controlled via a simple-to-use command line interface. Additionally, mlctl provides an SDK for use in a notebook environment and employs an extensible mechanism for plugging in various back-end providers, such as SageMaker.
The following ML lifecycle operations are currently supported via mlctl:

- train - operations related to model training
- host - operations related to hosting a model for online inference
- batch inference - operations for running model inference in a batch method
## Getting Started

### Installation
- (Optional) Create a new virtual environment for mlctl:

  ```
  pip install virtualenv
  virtualenv ~/envs/mlctl
  source ~/envs/mlctl/bin/activate
  ```

- Install mlctl:

  ```
  pip install mlctl
  ```

- Upgrade an existing version:

  ```
  pip install --upgrade mlctl
  ```
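To confirm the installation, you can print the top-level help (this uses only the documented --help flag):

```
mlctl --help
```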
## Usage

### Optional Setup

mlctl requires users to specify the plugin and a profile/credentials file for authenticating operations. These values can either be stored as environment variables, as shown below, or passed as command line options. Use --help for more details.
```
export PLUGIN=
export PROFILE=
```
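For example, the variables might be set as follows; the plugin name and profile path here are hypothetical, so substitute the values for your own back-end provider:

```
# Hypothetical values - replace with your plugin name and credentials profile
export PLUGIN=sagemaker
export PROFILE=~/.aws/credentials
```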
### Commands

mlctl CLI commands have the following structure:

```
mlctl <command> <subcommand> [OPTIONS]
```

To view help documentation, run the following:

```
mlctl --help
mlctl <command> --help
mlctl <command> <subcommand> --help
```
#### Initialize ML Model

```
mlctl init [OPTIONS]
```

| Options | Description |
|---|---|
| template or -t | (optional) GitHub location of the project template. |
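For example, a project could be scaffolded from a template repository as sketched below; the URL is a placeholder, not a real template:

```
# The template URL is hypothetical - point -t at your own template repo
mlctl init -t https://github.com/your-org/your-ml-template
```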
#### Training Commands

```
mlctl train <subcommand> [OPTIONS]
```

| Subcommand | Description |
|---|---|
| start | train a model |
| stop | stop an ongoing training job |
| info | get training job information |
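A typical training lifecycle strings these subcommands together, as sketched below; the options each subcommand takes depend on the configured plugin, so check `mlctl train <subcommand> --help` first:

```
mlctl train start   # submit a training job (plugin-specific options omitted)
mlctl train info    # check the status of the job
mlctl train stop    # cancel the job if needed
```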
#### Hosting Commands

```
mlctl hosting <subcommand> [OPTIONS]
```

| Subcommand | Description |
|---|---|
| create | create a model from a trained model artifact |
| deploy | deploy a model to create an endpoint for inference |
| undeploy | undeploy a model |
| info | get endpoint information |
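These subcommands cover the hosting lifecycle end to end, sketched here with plugin-specific options omitted for brevity:

```
mlctl hosting create    # register a model from a trained artifact
mlctl hosting deploy    # stand up an inference endpoint
mlctl hosting info      # inspect the endpoint
mlctl hosting undeploy  # tear the endpoint down
```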
#### Batch Inference Commands

```
mlctl batch <subcommand> [OPTIONS]
```

| Subcommand | Description |
|---|---|
| start | perform batch inference |
| stop | stop an ongoing batch inference |
| info | get batch inference information |
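Batch jobs mirror the training lifecycle; again, required options vary by plugin and are discoverable via --help:

```
mlctl batch start   # kick off a batch inference job
mlctl batch info    # monitor its progress
mlctl batch stop    # stop it early if necessary
```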
## Examples
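As an illustrative end-to-end flow (all subcommand options are omitted here because they depend on the configured plugin; this is a sketch, not a verbatim session):

```
# Assumes PLUGIN and PROFILE are exported as in Optional Setup above.
# Train a model, host it for online inference, then run a batch job.
mlctl train start
mlctl hosting create
mlctl hosting deploy
mlctl batch start
```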
## Contributing

For information on how to contribute to mlctl, please read through the contributing guidelines.