Model Registry Python Client
This library provides a high level interface for interacting with a model registry server.
Alpha
This Kubeflow component has alpha status with limited support. See the Kubeflow versioning policies. The Kubeflow team is interested in your feedback about the usability of the feature.
Installation
In your Python environment, you can install the latest version of the Model Registry Python client with:
```shell
pip install --pre model-registry
```
Installing extras
Some capabilities of this Model Registry Python client, such as importing models from Hugging Face, require additional dependencies.
By installing an extra variant of this package, the additional dependencies are managed for you automatically, for instance:

```shell
pip install --pre "model-registry[hf]"
```
This step is not required if you have already installed the additional dependencies, for instance with:

```shell
pip install huggingface-hub
```
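If you are unsure whether the optional dependency is already present in your environment, a quick standard-library check (an illustrative sketch, not part of this client) is:

```python
import importlib.util

# True when huggingface-hub is importable, i.e. the "hf" extra
# (or the package itself) has been installed in this environment.
hf_available = importlib.util.find_spec("huggingface_hub") is not None
print(hf_available)
```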
Basic usage
Connecting to MR
You can connect to a secure Model Registry using the default constructor (recommended):
```python
from model_registry import ModelRegistry

registry = ModelRegistry("https://server-address", author="Ada Lovelace")  # defaults to a secure connection via port 443
```
Or you can set the `is_secure` flag to `False` to connect without TLS (not recommended):

```python
registry = ModelRegistry("http://server-address", 8080, author="Ada Lovelace", is_secure=False)  # insecure port set to 8080
```
Registering models
To register your first model, you can use the `register_model` method:
```python
model = registry.register_model(
    "my-model",  # model name
    "https://storage-place.my-company.com",  # model URI
    version="2.0.0",
    description="lorem ipsum",
    model_format_name="onnx",
    model_format_version="1",
    storage_key="my-data-connection",
    storage_path="path/to/model",
    metadata={
        # values can be one of the following types
        "int_key": 1,
        "bool_key": False,
        "float_key": 3.14,
        "str_key": "str_value",
    },
)
```
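The comment in the example above lists the scalar types accepted as metadata values. As a self-contained illustration (not part of the library), a pre-flight check for a metadata dict could look like:

```python
# Scalar types supported for metadata values, per the example above.
ALLOWED_TYPES = (bool, int, float, str)

def check_metadata(metadata: dict) -> dict:
    """Raise TypeError if any metadata value is not a supported scalar."""
    for key, value in metadata.items():
        if not isinstance(value, ALLOWED_TYPES):
            raise TypeError(
                f"metadata[{key!r}] has unsupported type {type(value).__name__}"
            )
    return metadata

check_metadata({"int_key": 1, "bool_key": False, "float_key": 3.14, "str_key": "str_value"})
```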
```python
model = registry.get_registered_model("my-model")
print(model)

version = registry.get_model_version("my-model", "2.0.0")
print(version)

artifact = registry.get_model_artifact("my-model", "2.0.0")
print(artifact)
```
You can also update your models:

```python
# a local change is not reflected on the registry until you push it
version.description = "Updated model version"

# push the update using
registry.update(version)
```
Importing from S3
When registering models stored on S3-compatible object storage, you should use `utils.s3_uri_from` to build an unambiguous URI for your artifact:
```python
from model_registry import utils

model = registry.register_model(
    "my-model",  # model name
    uri=utils.s3_uri_from("path/to/model", "my-bucket"),
    version="2.0.0",
    description="lorem ipsum",
    model_format_name="onnx",
    model_format_version="1",
    storage_key="my-data-connection",
    metadata={
        # values can be one of the following types
        "int_key": 1,
        "bool_key": False,
        "float_key": 3.14,
        "str_key": "str_value",
    },
)
```
Importing from Hugging Face Hub
To import models from Hugging Face Hub, start by installing the `huggingface-hub` package, either directly or as an extra (available as `model-registry[hf]`). See the "Installing extras" section above for more information.
Models can then be imported with:
```python
hf_model = registry.register_hf_model(
    "hf-namespace/hf-model",  # HF repo
    "relative/path/to/model/file.onnx",
    version="1.2.3",
    model_name="my-model",
    description="lorem ipsum",
    model_format_name="onnx",
    model_format_version="1",
)
```
There are caveats to be noted when using this method:
- It's only possible to import a single model file per Hugging Face Hub repo right now.
Listing models
To list models you can use:

```python
for model in registry.get_registered_models():
    ...  # your logic using the `model` loop variable here

# and versions associated with a model
for version in registry.get_model_versions("my-model"):
    ...  # your logic using the `version` loop variable here
```
Advanced usage note: you can also set the `page_size()` that you want the Pager to use when invoking the Model Registry backend. When using it as an iterator, it will automatically manage pages for you.
Implementation notes
The pager will manage pages for you in order to prevent infinite looping. Currently, the Model Registry backend treats model lists as a circular buffer, and will not end iteration for you.
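As a self-contained illustration (not the library's implementation) of how an iterator-style pager can end iteration on its own even when the backend never signals the end, consider this sketch:

```python
from typing import Callable, Iterator, List


def paged_iterator(fetch_page: Callable[[int, int], List[str]], page_size: int) -> Iterator[str]:
    """Yield items one at a time, fetching `page_size` items per backend call.

    Iteration ends when the backend returns a short page, which is how a
    client-side pager can avoid looping forever over a circular buffer.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        yield from page
        if len(page) < page_size:
            return
        offset += page_size


# Hypothetical backend serving 5 registered model names in pages.
models = ["model-a", "model-b", "model-c", "model-d", "model-e"]
fetch = lambda offset, size: models[offset:offset + size]

print(list(paged_iterator(fetch, page_size=2)))  # all 5 names, fetched over 3 calls
```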
Development
Common tasks, such as building documentation and running tests, can be executed using `nox` sessions.
Use `nox -l` to list sessions and execute them using `nox -s [session]`.
Alternatively, use `make install` to set up a local Python virtual environment with `poetry`.
To run the tests you will need `docker` (or equivalent) and the `compose` extension command.
This is necessary because the test suite manages a Model Registry server and an MLMD instance to ensure a clean state on each run.
You can use `make test` to execute `pytest`.
Running Locally on Mac M1 or M2 (arm64 architecture)
Check out our recommendations on setting up your docker engine on an ARM processor.