Client for Kubeflow Model Registry
Model Registry Python Client
This library provides a high-level interface for interacting with a model registry server.
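The client is published on PyPI as model-registry (the same distribution name used by the [hf] extra below), so it can be installed with:

pip install model-registry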
Basic usage
from model_registry import ModelRegistry
registry = ModelRegistry("server-address", author="Ada Lovelace") # Defaults to a secure connection via port 443
# registry = ModelRegistry("server-address", 1234, author="Ada Lovelace", is_secure=False) # To use MR without TLS
model = registry.register_model(
    "my-model",  # model name
    "https://storage-place.my-company.com",  # model URI
    version="2.0.0",
    description="lorem ipsum",
    model_format_name="onnx",
    model_format_version="1",
    storage_key="my-data-connection",
    storage_path="path/to/model",
    metadata={
        # values can be one of the following types
        "int_key": 1,
        "bool_key": False,
        "float_key": 3.14,
        "str_key": "str_value",
    }
)

model = registry.get_registered_model("my-model")
version = registry.get_model_version("my-model", "2.0.0")
artifact = registry.get_model_artifact("my-model", "2.0.0")
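The returned objects carry the fields supplied at registration; a minimal sketch of inspecting them (the attribute names below are assumptions mirroring the registration arguments, not a documented contract):

print(model.name)           # assumed attribute: the registered model's name
print(version.description)  # assumed attribute: "lorem ipsum"
print(artifact.uri)         # assumed attribute: the model URI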
Importing from S3
When registering models stored on S3-compatible object storage, you should use utils.s3_uri_from to build an unambiguous URI for your artifact.
from model_registry import ModelRegistry, utils
registry = ModelRegistry(server_address="server-address", port=9090, author="author")
model = registry.register_model(
    "my-model",  # model name
    uri=utils.s3_uri_from("path/to/model", "my-bucket"),
    version="2.0.0",
    description="lorem ipsum",
    model_format_name="onnx",
    model_format_version="1",
    storage_key="my-data-connection",
    metadata={
        # values can be one of the following types
        "int_key": 1,
        "bool_key": False,
        "float_key": 3.14,
        "str_key": "str_value",
    }
)
Importing from Hugging Face Hub
To import models from Hugging Face Hub, start by installing the huggingface-hub package, either directly or as an extra (available as model-registry[hf]).
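For example, using pip with the extra named above:

pip install "model-registry[hf]"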
Models can be imported with:

hf_model = registry.register_hf_model(
    "hf-namespace/hf-model",  # HF repo
    "relative/path/to/model/file.onnx",
    version="1.2.3",
    model_name="my-model",
    description="lorem ipsum",
    model_format_name="onnx",
    model_format_version="1",
)
There are caveats to be noted when using this method:
- It's only possible to import a single model file per Hugging Face Hub repo right now.
- If the model you want to import is in a global namespace, you should provide an author, e.g.

hf_model = registry.register_hf_model(
    "gpt2",  # this model implicitly has no author
    "onnx/decoder_model.onnx",
    author="OpenAI",  # defaults to unknown in the absence of an author
    version="1.0.0",
    description="gpt-2 model",
    model_format_name="onnx",
    model_format_version="1",
)
Development
Common tasks, such as building documentation and running tests, can be executed using nox sessions.
Use nox -l to list sessions, and execute them using nox -s [session].
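For example (session names vary by project; nox -l shows the ones actually defined):

nox -l        # list available sessions
nox -s tests  # run a session by name; "tests" is an assumed session name here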
Running Locally on Mac M1 or M2 (arm64 architecture)
If you want to run tests locally, you will need to set up a Colima development environment using the instructions here.
You will also have to change the package source to one compatible with the ARM64 architecture. This can be done by uncommenting line 14 or 15 in the pyproject.toml file. Run the following command after you have uncommented the line:
poetry lock
Use the following commands to run the tests directly, with per-test output. Alternatively, you can use the nox sessions described above.
poetry install
poetry run pytest -v