Client for Red Hat OpenDataHub Model Registry
Model Registry Python Client
This library provides a high-level interface for interacting with a model registry server.
Basic usage
```python
from model_registry import ModelRegistry

registry = ModelRegistry(server_address="server-address", port=9090, author="author")

model = registry.register_model(
    "my-model",  # model name
    "s3://path/to/model",  # model URI
    version="2.0.0",
    description="lorem ipsum",
    model_format_name="onnx",
    model_format_version="1",
    storage_key="aws-connection-path",
    storage_path="path/to/model",
    metadata={
        # values can be one of the following types
        "int_key": 1,
        "bool_key": False,
        "float_key": 3.14,
        "str_key": "str_value",
    },
)

model = registry.get_registered_model("my-model")

version = registry.get_model_version("my-model", "2.0.0")

artifact = registry.get_model_artifact("my-model", "2.0.0")
```
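The calls above return plain data objects describing the registered entities. A minimal sketch of inspecting them follows; the attribute names used here (`name`, `id`, `uri`) are assumptions about the returned types, so check the client's pydoc for your version:

```python
# Assumed attributes; verify against the client's pydoc.
print(model.name, model.id)  # registered model name and server-assigned id
print(version.name)          # version string, e.g. "2.0.0"
print(artifact.uri)          # model URI, e.g. "s3://path/to/model"
```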
Default values for metadata
If not supplied, the metadata values default to a predefined set of conventional values. See the technical documentation in the client's pydoc for details.
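For instance, `metadata` can be omitted entirely. A minimal sketch reusing the `registry` from above, with an illustrative model name and URI:

```python
# No metadata supplied: the client falls back to its predefined conventional values.
model = registry.register_model(
    "my-other-model",  # illustrative model name
    "s3://path/to/other/model",  # illustrative model URI
    version="1.0.0",
    model_format_name="onnx",
    model_format_version="1",
)
```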
Importing from Hugging Face Hub
To import models from Hugging Face Hub, start by installing the `huggingface-hub` package, either directly or as an extra (available as `model-registry[hf]`).
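For example, with pip:

```sh
pip install "model-registry[hf]"
```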
Models can be imported with:

```python
hf_model = registry.register_hf_model(
    "hf-namespace/hf-model",  # HF repo
    "relative/path/to/model/file.onnx",
    version="1.2.3",
    model_name="my-model",
    description="lorem ipsum",
    model_format_name="onnx",
    model_format_version="1",
)
```
There are caveats to be noted when using this method:

- It's only possible to import a single model file per Hugging Face Hub repo right now.
- If the model you want to import is in a global namespace, you should provide an author, e.g.

```python
hf_model = registry.register_hf_model(
    "gpt2",  # this model implicitly has no author
    "onnx/decoder_model.onnx",
    author="OpenAI",  # defaults to unknown in the absence of an author
    version="1.0.0",
    description="gpt-2 model",
    model_format_name="onnx",
    model_format_version="1",
)
```
Development
Common tasks, such as building documentation and running tests, can be executed using `nox` sessions. Use `nox -l` to list sessions, and execute them using `nox -s [session]`.
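A typical flow might look like this; the session name below is illustrative, so use one of the names `nox -l` actually reports:

```sh
nox -l        # list the available sessions
nox -s tests  # run a single session by name (illustrative name)
```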
Running Locally on Mac M1 or M2 (arm64 architecture)
If you want to run tests locally, you will need to set up a colima development environment using the instructions here.

You will also have to change the package source to one compatible with the arm64 architecture. You can do this by uncommenting line 14 or 15 in the pyproject.toml file. After uncommenting the line, run:

```sh
poetry lock
```
Use the following commands to directly run the tests with individual test output. Alternatively, you can use the nox session commands above.

```sh
poetry install
poetry run pytest -v
```