Register, retrieve, and get metadata from machine learning models.

ml-registry

Easily register, manage, and track machine learning components such as PyTorch models and optimizers. You can retrieve component metadata, inspect signatures, and ensure instance integrity through deterministic hashes.

Introduction

Tracking machine learning components can be challenging, especially when you have to name them, track their parameters, and ensure the instance you're using matches the one you trained. This library addresses these issues by providing a simple way to register, manage, and track machine learning components, such as models, optimizers, and datasets. It uses cryptographic hashes to create unique identifiers for components based on their names, signatures, and parameters.

Installation

Install the package with pip:

pip install mlregistry

Using conda:

conda install pip
pip install mlregistry

Example

Suppose you have a Perceptron model built with PyTorch. To start using the registry, import the Registry class and register the class you want to track:

from models import Perceptron
from mlregistry import Registry

# Register components
Registry.register(Perceptron)
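For reference, a minimal Perceptron compatible with the constructor arguments used below might look like this (a hypothetical sketch; your actual model in `models.py` may differ):

```python
import torch
import torch.nn as nn

class Perceptron(nn.Module):
    """A small multilayer perceptron with dropout."""

    def __init__(self, input_size: int, hidden_size: int, output_size: int,
                 p: float = 0.5, bias: bool = True):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(input_size, hidden_size, bias=bias),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(hidden_size, output_size, bias=bias),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)
```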

The Registry class injects a metadata factory into the Perceptron model. This metadata includes:

  • Model name: Used to retrieve the model instance from the registry and recognize it during serialization.
  • Unique hash: Useful for identifying the model instance locally, based on the model’s name, signature, and constructor parameters.
  • Arguments: A tuple with positional and keyword arguments for reconstructing the model instance.
  • Signature: Includes the model's type annotations, useful for exposing the model's configuration and usage in request-response APIs.
With the class registered, you can inspect the metadata, hash, and signature of an instance:

from mlregistry import get_metadata, get_hash, get_signature

registry = Registry()  # a registry instance can be created before or after registering classes
perceptron = Perceptron(784, 256, 10, p=0.5, bias=True)

# Get the metadata, hash, and signature of the model instance
model_hash = get_hash(perceptron)
print(model_hash)  # e.g., "1a79a4d60de6718e8e5b326e338ae533"

metadata = get_metadata(perceptron)
print(metadata.name)    # Perceptron
print(metadata.args)    # (784, 256, 10)
print(metadata.kwargs)  # {'p': 0.5, 'bias': True}

signature = get_signature(perceptron)
print(signature)  # {'input_size': int, 'hidden_size': int, 'output_size': int, 'p': float, 'bias': bool}
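The library's exact hashing scheme isn't shown here, but the idea of a deterministic identifier derived from a component's name and constructor parameters can be sketched with the standard library alone (`component_hash` is a hypothetical stand-in, not part of mlregistry):

```python
import hashlib

def component_hash(name: str, args: tuple, kwargs: dict) -> str:
    """Derive a stable identifier from a component's name and parameters.

    Sorting the keyword arguments makes the hash independent of the
    order in which they were passed.
    """
    payload = f"{name}:{args}:{sorted(kwargs.items())}"
    return hashlib.md5(payload.encode()).hexdigest()
```

Because the input string is fully determined by the name and parameters, two instances constructed with the same arguments always map to the same identifier.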

You can retrieve the model type from the registry:

model_type = registry.get('Perceptron')
model_instance = model_type(input_size=784, hidden_size=256, output_size=10, p=0.5, bias=True)

assert isinstance(model_instance, Perceptron)

This works with other components as well, like optimizers and datasets. For complex setups, consider creating a repository class to manage components and dependencies, simplifying pipeline persistence.

from torch.nn import Module, CrossEntropyLoss
from torch.optim import Optimizer, Adam
from torch.utils.data import Dataset
from torchvision.datasets import MNIST

class Repository:
    models = Registry[Module]()
    criterions = Registry[Module]()
    optimizers = Registry[Optimizer](excluded_positions=[0], exclude_parameters={'params'})
    datasets = Registry[Dataset](excluded_positions=[0], exclude_parameters={'root', 'download'})

Repository.models.register(Perceptron)
Repository.optimizers.register(Adam)
Repository.datasets.register(MNIST)

model = Perceptron(784, 256, 10, p=0.5, bias=True)
criterion = CrossEntropyLoss()
optimizer = Adam(model.parameters(), lr=1e-3)
dataset = MNIST('data', train=True, download=True)

dataset_metadata = get_metadata(dataset)
print(dataset_metadata)  # Serialize dataset metadata

optimizer_metadata = get_metadata(optimizer)
print(optimizer_metadata)  # Excluded parameters like 'params' or the first positional argument won’t appear in metadata

This approach enables component tracking and serialization without worrying about naming conflicts or manual parameter tracking.
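As a sketch of what pipeline persistence could look like, serialized metadata plus a registry lookup is enough to rebuild a component (`rebuild` is a hypothetical helper, not part of mlregistry; the demo uses a plain dict in place of a Registry, since both expose a `get` method by name):

```python
import json

def rebuild(registry, record_json: str):
    """Reconstruct a registered component from its serialized metadata."""
    record = json.loads(record_json)
    cls = registry.get(record["name"])          # look the class up by name
    return cls(*record["args"], **record["kwargs"])  # re-apply stored parameters

# Demo with a plain dict standing in for the registry
class Point:
    def __init__(self, x: int, y: int):
        self.x, self.y = x, y

record = json.dumps({"name": "Point", "args": [1, 2], "kwargs": {}})
point = rebuild({"Point": Point}, record)
assert isinstance(point, Point)
```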
