

Project description

model-store-interface

Table of Contents

  1. Upload a Federated Learning Model to the Federated Platform
  2. Features provided by the package
  3. Directory Structure and File Descriptions
  4. Prerequisites
  5. Installation
  6. Walkthrough: How to Implement a Federated Model
  7. Deployment of the package

Upload a Federated Learning Model to the Federated Platform

Federated Learning Model

This library provides utilities for creating, managing, and registering federated learning models to the Federated Platform. It encapsulates a local learner and an aggregator into a single FederatedModel object and provides a function to register the model in a Federated Platform Model Catalogue with appropriate credentials and metadata.

The package uses the following libraries internally:

  • MLflow: for model tracking and the model registry.
  • Flower: the underlying federated learning framework.

The user needs to work with objects originating from these libraries to upload a Federated Model.

Features provided by the package

  • Create a custom FederatedModel:

    • Create a local learner: an ML model that will be executed on edge nodes, with custom training, evaluation, and parameter management methods.
    • Define and integrate your custom aggregation strategy; a default implementation of plain averaging (DefaultAggregator) is provided for both parameters and metrics.
  • Upload it to the Model Catalogue of the Federated Platform:

    • Log the FL Model and its metadata to the Federated Platform using the submit_fl_model method.

Directory Structure and File Descriptions

This repository contains the core components and protocols for uploading a model to the Federated Platform. They are stored in the src/model_store_interface directory; a detailed description can be found in README.md.


Prerequisites

  • mypy: For static type checking and ensuring compatibility with the protocols.
  • uv: For development purposes. You can install it by following the official uv installation instructions.

Installation

Follow the steps below to install and set up the uploading environment:

  1. Create a virtual environment with Python version 3.11.* and install the package in it with the following command:

    pip install --index-url https://pypi.synthema.rid-intrasoft.eu/simple model-store-interface[edge]
    

    You will be asked to provide a username and password, which will be given to users with access permission. (For now, use the dev user to access the private PyPI server.)

  2. Go to the directory from which you want to upload the model and run this command to initialize it (pay attention to files that might get overwritten):

    msi init
    

    A src folder will be created, where the dependency files must be stored; an example.py script will be created in the main directory with an example of how to upload the model; and a README.md file will describe how to use the package functionalities. These files come with clear documentation on how to define your local learner and aggregation strategy, and how to log the model to the Platform Model Catalogue.
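
Assuming the defaults described above, the initialized directory should then contain something like the following (illustrative layout based on the files msi init creates):

```
your-model-dir/
├── src/        # dependency files for the model go here
├── example.py  # example script showing how to upload the model
└── README.md   # description of the package functionalities
```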


Walkthrough: How to Implement a Federated Model

Step 1: Define Your Local Learner

Local Learner

To create a custom local learner, implement your model according to the LLProtocol. Your model class should have the following structure:

Methods

  • prepare_data(data: pd.DataFrame) -> None:

    • Purpose: Prepares the input data for training or evaluation.
    • Arguments:
      • data: A pandas DataFrame containing the input data.
    • Returns: None.
  • train_round() -> flwr.common.MetricsRecord:

    • Purpose: Performs the training process for the local learner.
    • Arguments: None.
    • Returns: A flwr.common.MetricsRecord containing metrics collected during training.
  • get_parameters() -> flwr.common.ParametersRecord:

    • Purpose: Retrieves the model's current parameters for aggregation.
    • Arguments: None.
    • Returns: A flwr.common.ParametersRecord representing the current model parameters.
  • set_parameters(parameters: flwr.common.ParametersRecord) -> None:

    • Purpose: Updates the model's parameters with the provided values.
    • Arguments:
      • parameters: A flwr.common.ParametersRecord containing the parameters to be set.
    • Returns: None.
  • evaluate() -> flwr.common.MetricsRecord:

    • Purpose: Evaluates the model's performance on validation or test data.
    • Arguments: None.
    • Returns: A flwr.common.MetricsRecord containing metrics from the evaluation.

NB: Any dependency needed alongside the model must be stored inside the src/ directory and referenced from there.
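The five methods above can be put together in a minimal runnable sketch. Note the stand-ins: plain dicts replace flwr.common.MetricsRecord / ParametersRecord, and a list of floats replaces the pandas DataFrame, so this is only a shape reference, not a working federated model:

```python
class MeanLocalLearner:
    """Toy learner: 'training' estimates the mean of the prepared data."""

    def __init__(self) -> None:
        self.mean = 0.0
        self.values: list[float] = []

    def prepare_data(self, data: list[float]) -> None:
        # Real signature: prepare_data(data: pd.DataFrame) -> None
        self.values = list(data)

    def train_round(self) -> dict:
        # Real return type: flwr.common.MetricsRecord
        self.mean = sum(self.values) / len(self.values)
        mse = sum((v - self.mean) ** 2 for v in self.values) / len(self.values)
        return {"train_loss": mse}

    def get_parameters(self) -> dict:
        # Real return type: flwr.common.ParametersRecord
        return {"mean": self.mean}

    def set_parameters(self, parameters: dict) -> None:
        # Real argument type: flwr.common.ParametersRecord
        self.mean = parameters["mean"]

    def evaluate(self) -> dict:
        # Real return type: flwr.common.MetricsRecord
        mae = sum(abs(v - self.mean) for v in self.values) / len(self.values)
        return {"mae": mae}
```

A real implementation returns the corresponding flwr record types and accepts a pandas DataFrame, but the call order (prepare_data, then alternating train_round / get_parameters / set_parameters / evaluate) is the same.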


Step 2: Encapsulate the local learner into a function

A function must be created according to the LLFactoryProtocol. The function must contain the definition of the model class and return an instance of the model. It must also import all the packages the local learner needs, using the lazy-imports strategy. Here is an example:

# Encapsulating function
def create_local_learner():
    import torch  # import all the packages the local learner needs
    from torch import nn

    # Definition of the local learner as in step 1
    class CustomLocalLearner(nn.Module):
        '''Local learner according to LLProtocol'''
        ...

    return CustomLocalLearner()

Step 3: Define Your Aggregation Strategy

Aggregator

To implement a custom aggregation strategy, follow the AggProtocol. The strategy class should have the following structure:

Methods

  • aggregate_parameters(results: list[flwr.common.ParametersRecord], config: Optional[flwr.common.ConfigsRecord]=None) -> flwr.common.ParametersRecord:

    • Purpose: Aggregates a list of parameter records from multiple clients into a single set of parameters.
    • Arguments:
      • results: A list of flwr.common.ParametersRecord objects, each representing the parameters from a client.
      • config: An optional flwr.common.ConfigsRecord with configuration for the aggregation.
    • Returns: A flwr.common.ParametersRecord containing the aggregated parameters.
  • aggregate_metrics(results: list[flwr.common.MetricsRecord], config: Optional[flwr.common.ConfigsRecord]=None) -> flwr.common.MetricsRecord:

    • Purpose: Aggregates a list of metrics records from multiple clients into a single set of metrics.
    • Arguments:
      • results: A list of flwr.common.MetricsRecord objects, each representing the metrics from a client.
      • config: An optional flwr.common.ConfigsRecord with configuration for the aggregation.
    • Returns: A flwr.common.MetricsRecord containing the aggregated metrics.
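As a shape reference, a plain-averaging strategy similar in spirit to the package's DefaultAggregator can be sketched with plain dicts standing in for the flwr record types (the real implementation works on ParametersRecord / MetricsRecord objects):

```python
class AveragingAggregatorSketch:
    """Illustrative aggregator: averages numeric entries key by key."""

    def aggregate_parameters(self, results, config=None):
        # results: list of dicts standing in for flwr.common.ParametersRecord
        return {k: sum(r[k] for r in results) / len(results) for k in results[0]}

    def aggregate_metrics(self, results, config=None):
        # results: list of dicts standing in for flwr.common.MetricsRecord
        return {k: sum(r[k] for r in results) / len(results) for k in results[0]}
```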

Step 4: Encapsulate the aggregator into a function

As for the local learner, a function must be created according to the AggFactoryProtocol. The function must contain the definition of the aggregator class and return an instance of the aggregator. It must also import all the packages the class needs, using the lazy-imports strategy. Here is an example:

# Encapsulating function
def create_aggregator():
    import numpy as np  # import all the packages the aggregator needs

    # Definition of the aggregator as in step 3
    class CustomAggregator:
        '''Aggregator according to AggProtocol'''
        ...

    return CustomAggregator()

Step 5: Create the FederatedModel to include both local learner and aggregator

The local learner and the aggregator must be included in the same FederatedModel class. The model-store-interface package provides the FederatedModel class, which receives as arguments the function creating the local learner and the function creating the aggregator, together with their respective names. If no aggregation strategy is provided, the model defaults to a plain-averaging strategy for both parameters and metrics; if the local learner is not provided either, the model falls back to the default local learner shown in example.py. Here is an example of how to set up the FederatedModel:

from model_store_interface import FederatedModel

# Define your local learner and aggregator
def create_local_learner():
    '''Create local learner according to LLProtocol'''
    return CustomLocalLearner()
    

def create_aggregator():
    '''Create aggregator according to AggProtocol'''
    return CustomAggregator()

# Create the FederatedModel
federated_model = FederatedModel(create_local_learner=create_local_learner,
                                 model_name="your_model_name",
                                 create_aggregator=create_aggregator,
                                 aggregator_name="your_aggregator_name")
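How the two factories interact during training can be illustrated with a stdlib-only simulation of a single federated round. All class names here are hypothetical stand-ins (dicts replace the flwr record types), and the platform performs this orchestration itself; the sketch only shows why the factories must return protocol-conforming instances:

```python
import statistics


def create_local_learner():
    # Lazy imports omitted here because only the stdlib is used
    class MeanLearner:
        def __init__(self):
            self.mu = 0.0
            self.data = []

        def prepare_data(self, data):
            self.data = list(data)

        def train_round(self):
            self.mu = statistics.mean(self.data)
            return {"n_samples": len(self.data)}

        def get_parameters(self):
            return {"mu": self.mu}

        def set_parameters(self, parameters):
            self.mu = parameters["mu"]

        def evaluate(self):
            return {"mae": statistics.mean(abs(x - self.mu) for x in self.data)}

    return MeanLearner()


def create_aggregator():
    class Averager:
        def aggregate_parameters(self, results, config=None):
            return {k: statistics.mean(r[k] for r in results) for k in results[0]}

        def aggregate_metrics(self, results, config=None):
            return {k: statistics.mean(r[k] for r in results) for k in results[0]}

    return Averager()


# One simulated round with two edge nodes
clients = [create_local_learner(), create_local_learner()]
clients[0].prepare_data([1.0, 2.0])
clients[1].prepare_data([3.0, 5.0])
round_metrics = [c.train_round() for c in clients]
aggregator = create_aggregator()
global_params = aggregator.aggregate_parameters([c.get_parameters() for c in clients])
for c in clients:
    c.set_parameters(global_params)  # broadcast the aggregated parameters
```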

NB: Make sure your model and aggregation strategy are compatible with static type checking tools like MyPy. This will help catch any issues related to the implementation of the protocols.
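Structural compatibility of the kind MyPy verifies can also be spot-checked at runtime with typing.Protocol. LLProtocolSketch below is a hypothetical, reduced stand-in for the package's LLProtocol (the real protocol ships with the package and carries full flwr type annotations):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class LLProtocolSketch(Protocol):
    """Hypothetical reduced stand-in for the package's LLProtocol."""

    def prepare_data(self, data) -> None: ...
    def train_round(self): ...
    def get_parameters(self): ...
    def set_parameters(self, parameters) -> None: ...
    def evaluate(self): ...


class GoodLearner:
    def prepare_data(self, data) -> None:
        self.data = data

    def train_round(self):
        return {}

    def get_parameters(self):
        return {}

    def set_parameters(self, parameters) -> None:
        pass

    def evaluate(self):
        return {}


# isinstance() only checks that the methods exist; running mypy on your
# module additionally verifies the full argument and return types.
assert isinstance(GoodLearner(), LLProtocolSketch)
```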

Step 6: Submit the model to the Platform Model Catalogue

Upload the model with the submit_fl_model function provided by the package. To successfully upload the model, the user must provide the platform URL to upload to, a valid username and password, the name of the experiment (if it does not already exist, a new experiment is created with that name), and some tags related to the model being uploaded. Here is an example:

from model_store_interface import submit_fl_model

# Submit the FederatedModel to the Platform
submit_fl_model(federated_model,
                platform_url="platform_model_registry_url",
                username="your_username",
                password="your_password",
                experiment_name="your_experiment_name",
                disease="your_disease",  # The use case the model is used for ("AML" or "SCD")
                trained=False)  # Whether the local learner is trained or not

Deployment of the package

To deploy and make changes to the package functionalities, you need to have uv installed. You can install it by following the official uv installation instructions.

Once you have uv installed, follow these steps:

  1. Clone the repository to your local machine:

    git clone https://github.com/synthema-project/app-model_store-interface.git
    cd model-store-interface
    
  2. Sync the repository using uv sync:

    uv sync
    

This command creates or updates the project's virtual environment from the lockfile, installing everything needed to work on the package. After making your changes, you can propose them by opening a pull request on the GitHub repository.

