# Hafnia

Python tools for communication with the Hafnia platform.
The hafnia Python package is a collection of tools to create and run model training recipes on the Hafnia platform.

The package includes the following interfaces:

- `cli`: A Command Line Interface (CLI) to 1) configure/connect to Hafnia's Training-aaS and 2) create and launch recipe scripts.
- `hafnia`: A Python package with helper functions to load and interact with sample datasets and an experiment tracker (`HafniaLogger`).
## The Concept: Training as a Service (Training-aaS)
Training-aaS is the concept of training models on the Hafnia platform on large and hidden datasets. Hidden datasets are datasets that can be used for training but are not available for download or direct access.

This is a key feature of the Hafnia platform: a hidden dataset ensures data privacy and allows models to be trained compliantly and ethically by third parties (you).
The script2model approach is a Training-aaS concept, where you package your custom training
script as a training recipe and use the recipe to train models on the hidden datasets.
To support local development of a training recipe, we have introduced a sample dataset for each dataset available in the Hafnia data library. The sample dataset is a small and anonymized subset of the full dataset and available for download.
With the sample dataset, you can seamlessly switch between local development and Training-aaS. Locally, you can create, validate and debug your training recipe. The recipe is then launched with Training-aaS, where the recipe runs on the full dataset and can be scaled to run on multiple GPUs and instances if needed.
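This local-versus-cloud switch can be sketched in a few lines. The snippet below is purely illustrative: the `HAFNIA_CLOUD` environment variable and the paths are invented for this sketch, and the real switching happens inside the hafnia package itself, so your training script never needs this logic.

```python
import os

def resolve_dataset(name: str) -> str:
    """Illustrative only: serve the full dataset in the cloud, a sample locally.

    The HAFNIA_CLOUD variable and the path layout are invented for this
    sketch; hafnia's load_dataset handles the real switching internally.
    """
    if os.environ.get("HAFNIA_CLOUD") == "1":
        return f"/platform/datasets/{name}/full"
    return f".data/samples/{name}"

# Locally (HAFNIA_CLOUD unset) this resolves to the sample dataset
print(resolve_dataset("mnist"))
```

The point of the pattern is that the call site is identical in both environments; only the resolver's answer changes.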
## Getting started: Configuration
To get started with Hafnia:

1. Install `hafnia` with your favorite Python package manager. With pip:

   ```bash
   pip install hafnia
   ```

2. Sign in to the Hafnia Platform.

3. Create an API key for Training-aaS. For instructions, follow this guide. Copy the key and save it for later use.

4. From the terminal, configure your machine to access Hafnia:

   ```bash
   # Start configuration with
   hafnia configure

   # You are then prompted:
   Profile Name [default]: # Press [Enter] or select an optional name
   Hafnia API Key: # Pass your HAFNIA API key
   Hafnia Platform URL [https://api.mdi.milestonesys.com]: # Press [Enter]
   ```

5. Download `mnist` from the terminal to verify that your configuration is working:

   ```bash
   hafnia data download mnist --force
   ```
## Getting started: Loading dataset samples
With Hafnia configured on your local machine, it is now possible to download and explore a dataset sample with a Python script:
```python
from hafnia.data import load_dataset

dataset_splits = load_dataset("mnist")
print(dataset_splits)
print(dataset_splits["train"])
```
The returned sample dataset is a Hugging Face dataset and contains train, validation and test splits.

An important feature of `load_dataset` is that it returns the full dataset when loaded on the Hafnia platform. This enables seamless switching between running/validating a training script locally (on the sample dataset) and running full model trainings with Training-aaS (on the full dataset), without changing code or configurations in the training script.

Available datasets with corresponding sample datasets can be found in the data library, including metadata and a description for each dataset.
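Because the returned object behaves like a mapping from split names to datasets, a typical first step is to inspect split sizes and individual examples. The snippet below mimics that access pattern with plain dictionaries standing in for the Hugging Face `DatasetDict`; the records and the `label` field are placeholders, not real MNIST data.

```python
# Stand-in for the splits returned by load_dataset("mnist");
# real splits contain images, these records are placeholders.
dataset_splits = {
    "train": [{"label": 5}, {"label": 0}],
    "validation": [{"label": 4}],
    "test": [{"label": 1}],
}

# Inspect split sizes, as you would with len(dataset_splits["train"])
sizes = {split: len(rows) for split, rows in dataset_splits.items()}
print(sizes)  # {'train': 2, 'validation': 1, 'test': 1}

# Access the first training example, as in dataset_splits["train"][0]
print(dataset_splits["train"][0])  # {'label': 5}
```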
## Getting started: Experiment Tracking with HafniaLogger
The HafniaLogger is an important part of the recipe script and enables you to track, log and
reproduce your experiments.
When integrated into your training script, the HafniaLogger is responsible for collecting:
- Trained Model: The model trained during the experiment
- Model Checkpoints: Intermediate model states saved during training
- Experiment Configurations: Hyperparameters and other settings used in your experiment
- Training/Evaluation Metrics: Performance data such as loss values, accuracy, and custom metrics
### Basic Implementation Example
Here's how to integrate the HafniaLogger into your training script:
```python
from hafnia.experiment import HafniaLogger

batch_size = 128
learning_rate = 0.001

# Initialize Hafnia logger
logger = HafniaLogger()

# Log experiment parameters
logger.log_configuration({"batch_size": batch_size, "learning_rate": learning_rate})

# Store checkpoints in this path
ckpt_dir = logger.path_model_checkpoints()

# Store the trained model in this path
model_dir = logger.path_model()

# Log scalar and metric values during training and validation
logger.log_scalar("train/loss", value=0.1, step=100)
logger.log_metric("train/accuracy", value=0.98, step=100)
logger.log_scalar("validation/loss", value=0.1, step=100)
logger.log_metric("validation/accuracy", value=0.95, step=100)
```
Similar to `load_dataset`, the tracker behaves differently when running locally and in the cloud. Locally, experiment data is stored in a local folder, `.data/experiments/{DATE_TIME}`. In the cloud, the experiment data is available on the Hafnia platform under experiments.
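The local folder naming can be illustrated with a small helper. This is a sketch only: the exact timestamp format `HafniaLogger` uses for `{DATE_TIME}` is an assumption here.

```python
from datetime import datetime
from pathlib import Path

def local_experiment_dir(root: str = ".data/experiments") -> Path:
    # Assumed timestamp format; HafniaLogger may format {DATE_TIME} differently.
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    return Path(root) / stamp

print(local_experiment_dir())  # e.g. .data/experiments/2025-01-01_12-00-00
```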
## Example: Torch Dataloader
Commonly, torch-based training scripts combine a dataset with a dataloader that performs data augmentation and batches the dataset as torch tensors. To support this, we have provided a torch dataloader example script, example_torchvision_dataloader.py. The script demonstrates how to build a dataloader with data augmentation (torchvision.transforms.v2) and a helper function for visualizing images and labels. The dataloader and visualization function support the computer vision tasks and datasets available in the data library.
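Conceptually, a dataloader does two things on top of a dataset: it applies a per-sample transform and groups samples into batches. The stdlib-only sketch below illustrates just that idea; it is not the Hafnia or torchvision implementation, and the toy "pixel" data is invented for the example.

```python
from typing import Callable, Iterator, List, Optional, Sequence

def batched(
    samples: Sequence,
    batch_size: int,
    transform: Optional[Callable] = None,
) -> Iterator[List]:
    """Yield fixed-size batches, applying an optional per-sample transform."""
    batch: List = []
    for sample in samples:
        batch.append(transform(sample) if transform else sample)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final, possibly smaller, batch
        yield batch

# Toy "dataset" of pixel values; the transform normalizes them to [0, 1]
pixels = [0, 51, 102, 153, 204, 255]
batches = list(batched(pixels, batch_size=4, transform=lambda p: p / 255))
print(batches)  # [[0.0, 0.2, 0.4, 0.6], [0.8, 1.0]]
```

A real torch DataLoader adds shuffling, parallel workers and tensor collation on top of this basic pattern.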
## Example: Training-aaS
By combining logging and dataset loading, we can now construct a model training recipe. To demonstrate this, we have provided a recipe project, recipe-classification, that serves as a template for creating and structuring training recipes. The project also contains additional information on how to structure your training recipe, use the `HafniaLogger` and the `load_dataset` function, and different approaches for launching the training recipe on the Hafnia platform.
## Detailed Documentation
For more information, go to our documentation page or the markdown pages below.
- CLI - Detailed guide for the Hafnia command-line interface
- Script2Model Documentation - Detailed guide for script2model
- Release lifecycle - Details about package release lifecycle.
## Development
For development, we use a uv-based virtual Python environment.

Install uv:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Install Python dependencies, including developer (`--dev`) and optional dependencies (`--all-extras`):

```bash
uv sync --all-extras --dev
```

Run tests:

```bash
uv run pytest tests
```
## Project details
### File details: hafnia-0.1.23.tar.gz

- Download URL: hafnia-0.1.23.tar.gz
- Upload date:
- Size: 166.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `8b925b3b7c34c70ae624e57726d9ea858619126a434eb65e63b12b9428c128e5` |
| MD5 | `ada67cb19beb57fb3b359e4e33035419` |
| BLAKE2b-256 | `9ecebb99f0a6bc9de5cee6143f65037294d038676e10fb6b1f2d1ad21e0ea2f1` |
### File details: hafnia-0.1.23-py3-none-any.whl

- Download URL: hafnia-0.1.23-py3-none-any.whl
- Upload date:
- Size: 28.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `535ff5c5f38103bc84276222a8ff32c27e87fd412111f5812ceac2956a821390` |
| MD5 | `b97a47439432ea05658e097c715da639` |
| BLAKE2b-256 | `cf02143a51970a4a93910b8d8b1e9df30bb90c38adac9635f0717f80f1cd2c37` |