
A base library for ML training (supervised) with environment setup and logging.


Machine Learning Training Base (ml-training-base)

ml-training-base is a Python package providing base classes and utilities for machine learning projects. Currently, the package only supports supervised learning.

It includes:

  • A configurable logging setup for both console and file outputs.
  • Base classes for data loaders (BaseSupervisedDataLoader).
  • An environment setup class for deterministic training (TrainingEnvironment), ensuring reproducible runs.
  • A base trainer class (BaseSupervisedTrainer) that outlines a typical training workflow in supervised learning.

By using these abstractions, you can quickly spin up a new ML pipeline with consistent structure and easily extend or override specific components to suit your needs.

Table of Contents

  1. Features
  2. Installation
  3. Quick Start
  4. Package Structure
  5. Configuration File
  6. License

Features

  • Reusable Base Classes: Standard building blocks for data loading, training, callbacks, and environment management.
  • Logging Utilities: Automatically configure logging to both console and file, with customizable logging paths.
  • Deterministic Environment Setup: Control Python, NumPy, and TensorFlow seeds for reproducible ML experiments.
  • Clear Project Structure: Easily extend or override abstract methods in your own data loaders, trainers, or environment logic.
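The deterministic setup described above can be sketched with the standard library alone. The function name below is illustrative, not the package's API, and the framework-specific seeding calls are noted only in comments:

```python
import os
import random

def set_deterministic_seeds(python_seed: str, random_seed: int) -> None:
    """Illustrative sketch of a deterministic environment setup step.

    Framework seeds would be set analogously when those libraries are in
    use, e.g. numpy.random.seed(...) and tf.random.set_seed(...).
    """
    # PYTHONHASHSEED is recorded for subprocesses; to fix in-process hash
    # randomization it must be set before the interpreter starts.
    os.environ["PYTHONHASHSEED"] = python_seed
    random.seed(random_seed)  # fixes the stdlib RNG

# Reseeding with the same values reproduces the same random sequence.
set_deterministic_seeds("44478977", 440651)
first_run = [random.random() for _ in range(3)]
set_deterministic_seeds("44478977", 440651)
second_run = [random.random() for _ in range(3)]
```

The seed values are taken from the example configuration later in this document.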

Installation

You can install this package from PyPI via:

pip install ml-training-base

Quick Start

  1. Install the package and its dependencies.
  2. Create a YAML configuration file (e.g. config.yaml) with your environment, logging, and data settings.
  3. Import the classes in your script or Jupyter notebook:
import logging
from ml_training_base.data.utils.logging_utils import configure_single_level_logger
from ml_training_base.supervised.environments.base_training_environments import KerasTrainingEnvironment
from ml_training_base.supervised.trainers.base_supervised_trainers import BaseSupervisedTrainer
  4. Set up your environment and trainer:
# For example, a custom trainer that inherits from BaseSupervisedTrainer
class MyCustomTrainer(BaseSupervisedTrainer):
    def _setup_model(self):
        # Initialize your model here, e.g., a TensorFlow/Keras or PyTorch model
        pass

    def _build_model(self):
        # Compile or build your model
        pass

    def _setup_callbacks(self):
        # Setup your training callbacks, checkpointing, etc.
        pass

    def _train(self):
        # Implement your training loop or model.fit(...) call
        pass

    def _save_model(self):
        # Save trained model to disk
        pass

    def _evaluate(self):
        # Evaluate your model on the test set
        pass

# Usage:
trainer = MyCustomTrainer(
    config_path="path/to/config.yaml",
    training_env=KerasTrainingEnvironment(logger=logging.getLogger(__name__))
)
trainer.run()
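The configure_single_level_logger import above is described as wiring both console and file output, with all log levels written to one file. Its real signature is not shown on this page, so the following is only a minimal stdlib equivalent with an assumed name and parameters:

```python
import logging
import os
import tempfile

def configure_single_level_logger(name: str, log_path: str) -> logging.Logger:
    # Hypothetical stand-in: one logger with a console handler and a
    # single file handler that receives every record at or above INFO.
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    for handler in (logging.StreamHandler(), logging.FileHandler(log_path)):
        handler.setFormatter(fmt)
        logger.addHandler(handler)
    return logger

# Demo: log to a temporary file and read the record back.
log_path = os.path.join(tempfile.mkdtemp(), "training.log")
log = configure_single_level_logger("quickstart_demo", log_path)
log.info("training started")
with open(log_path) as fh:
    log_contents = fh.read()
```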

Package Structure

ml-training-base/
├── pyproject.toml
├── src/
│   └── ml_training_base/
│       ├── __init__.py
│       ├── data/
│       │   └── preprocessing/
│       │       ├── __init__.py
│       │       ├── configure_utils.py
│       │       ├── files_utils.py
│       │       └── logging_utils.py
│       ├── supervised/
│       │    ├── __init__.py
│       │    ├── data/
│       │    │   ├── __init__.py
│       │    │   └── base_supervised_data_loader.py
│       │    ├── environments/
│       │    │   ├── __init__.py
│       │    │   └── base_training_environments.py
│       │    ├── trainers/
│       │    │   ├── __init__.py
│       │    │   └── base_supervised_trainers.py
│       │    └── utils/
│       │        └── data/
│       │            ├── __init__.py
│       │            └── base_supervised_data_loader.py
│       └── utils/
│           ├── __init__.py
│           ├── configure_utils.py
│           ├── files_utils.py
│           └── logging_utils.py
├── tests/
│   ├── data/
│   │   └── preprocessing/
│   │       └── test_base_data_preprocessor.py
│   ├── supervised/
│   │   ├── data/
│   │   │   └── test_base_supervised_data_loader.py
│   │   ├── environments/
│   │   │   └── test_base_training_environments.py
│   │   └── trainers/
│   │       └── test_base_supervised_trainers.py
│   └── utils
│       ├── test_configure_utils.py
│       ├── test_files_utils.py
│       └── test_logging_utils.py
├── README.md
└── LICENSE

Key Modules

  • data/utils/logging_utils.py:
    • Contains logging utilities that set up standardized console and file loggers for use throughout the package.
    • configure_single_level_logger(): configures a file logger that writes all log levels to a single file.
    • configure_multi_level_logger(): configures a file logger that writes each log level to a separate file.
  • supervised/environments/base_training_environments.py:
    • Defines the BaseEnvironment abstract class for handling environment setup.
    • Provides concrete, framework-specific implementations like KerasTrainingEnvironment and PyTorchTrainingEnvironment that manage deterministic setup (setting seeds, configuring hardware options, etc.).
  • supervised/trainers/base_supervised_trainers.py:
    • Contains the core training framework hierarchy.
    • BaseSupervisedTrainer: The framework-agnostic abstract class that defines the training pipeline (run(), _setup_model(), _train(), etc.).
    • BaseKerasSupervisedTrainer & BasePyTorchSupervisedTrainer: Framework-specific abstract classes that implement common boilerplate for Keras (model.fit()) and PyTorch (manual training loop).
  • supervised/utils/data/base_supervised_data_loader.py:
    • Contains the BaseSupervisedDataLoader abstract class. This defines the contract for creating data preparation pipelines (setup_datasets(), get_train_dataset(), etc.) that are used by the trainers.
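The data-loader contract described above can be satisfied by a small in-memory implementation. The sketch below is illustrative: only the two methods named in this document are shown, and the real abstract base likely declares more.

```python
from abc import ABC, abstractmethod

class BaseSupervisedDataLoader(ABC):
    """Minimal stand-in for the package's abstract base (methods assumed)."""

    @abstractmethod
    def setup_datasets(self) -> None: ...

    @abstractmethod
    def get_train_dataset(self): ...

class InMemoryDataLoader(BaseSupervisedDataLoader):
    """Hypothetical loader holding (x, y) pairs with a train/test split."""

    def __init__(self, x, y, test_split: float = 0.1):
        self._x, self._y = list(x), list(y)
        self._test_split = test_split
        self._train_pairs = None

    def setup_datasets(self) -> None:
        # Reserve the last `test_split` fraction of samples for testing.
        n_test = int(len(self._x) * self._test_split)
        cutoff = len(self._x) - n_test
        self._train_pairs = list(zip(self._x[:cutoff], self._y[:cutoff]))

    def get_train_dataset(self):
        return self._train_pairs

loader = InMemoryDataLoader(x=range(10), y=range(10, 20), test_split=0.2)
loader.setup_datasets()
train = loader.get_train_dataset()
```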

Configuration File

You can define your runtime settings (e.g., logger paths, environment determinism seeds, model hyperparameters) in a YAML file.

For example:

# Data Configuration and Hyperparameters
data:
  x_data_path: 'data/processed/x_data'
  y_data_path: 'data/processed/y_data'
  logger_path: 'var/log/training.log'
  batch_size: 32
  test_split: 0.1
  validation_split: 0.1

# Model Configuration and Hyperparameters
model:
  attention_dim: 512
  encoder_embedding_dim: 512
  decoder_embedding_dim: 512
  units: 512
  encoder_num_layers: 2
  decoder_num_layers: 4

# Training Configuration and Hyperparameters
training:
  epochs: 100
  early_stop_patience: 5
  weight_decay: null
  dropout_rate: 0.2
  learning_rate: 1e-4

# Environment Configuration
env:
  determinism:
    python_seed: "44478977"
    random_seed: 440651
    numpy_seed: 110789
    tf_seed: 61592
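A trainer would typically parse this file with a YAML reader (e.g. PyYAML's yaml.safe_load) and then read nested keys. The dict below mirrors part of the example config, and the dotted-path helper is purely illustrative, not part of the package:

```python
# Mirrors part of the YAML example above as a plain dict; a real trainer
# would obtain the same structure from yaml.safe_load on the config file.
config = {
    "data": {"batch_size": 32, "test_split": 0.1, "validation_split": 0.1},
    "training": {"epochs": 100, "learning_rate": 1e-4},
    "env": {"determinism": {"numpy_seed": 110789, "tf_seed": 61592}},
}

def get_setting(cfg: dict, dotted_key: str, default=None):
    """Illustrative helper: fetch a nested value like 'training.epochs'."""
    node = cfg
    for part in dotted_key.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node
```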

License

This project is licensed under the terms of the MIT License. Feel free to copy, modify, and distribute per its terms.
