
Trainer Tools

A lightweight, hook-based training loop for PyTorch. trainer-tools abstracts away the boilerplate of training loops while remaining fully customizable through a flexible hook system.

Features

  • Hook System: Customize every step of the training lifecycle (before/after batch, step, epoch, fit).
  • Built-in Integrations: Ready-made hooks for experiment tracking (wandb or trackio), progress bars, and checkpointing.
  • Optimization: Automatic Mixed Precision (AMP), gradient accumulation, and gradient clipping (see the sketch after the Quick Start below).
  • Metrics: Metric tracking with logging to JSONL or external trackers.
  • Memory Profiling: Built-in tools to debug CUDA memory leaks.

Installation

pip install trainer-tools

Quick Start

Here is a minimal example of training a simple model:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from trainer_tools.trainer import Trainer
from trainer_tools.hooks import MetricsHook, Accuracy, Loss, ProgressBarHook

# 1. Prepare Data
x = torch.randn(100, 10)
y = torch.randint(0, 2, (100,))
ds = TensorDataset(x, y)
dl = DataLoader(ds, batch_size=32)

# 2. Define Model
model = nn.Sequential(nn.Linear(10, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# 3. Setup Hooks
metrics = MetricsHook(metrics=[Accuracy(), Loss()])
pbar = ProgressBarHook()

# 4. Train
trainer = Trainer(
    model=model,
    train_dl=dl,
    valid_dl=dl,
    optim=optimizer,
    loss_func=nn.CrossEntropyLoss(),
    epochs=5,
    hooks=[metrics, pbar],
    device="cuda" if torch.cuda.is_available() else "cpu"
)

trainer.fit()
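
The feature list above also mentions Automatic Mixed Precision, gradient accumulation, and gradient clipping. Their exact argument names are not spelled out in this description, so the snippet below is only a sketch that reuses the Quick Start objects; use_amp, grad_accum_steps, and grad_clip_norm are placeholder names, not confirmed parameters.

# Hedged sketch only: the keyword names marked "placeholder" below are
# assumptions for illustration and may differ in the actual API.
trainer = Trainer(
    model=model,
    train_dl=dl,
    valid_dl=dl,
    optim=optimizer,
    loss_func=nn.CrossEntropyLoss(),
    epochs=5,
    hooks=[metrics, pbar],
    use_amp=True,           # placeholder: enable Automatic Mixed Precision
    grad_accum_steps=4,     # placeholder: accumulate gradients over 4 batches
    grad_clip_norm=1.0,     # placeholder: clip gradient norm before each step
)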

The Hook System

Hooks in trainer-tools are built on BaseHook. You can add custom behavior by subclassing it and overriding lifecycle methods such as after_step:

from trainer_tools.hooks import BaseHook

class MyCustomHook(BaseHook):
    def after_step(self, trainer):
        if trainer.step % 100 == 0:
            print(f"Current Loss: {trainer.loss}")
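
A custom hook is attached the same way as the built-in ones: include an instance in the hooks list. Reusing the objects from the Quick Start:

# Attach the custom hook alongside the built-in hooks from the Quick Start.
trainer = Trainer(
    model=model,
    train_dl=dl,
    valid_dl=dl,
    optim=optimizer,
    loss_func=nn.CrossEntropyLoss(),
    epochs=5,
    hooks=[metrics, pbar, MyCustomHook()],
    device="cuda" if torch.cuda.is_available() else "cpu"
)
trainer.fit()

The other lifecycle hooks listed in the features (before/after batch, epoch, and fit) can be overridden the same way, presumably via correspondingly named methods.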
