
If you are eager to dive into training scripts that use MLpug, check out the examples directory!

MLpug

MLpug is a machine learning library agnostic framework for model training.

A lot of the functionality you need to train your machine learning model is independent of the machine learning library you're using, e.g. PyTorch and Tensorflow. For instance,

  • checkpoint management,
  • evaluation of validation set loss and other custom metrics,
  • progress logging,
  • progress visualization using Tensorboard,
  • the use of gradient accumulation to train with large batch sizes using limited GPU memory, etc.

You need such functionality no matter what machine learning framework you are using.

MLpug provides a single framework with a unified API for all such training functionality, independent of the machine learning library you are using. This also implies that when you switch libraries you can reuse your training code with no, or only minimal, changes.

Supported backends

Currently, MLpug supports the following deep learning/machine learning library 'backends':

  • PyTorch
  • PyTorch/XLA (Training with PyTorch on TPUs)
  • Tensorflow (in development, some features not available yet)

MLpug focus

Although MLpug should be able to deal with any training job, its functionality is mostly focused on training large models on large datasets, using limited hardware (GPU or TPU) resources and memory.

Almost at version 0.1!

MLpug is still in development. If you are having trouble using MLpug for your use case, or if you have found a bug, please file an issue.

Contents

Installing MLpug

Hello World (PT | XLA | TF)


The following sections are documentation to-dos, but they provide insight into MLpug's features:
The logs object

Callbacks and the training life cycle

Progress Logging

Model components vs Training model

Distributed training

Checkpoint management
      Using the CheckpointManager
      Using training checkpoints
      Using model checkpoints
      Checkpointing on error or interrupt

MLpug metric evaluators
      Auxiliary batch training results
      Calculating custom metrics
      Conditional computation of metrics

Batch chunking, dealing with GPU memory limits
      Gradient Accumulation
      Chunked Metric Computation

Using Tensorboard
      Tensorboard made easy with AutoTensorboard
      More fine grained control

Learning Rate Scheduling

Multi GPU training

Mixed Precision Training

CUDA Memory tools

Using multiple optimizers

Installing MLpug

Please ensure that you are using Python 3.7+.

Install as follows:

pip install mlpug

Usage with PyTorch

When you want to use MLpug with PyTorch, you will also need to install PyTorch:

pip install torch torchvision

Usage with Tensorflow

When you want to use MLpug with Tensorflow, you will also need to install Tensorflow:

pip install tensorflow

Hello World!

This is the Hello World of training with MLpug. You will see that the usage of MLpug with PyTorch, PyTorch/XLA and Tensorflow is very similar.

For details, please see the full 'Hello World' examples for PyTorch, PyTorch/XLA and Tensorflow below and in the examples directory.

You can download and run these examples (for XLA you need to use a TPU on Google Cloud, or use Google Colab).

When reading through the explanation below, you might still have a lot of questions about the why and how of training with MLpug; I will expand the MLpug documentation soon, so you will get better insight.

'Hello World' with PyTorch

To use MLpug with PyTorch:

import mlpug.pytorch as mlp

Before we can start training we need an iterable dataset that can provide our training batches.

training_dataset = torch.utils.data.DataLoader(training_data,
                                               batch_size=batch_size,
                                               shuffle=False,
                                               num_workers=3)
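
In the snippet above, training_data and batch_size are assumed to be defined earlier in the script. Purely as a hypothetical sketch, an MNIST-style dataset (28x28 images, 10 classes, matching the classifier below) could be set up with torchvision as follows; the actual hello_world.py example may load its data differently:

import torch
import torchvision
from torchvision import transforms

# Hypothetical setup; the actual example script may differ
batch_size = 64

# MNIST-style dataset: 28x28 grayscale images, 10 classes
transform = transforms.ToTensor()
training_data = torchvision.datasets.MNIST(root='./data',
                                           train=True,
                                           download=True,
                                           transform=transform)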

... and a model we want to train

classifier = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(784, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10))

MLpug needs a way to evaluate the loss of the model. One way to do that is to define a TrainModel that outputs the loss

class TrainModel(torch.nn.Module):
    def __init__(self, classifier):
        super(TrainModel, self).__init__()

        self.classifier = classifier
        self.loss_func = torch.nn.CrossEntropyLoss()

    def forward(self, batch_data, evaluate_settings, inference_mode=None):
        images, true_labels = batch_data

        logits = self.classifier(images)
        return self.loss_func(logits, true_labels)

train_model = TrainModel(classifier)
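
As a quick sanity check (hypothetical, not part of the original example), you can call the train model on a single batch to verify that it returns a scalar loss:

# Hypothetical sanity check: the train model should return a scalar loss tensor
images, true_labels = next(iter(training_dataset))
loss = train_model((images, true_labels), evaluate_settings=None)
print(f"Initial batch loss: {loss.item():.3f}")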

To train the model we will also need an optimizer

optimizer = torch.optim.Adam(classifier.parameters(), eps=1e-7)

To start training with MLpug, we need to create a Trainer, which will be used by a TrainingManager.

trainer = mlp.trainers.DefaultTrainer(optimizers=optimizer, model_components=classifier)

MLpug uses a callback system that allows you to customize and extend the training functionality. The callback instances you provide to the TrainingManager are called, through hooks, at different stages of the training process.

# At minimum you want to log the loss in the training progress
# By default the batch loss and the moving average of the loss are calculated and logged
loss_evaluator = mlp.evaluation.MetricEvaluator(trainer=trainer)
callbacks = [
    mlp.callbacks.TrainingMetricsLogger(metric_evaluator=loss_evaluator),
    # Calculate validation loss only once per epoch over the whole dataset
    mlp.callbacks.TestMetricsLogger(validation_dataset,
                                    'validation',
                                    metric_evaluator=loss_evaluator,
                                    batch_level=False),
    mlp.callbacks.LogProgress(log_period=progress_log_period, set_names=['training', 'validation']),
]
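
The callbacks above refer to validation_dataset and progress_log_period, which are assumed to be defined elsewhere in the script. Purely as an illustration (hypothetical values, mirroring the training data setup sketched earlier):

# Hypothetical placeholders; the actual example script defines its own values
validation_data = torchvision.datasets.MNIST(root='./data',
                                             train=False,
                                             download=True,
                                             transform=transform)
validation_dataset = torch.utils.data.DataLoader(validation_data,
                                                 batch_size=batch_size,
                                                 shuffle=False,
                                                 num_workers=3)

progress_log_period = 100  # log progress every 100 batches
num_epochs = 10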

The TrainingMetricsLogger and the TestMetricsLogger callback instances log training and validation set loss values in a logs object that is passed through all callbacks during training. The LogProgress callback instance logs the metric values stored in the received logs object.

We can now instantiate the TrainingManager and pass it the trainer.

manager = mlp.trainers.TrainingManager(trainer,
                                       training_dataset,
                                       num_epochs=num_epochs,
                                       callbacks=callbacks)

Before we can start training we still have to provide the train_model to the trainer.

trainer.set_training_model(train_model)

The final step is to actually start training:

manager.start_training()

Running pytorch/hello_world.py finishes like this:

###############################################################################
Epoch 9/9	READY - Duration 0:00:08
Moving average:
training       : loss          0.238.

Computed over dataset:
validation     : loss          0.346.



INFO    : TrainingManager::_train : Training completed. All good! ❤️

Using the classifier ...
real label = 9, predicted label = 9
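
The "Using the classifier" lines at the end come from running the trained classifier on a sample; a hypothetical sketch of such an inference step (the actual example may differ):

# Hypothetical inference step after training
classifier.eval()
with torch.no_grad():
    image, real_label = validation_data[0]
    logits = classifier(image.unsqueeze(0))  # add a batch dimension
    predicted_label = int(torch.argmax(logits, dim=1))

print(f"real label = {real_label}, predicted label = {predicted_label}")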

'Hello World' with PyTorch/XLA

The Hello World example with PyTorch/XLA is largely the same as with PyTorch. There are only two small differences.

To use MLpug with PyTorch/XLA, load the correct backend:

import mlpug.pytorch.xla as mlp

Load your model on a TPU core:

import torch_xla.core.xla_model as xm

...

device = xm.xla_device()

train_model = TrainModel(classifier, device)
classifier.to(device)
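
Note that in the XLA variant, TrainModel also receives the device, presumably so the batch data can be moved to the TPU core inside forward. A hypothetical sketch of what that could look like (the actual example may differ):

class TrainModel(torch.nn.Module):
    def __init__(self, classifier, device):
        super(TrainModel, self).__init__()

        self.classifier = classifier
        self.device = device
        self.loss_func = torch.nn.CrossEntropyLoss()

    def forward(self, batch_data, evaluate_settings, inference_mode=None):
        images, true_labels = batch_data

        # Hypothetical: move the batch to the TPU core before the forward pass
        images = images.to(self.device)
        true_labels = true_labels.to(self.device)

        logits = self.classifier(images)
        return self.loss_func(logits, true_labels)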

'Hello World' with Tensorflow

Below we will focus only on the minor differences between using MLpug with PyTorch and Tensorflow.

To use MLpug with Tensorflow

import mlpug.tensorflow as mlp

The only real difference is that, for Tensorflow, you can specify whether or not the trainer should run in eager mode. When not running eagerly, you need to specify the input batch_data_signature. Both options are shown below.

# Eager mode:
trainer = mlp.trainers.DefaultTrainer(optimizers=optimizer,
                                      model_components=classifier,
                                      eager_mode=True)

# ... or non-eager (graph) mode, which requires the input batch_data_signature:
trainer = mlp.trainers.DefaultTrainer(optimizers=optimizer,
                                      model_components=classifier,
                                      batch_data_signature=(tf.TensorSpec(shape=(None, 28, 28), dtype=tf.float64),
                                                            tf.TensorSpec(shape=(None,), dtype=tf.uint8),))
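
The batch_data_signature above implies that the dataset yields batches of (28x28 float64 images, uint8 labels). A hypothetical way to construct such a dataset with tf.data (the actual example may differ):

import tensorflow as tf

# Hypothetical dataset construction matching the batch_data_signature above
batch_size = 64
(train_images, train_labels), _ = tf.keras.datasets.mnist.load_data()
train_images = train_images / 255.0  # becomes float64, values in [0, 1]

training_dataset = (tf.data.Dataset
                    .from_tensor_slices((train_images, train_labels))
                    .shuffle(len(train_images))
                    .batch(batch_size))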

When you run tensorflow/hello_world.py and tensorflow/hello_world_not_eager.py, you will see that training is much faster when not running in eager mode.

Running tensorflow/hello_world.py finishes like this:

###############################################################################
Epoch 9/9	READY - Duration 0:00:15
Moving average:
training       : loss          0.229.

Computed over dataset:
validation     : loss          0.370.



INFO    : TrainingManager::_train : Training completed. All good! ❤️

Using the classifier ...
real label = 9, predicted label = 9

Running tensorflow/hello_world_not_eager.py finishes like this:

###############################################################################
Epoch 9/9	READY - Duration 0:00:06
Moving average:
training       : loss          0.229.

Computed over dataset:
validation     : loss          0.370.



INFO    : TrainingManager::_train : Training completed. All good! ❤️

Using the classifier ...
real label = 9, predicted label = 9

Note the difference in epoch duration!
