A model fitting library for PyTorch
About
Torchbearer is a PyTorch model fitting library designed for use by researchers (or anyone really) working in deep learning or differentiable programming. Specifically, if you occasionally want to perform advanced custom operations but generally don't want to write hundreds of lines of untested code, then this is the library for you.
The docs include visualisations of a linear SVM (a differentiable program, implemented in under 100 lines of code) and a GAN, both built with torchbearer and PyTorch.
Key Features
- Model fitting API using calls to run(...) on Trial instances which are saveable, resumable and replayable
- Sophisticated metric API in which computed values (e.g. accuracy) flow to multiple aggregators that can produce running values (e.g. mean) and per-epoch values (e.g. std, mean, area under curve)
- Default accuracy metric which infers the accuracy to use from the criterion
- Simple callback API with a persistent model state that supports adding to the loss or accessing the metric values
- A host of callbacks included from the start that enable: tensorboard and visdom logging (for metrics, images and data), model checkpointing, weight decay, learning rate schedulers, gradient clipping and more
- Decorator APIs for metrics and callbacks that allow for simple construction (see the sketch after this list)
- An example library with a set of demos showing how complex deep learning models (such as GANs and VAEs) and differentiable programs (like SVMs) can be implemented easily with torchbearer
- Fully tested; as researchers we want to trust that our metrics and callbacks work properly, so everything is thoroughly tested for peace of mind
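As a taste of the decorator API mentioned above, here is a minimal sketch of two custom callbacks. The hooks shown (add_to_loss, on_end_epoch) and the state keys follow the torchbearer callbacks documentation, but the L1 penalty itself is a made-up example, so treat this as illustrative rather than canonical:

import torchbearer
from torchbearer import callbacks

# Illustrative only: add a small L1 penalty on the model weights to the loss.
# Functions decorated with add_to_loss have their return value added to the
# criterion's loss through the persistent state dict.
@callbacks.add_to_loss
def l1_penalty(state):
    model = state[torchbearer.MODEL]
    return 1e-4 * sum(p.abs().sum() for p in model.parameters())

# Illustrative only: read the aggregated metric values at the end of each epoch.
@callbacks.on_end_epoch
def print_metrics(state):
    print(state[torchbearer.METRICS])

# Both would then be passed to a Trial via the callbacks argument, e.g.
# Trial(model, optimizer, criterion, callbacks=[l1_penalty, print_metrics])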
Installation
The easiest way to install torchbearer is with pip:
pip install torchbearer
Alternatively, build from source with:
pip install git+https://github.com/ecs-vlc/torchbearer
Citing Torchbearer
If you find that torchbearer is useful in your research, then please consider citing our preprint, Torchbearer: A Model Fitting Library for PyTorch, with the following BibTeX entry:
@article{torchbearer2018,
  author  = {Ethan Harris and Matthew Painter and Jonathon Hare},
  title   = {Torchbearer: A Model Fitting Library for PyTorch},
  journal = {arXiv preprint arXiv:1809.03363},
  year    = {2018}
}
Quickstart
- Define your data and model as usual (here we use a simple CNN on CIFAR-10). Note that we use torchbearer's DatasetValidationSplitter here to create a validation set (10% of the data). This is essential to avoid over-fitting to your test data:
import torch
import torchvision
from torchvision import transforms

from torchbearer.cv_utils import DatasetValidationSplitter

BATCH_SIZE = 128

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

# Load the CIFAR-10 training set and split off 10% as a validation set
dataset = torchvision.datasets.CIFAR10(root='./data/cifar', train=True, download=True,
                                       transform=transforms.Compose([transforms.ToTensor(), normalize]))
splitter = DatasetValidationSplitter(len(dataset), 0.1)
trainset = splitter.get_train_dataset(dataset)
valset = splitter.get_val_dataset(dataset)

traingen = torch.utils.data.DataLoader(trainset, pin_memory=True, batch_size=BATCH_SIZE, shuffle=True, num_workers=10)
valgen = torch.utils.data.DataLoader(valset, pin_memory=True, batch_size=BATCH_SIZE, shuffle=True, num_workers=10)

testset = torchvision.datasets.CIFAR10(root='./data/cifar', train=False, download=True,
                                       transform=transforms.Compose([transforms.ToTensor(), normalize]))
testgen = torch.utils.data.DataLoader(testset, pin_memory=True, batch_size=BATCH_SIZE, shuffle=False, num_workers=10)
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(3, 16, stride=2, kernel_size=3),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.Conv2d(16, 32, stride=2, kernel_size=3),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.Conv2d(32, 64, stride=2, kernel_size=3),
            nn.BatchNorm2d(64),
            nn.ReLU()
        )
        self.classifier = nn.Linear(576, 10)

    def forward(self, x):
        x = self.convs(x)
        x = x.view(-1, 576)  # flatten: 64 channels * 3 * 3 spatial positions
        return self.classifier(x)

model = SimpleModel()
- Now that we have a model, we can train it simply by wrapping it in a torchbearer Trial instance:
import torch.optim as optim

import torchbearer
from torchbearer import Trial

optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.001)
loss = nn.CrossEntropyLoss()

trial = Trial(model, optimizer, criterion=loss, metrics=['acc', 'loss']).to('cuda')
trial = trial.with_generators(train_generator=traingen, val_generator=valgen, test_generator=testgen)
trial.run(epochs=10)
trial.evaluate(data_key=torchbearer.TEST_DATA)
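Because Trial instances are saveable and resumable, a run like the one above can be checkpointed and picked up later. Here is a minimal sketch assuming the trial object from this quickstart; state_dict and load_state_dict mirror PyTorch's convention, but check the docs for the exact resume semantics in your version:

import torch

# Save everything needed to resume: model and optimizer state plus the trial's history
torch.save(trial.state_dict(), 'trial.pt')

# Later, rebuild the trial exactly as above, then restore it and continue training
trial.load_state_dict(torch.load('trial.pt'))
trial.run(epochs=20)  # resumes from the stored history rather than from epoch 0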
- Running this code produces tqdm output showing running accuracies and losses during the training phase:
0/10(t): 100%|██████████| 352/352 [00:01<00:00, 176.55it/s, running_acc=0.526, running_loss=1.31, acc=0.453, acc_std=0.498, loss=1.53, loss_std=0.25]
0/10(v): 100%|██████████| 40/40 [00:00<00:00, 201.14it/s, val_acc=0.528, val_acc_std=0.499, val_loss=1.32, val_loss_std=0.0874]
.
.
.
9/10(t): 100%|██████████| 352/352 [00:02<00:00, 171.22it/s, running_acc=0.738, running_loss=0.737, acc=0.749, acc_std=0.434, loss=0.723, loss_std=0.0885]
9/10(v): 100%|██████████| 40/40 [00:00<00:00, 188.51it/s, val_acc=0.669, val_acc_std=0.471, val_loss=0.97, val_loss_std=0.173]
0/1(e): 100%|██████████| 79/79 [00:00<00:00, 241.00it/s, test_acc=0.675, test_acc_std=0.468, test_loss=0.952, test_loss_std=0.109]
Documentation
Our documentation containing the API reference, examples and notes can be found at torchbearer.readthedocs.io
Other Libraries
Torchbearer isn't the only library for training PyTorch models. Here are a few others that might better suit your needs (this is by no means a complete list; see the awesome pytorch list or the incredible pytorch for more):
- skorch, a model wrapper that enables use with scikit-learn; cross-validation and the like can be very useful
- PyToune, a simple Keras-style API
- ignite, advanced model training from the makers of PyTorch; can need a lot of code for advanced functions (e.g. TensorBoard logging)
- TorchNetTwo (TNT), well established but can be complex to use; somewhat replaced by ignite
- Inferno, training utilities and convenience classes for PyTorch