Neural network training pipeline based on PyTorch. Designed to standardize the training process and speed up development.
Project description
Neural Pipeline
Neural network training pipeline based on PyTorch. Designed to standardize the training process and speed up development.
- The core is about 2K lines of tested code that you don't need to write again
- Flexible and customizable training process
- Checkpoint management and training process resuming, independent of source and target device (a resuming sketch follows the example below)
- Metrics processing and visualization with built-in monitors (TensorBoard, Matplotlib) or custom ones
- Training best practices, e.g. learning rate decay and hard negative mining (see the sketch after this list)
- Metrics logging and comparison (DVC compatible)
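The learning-rate-decay item above refers to a standard PyTorch technique. The sketch below illustrates the idea in plain PyTorch only; it is not neural-pipeline's own API, and the toy model, optimizer, and schedule values are assumptions made for the illustration.

import torch
from torch import nn

# Toy model and optimizer, for illustration only.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
# Halve the learning rate every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(50):
    # ... run one training epoch with `optimizer` here ...
    scheduler.step()  # apply the decay schedule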
MNIST training example:
This code runs MNIST image classification with TensorBoard monitoring. The code is based on the PyTorch example. See the full example in the repository.
from neural_pipeline.builtin.monitors.tensorboard import TensorboardMonitor
from neural_pipeline import DataProducer, AbstractDataset, TrainConfig, TrainStage, \
    ValidationStage, Trainer, FileStructManager

import torch
from torch import nn
from torchvision import datasets, transforms


class Net(nn.Module):
    # Network implementation
    ...


class MNISTDataset(AbstractDataset):
    transforms = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])

    def __init__(self, data_dir: str, is_train: bool):
        self.dataset = datasets.MNIST(data_dir, train=is_train, download=True)

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, item):
        data, target = self.dataset[item]
        return {'data': self.transforms(data), 'target': target}


fsm = FileStructManager(base_dir='data', is_continue=False)
model = Net()

train_dataset = DataProducer([MNISTDataset('data/dataset', True)], batch_size=4, num_workers=2)
validation_dataset = DataProducer([MNISTDataset('data/dataset', False)], batch_size=4, num_workers=2)

train_config = TrainConfig([TrainStage(train_dataset), ValidationStage(validation_dataset)], torch.nn.NLLLoss(),
                           torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.5))

trainer = Trainer(model, train_config, fsm, torch.device('cuda:0')).set_epoch_num(50)
trainer.monitor_hub.add_monitor(TensorboardMonitor(fsm, is_continue=False))
trainer.train()
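The feature list above mentions checkpoint management and training resuming. As a minimal sketch, and assuming that resuming is driven by the same is_continue flags used in the example, picking up the saved state in the 'data' directory might look like the lines below; the exact trainer-side resume call is not shown here, so check the documentation before relying on this.

# Resuming sketch (assumption): is_continue=True makes the file structure manager
# and the TensorBoard monitor reuse the existing 'data' directory instead of
# starting a new run. Depending on the library version, an explicit resume call
# on the Trainer may also be required -- see the documentation.
fsm = FileStructManager(base_dir='data', is_continue=True)
trainer = Trainer(model, train_config, fsm, torch.device('cuda:0')).set_epoch_num(50)
trainer.monitor_hub.add_monitor(TensorboardMonitor(fsm, is_continue=True))
trainer.train()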
Installation:
pip install neural-pipeline
To use the built-in monitors, also install:
pip install tensorboardX matplotlib
To install the latest version before it is published on PyPI:
pip install -U git+https://github.com/toodef/neural-pipeline
Getting started:
Documentation
See the full documentation online.
Data flow scheme:
See the examples in the repository.
Download files
Source Distributions
No source distribution files are available for this release.
Built Distribution
Hashes for neural_pipeline-0.1.0-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 74a32a7fe0d33efb1ae36fc7a223a93139d9b1bd68ccb998ae3908357a02d8ac
MD5 | 3da235fbbbdbcf079f4a439d0114bbbb
BLAKE2b-256 | f6ae440a1d20745d5de34c3a79605a63ad00669b93ffa0125f01535c9c72134c