Neural networks library for image classification tasks.
Table of contents
- Quick start
- Warnings
- Encoders
- Decoders
- Pretrained
- Pretrained old configs fixes
- Datasets
- Losses
- Metrics
- Optimizers
- Schedulers
- Examples
Quick start
1. Direct installation.
1.1 Install torch with CUDA support.
pip install -U torch --extra-index-url https://download.pytorch.org/whl/cu113
1.2 Install opennn_pytorch.
pip install -U opennn_pytorch
2. Dockerfile.
cd docker/
docker build -t opennn:latest .
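Either way, you can quickly verify that the package imports and that CUDA is visible (a minimal check; `torch.cuda.is_available()` returns True only with an NVIDIA GPU and a compatible driver):

```python
import torch
import opennn_pytorch  # fails here if the install is broken

# True only when an NVIDIA GPU and a compatible driver are present.
print(torch.cuda.is_available())
```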
Warnings
- CUDA is supported only on NVIDIA graphics cards.
- The AlexNet decoder doesn't support the BCE loss family.
- Some dataset/encoder/decoder/loss combinations give poor results; try other combinations.
- The custom cross-entropy loss supports only predictions of shape (n, c) with labels of shape (n), as illustrated in the sketch after this list.
- Not all options in transform.yaml and config.yaml are required.
- The mean and std values from the Datasets section must be used in transform.yaml, for example mean=[0.2859], std=[0.3530] -> normalize: [[0.2859], [0.3530]].
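For reference on the cross-entropy warning above, a minimal sketch of the expected shapes using plain PyTorch's `cross_entropy`; the library's custom implementation is assumed to follow the same (n, c) / (n) convention:

```python
import torch
import torch.nn.functional as F

n, c = 16, 10                       # batch size, number of classes
preds = torch.randn(n, c)           # raw logits, shape (n, c)
labels = torch.randint(0, c, (n,))  # integer class ids, shape (n,)
print(F.cross_entropy(preds, labels).item())
```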
Encoders
- LeNet [paper] [code]
- AlexNet [paper] [code]
- GoogleNet [paper] [code]
- ResNet18 [paper] [code]
- ResNet34 [paper] [code]
- ResNet50 [paper] [code]
- ResNet101 [paper] [code]
- ResNet152 [paper] [code]
- MobileNet [paper] [code]
- VGG-11 [paper] [code]
- VGG-16 [paper] [code]
- VGG-19 [paper] [code]
Decoders
- LeNet
- Linear
- AlexNet
Pretrained
LeNet
Encoder | Decoder | Dataset | Weights | Configs | Logs |
---|---|---|---|---|---|
LeNet | LeNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
LeNet | LeNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
LeNet | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
LeNet | Linear | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
LeNet | AlexNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
LeNet | AlexNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet
Encoder | Decoder | Dataset | Weights | Configs | Logs |
---|---|---|---|---|---|
AlexNet | LeNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet | LeNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet | Linear | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet | AlexNet | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
AlexNet | AlexNet | FASHION-MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
GoogleNet
ResNet
Encoder | Decoder | Dataset | Weights | Configs | Logs |
---|---|---|---|---|---|
ResNet18 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
ResNet34 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
ResNet50 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
ResNet101 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
ResNet152 | Linear | MNIST | BEST, PLAN | CONFIG, TRANSFORM | TRAINVAL |
MobileNet
VGG
Pretrained old configs fixes
- The config must include a test_part value; the sum (train_part + valid_part + test_part) does not have to equal 1.0.
Datasets
- MNIST [files] [code] [classes=10] [mean=[0.1307], std=[0.3081]]
- FASHION-MNIST [files] [code] [classes=10] [mean=[0.2859], std=[0.3530]]
- CIFAR-10 [files] [code] [classes=10] [mean=[0.491, 0.482, 0.446], std=[0.247, 0.243, 0.261]]
- CIFAR-100 [files] [code] [classes=100] [mean=[0.5071, 0.4867, 0.4408], std=[0.2675, 0.2565, 0.2761]]
- GTSRB [files] [code] [classes=43] [mean=unknown, std=unknown; see the sketch after this list]
- CUSTOM [docs] [code] [example] [classes=nc] [mean=unknown, std=unknown; see the sketch after this list]
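Where mean and std are listed as unknown, they can be estimated from the training images. A sketch using torchvision's own GTSRB dataset (torchvision API, not this library's; the 32x32 resize is an arbitrary choice to make image sizes uniform):

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize((32, 32)), transforms.ToTensor()])
data = datasets.GTSRB(root='./data', split='train', download=True, transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=256)

# Accumulate per-channel E[x] and E[x^2] over all images, then derive std.
count, mean, sq_mean = 0, 0.0, 0.0
for images, _ in loader:
    n = images.size(0)
    flat = images.view(n, images.size(1), -1)
    mean += flat.mean(dim=(0, 2)) * n
    sq_mean += (flat ** 2).mean(dim=(0, 2)) * n
    count += n
mean /= count
std = (sq_mean / count - mean ** 2).sqrt()
print(mean, std)
```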
Losses
- Cross-Entropy [pytorch, custom] [docs] [code]
- Binary-Cross-Entropy [pytorch, custom] [docs] [code]
- Binary-Cross-Entropy-With-Logits [pytorch, custom] [docs] [code]
- Mean-Squared-Error [pytorch, custom] [docs] [code]
- Mean-Absolute-Error [pytorch, custom] [docs] [code]
Metrics
- Accuracy [custom] [docs] [code]
- Precision [sklearn] [docs] [code]
- Recall [sklearn] [docs] [code]
- F1 [sklearn] [docs] [code]
Optimizers
- Adam [pytorch] [docs] [code]
- AdamW [pytorch] [docs] [code]
- Adamax [pytorch] [docs] [code]
- RAdam [pytorch] [docs] [code]
- NAdam [pytorch] [docs] [code]
Schedulers
- StepLR [pytorch] [docs] [code]
- MultiStepLR [pytorch] [docs] [code]
- PolynomialLRDecay [custom] [docs] [code]
Examples
- Run from yaml config.
import opennn_pytorch
config = 'path to yaml config' # check configs folder
opennn_pytorch.run(config)
- Get encoder and decoder.
import opennn_pytorch
encoder_name = 'resnet18'
decoder_name = 'alexnet'
decoder_mode = 'decoder'
input_channels = 1
number_classes = 10
device = 'cuda'
encoder = opennn_pytorch.encoders.get_encoder(encoder_name, input_channels).to(device)
model = opennn_pytorch.decoders.get_decoder(decoder_name, encoder, number_classes, decoder_mode, device).to(device)
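A quick sanity check of the assembled model; this sketch assumes a single-channel 28x28 input (MNIST-like), so adjust the spatial size to whatever your encoder expects:

```python
import torch

x = torch.randn(1, input_channels, 28, 28).to(device)  # dummy batch of one image
with torch.no_grad():
    out = model(x)
print(out.shape)  # expected: torch.Size([1, number_classes])
```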
- Get dataset.
import opennn_pytorch
from torchvision import transforms
transform_config = 'path to transform yaml config'
dataset_name = 'mnist'
datafiles = None
train_part = 0.7
valid_part = 0.2
test_part = 0.05
transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)
train_data, valid_data, test_data = opennn_pytorch.datasets.get_dataset(dataset_name, train_part, valid_part, test_part, transform, datafiles)
- Get custom dataset.
import opennn_pytorch
from torchvision import transforms
transform_config = 'path to transform yaml config'
dataset_name = 'custom'
images = 'path to folder with images'
annotation = 'path to annotation yaml file with image: class structure'
datafiles = (images, annotation)
train_part = 0.7
valid_part = 0.2
test_part = 0.05
transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)
train_data, valid_data, test_data = opennn_pytorch.datasets.get_dataset(dataset_name, train_part, valid_part, test_part, transform, datafiles)
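The exact annotation format isn't spelled out here beyond "image: class"; a hypothetical way to produce such a file with PyYAML, assuming filenames map to integer class ids:

```python
import yaml  # pip install pyyaml

# Hypothetical mapping: image filename -> integer class id.
annotation = {'img_0001.png': 0, 'img_0002.png': 3, 'img_0003.png': 7}
with open('annotation.yaml', 'w') as f:
    yaml.safe_dump(annotation, f)
```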
- Get optimizer.
import opennn_pytorch
optim_name = 'adam'
lr = 1e-3
betas = (0.9, 0.999)
eps = 1e-8
weight_decay = 1e-6
optimizer = opennn_pytorch.optimizers.get_optimizer(optim_name, model, lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)
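For comparison, the presumably equivalent call in plain PyTorch; whether `get_optimizer` forwards these keyword arguments unchanged is an assumption:

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=lr, betas=betas,
                             eps=eps, weight_decay=weight_decay)
```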
- Get scheduler.
import opennn_pytorch
scheduler_name = 'steplr'
step = 10
gamma = 0.5
scheduler = opennn_pytorch.schedulers.get_scheduler(scheduler_name, optimizer, step=step, gamma=gamma)
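Likewise, assuming 'steplr' maps onto PyTorch's built-in scheduler, the plain-PyTorch counterpart would be:

```python
import torch

scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=step, gamma=gamma)
```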
- Get loss function.
import opennn_pytorch
loss_name = 'custom_mse'
loss_fn, one_hot = opennn_pytorch.losses.get_loss(loss_name)
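`get_loss` also returns a `one_hot` flag, presumably meaning integer labels must be one-hot encoded before a loss like MSE that compares full class vectors. A sketch of that conversion in plain PyTorch, continuing the example above:

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([3, 1, 4])  # integer class ids, shape (n,)
if one_hot:
    targets = F.one_hot(labels, num_classes=10).float()  # shape (n, c)
```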
- Get metrics functions.
import opennn_pytorch
metrics_names = ['accuracy', 'precision', 'recall', 'f1_score']
number_classes = 10
metrics_fn = opennn_pytorch.metrics.get_metrics(metrics_names, nc=number_classes)
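Since precision, recall, and F1 are backed by sklearn, the underlying computations look like the following (macro averaging here is an illustrative choice, not necessarily the library's default):

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 2, 2, 0]
print(precision_score(y_true, y_pred, average='macro'),
      recall_score(y_true, y_pred, average='macro'),
      f1_score(y_true, y_pred, average='macro'))
```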
- Train/Test.
import os
import random
import torch
import opennn_pytorch
algorithm = 'train'
batch_size = 16
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
number_classes = 10
save_every = 5
epochs = 20
checkpoints = 'path to checkpoints folder'
logs = 'path to logs folder'
pred = True  # in test mode, also save a few example predictions
train_dataloader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)
valid_dataloader = torch.utils.data.DataLoader(valid_data, batch_size=batch_size, shuffle=False)
test_dataloader = torch.utils.data.DataLoader(test_data, batch_size=1, shuffle=False)
if algorithm == 'train':
    opennn_pytorch.algo.train(train_dataloader, valid_dataloader, model, optimizer, scheduler, loss_fn, metrics_fn, epochs, checkpoints, logs, device, save_every, one_hot, number_classes)
elif algorithm == 'test':
    test_logs = opennn_pytorch.algo.test(test_dataloader, model, loss_fn, metrics_fn, logs, device, one_hot, number_classes)
    if pred:
        indices = random.sample(range(0, len(test_data)), 10)
        os.mkdir(test_logs + '/prediction', 0o777)
        for i in range(10):
            # prediction() is the library's helper for saving a single test-sample visualization.
            prediction(test_data, model, device, {j: class_names[j] for j in range(number_classes)}, test_logs + f'/prediction/{i}', indices[i])
Citation
Project citation.
License
The project is distributed under the MIT License.