A neural network library for image classification tasks.
Table of contents
- Quick start
- Warnings
- Encoders
- Decoders
- Pretrained
- Datasets
- Losses
- Metrics
- Optimizers
- Schedulers
- Examples
Quick start
1. Direct install.
1.1 Install torch with CUDA support.
```bash
pip install -U torch --extra-index-url https://download.pytorch.org/whl/cu113
```
1.2 Install opennn_pytorch.
```bash
pip install -U opennn_pytorch
```
2. Build the Docker image.
```bash
cd docker/
docker build -t opennn:latest .
```
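To confirm that the CUDA build of torch was installed correctly, a quick check (a minimal sanity test, not part of the library itself) is:
```python
import torch

# Should print True on a machine with a supported NVIDIA GPU and driver.
print(torch.cuda.is_available())
# CUDA version the installed wheel was built against, e.g. '11.3'.
print(torch.version.cuda)
```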
Warnings
- CUDA is only supported on NVIDIA graphics cards.
- The AlexNet decoder does not support the BCE loss family.
- Some dataset/encoder/decoder/loss combinations give poor results; try other combinations.
- The custom cross-entropy loss only supports predictions of shape (n, c) with labels of shape (n); see the sketch after this list.
- Not all options in transform.yaml and config.yaml are required.
- The mean and std listed in the Datasets section must be used in transform.yaml, for example mean=[0.2859], std=[0.3530] -> normalize: [[0.2859], [0.3530]].
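As a minimal sketch of the shape convention for the custom cross-entropy (plain torch tensors, not library API): predictions are (n, c) class scores and labels are (n) integer class indices.
```python
import torch

n, c = 4, 10                        # batch size, number of classes
preds = torch.randn(n, c)           # (n, c) raw class scores
labels = torch.randint(0, c, (n,))  # (n) integer class indices

# The custom cross-entropy expects exactly these shapes;
# one-hot (n, c) labels are not supported by it.
print(preds.shape, labels.shape)    # torch.Size([4, 10]) torch.Size([4])
```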
Encoders
- LeNet [paper] [code]
- AlexNet [paper] [code]
- GoogleNet [paper] [code]
- ResNet18 [paper] [code]
- ResNet34 [paper] [code]
- ResNet50 [paper] [code]
- ResNet101 [paper] [code]
- ResNet152 [paper] [code]
- MobileNet [paper] [code]
- VGG-11 [paper] [code]
- VGG-16 [paper] [code]
- VGG-19 [paper] [code]
Decoders
Pretrained
- LeNet
- AlexNet
- GoogleNet
- ResNet
- MobileNet
- VGG
Datasets
- MNIST [files] [code] [classes=10] [mean=[0.1307], std=[0.3801]]
- FASHION-MNIST [files] [code] [classes=10] [mean=[0.2859], std=[0.3530]]
- CIFAR-10 [files] [code] [classes=10] [mean=[0.491, 0.482, 0.446], std=[0.247, 0.243, 0.261]]
- CIFAR-100 [files] [code] [classes=100] [mean=[0.5071, 0.4867, 0.4408], std=[0.2675, 0.2565, 0.2761]]
- GTSRB [files] [code] [classes=43] [mean=unknown, std=unknown]
- CUSTOM [docs] [code] [example] [classes=nc] [mean=unknown, std=unknown]
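The mean/std values listed above are the ones that belong in the normalize entry of transform.yaml (see the warning earlier). For reference, the MNIST entry is equivalent to the following torchvision call; the library itself reads these values from the config rather than from code:
```python
from torchvision import transforms

# Equivalent of `normalize: [[0.1307], [0.3801]]` in transform.yaml,
# using the MNIST statistics from the list above.
normalize = transforms.Normalize(mean=[0.1307], std=[0.3801])
```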
Losses
- Cross-Entropy [pytorch, custom] [docs] [code]
- Binary-Cross-Entropy [pytorch, custom] [docs] [code]
- Binary-Cross-Entropy-With-Logits [pytorch, custom] [docs] [code]
- Mean-Squared-Error [pytorch, custom] [docs] [code]
- Mean-Absolute-Error [pytorch, custom] [docs] [code]
Metrics
- Accuracy [custom] [docs] [code]
- Precision [sklearn] [docs] [code]
- Recall [sklearn] [docs] [code]
- F1 [sklearn] [docs] [code]
Optimizers
- Adam [pytorch] [docs] [code]
- AdamW [pytorch] [docs] [code]
- Adamax [pytorch] [docs] [code]
- RAdam [pytorch] [docs] [code]
- NAdam [pytorch] [docs] [code]
Schedulers
Examples
- Run from yaml configs.
```python
import opennn_pytorch

config = 'path to yaml config'  # check the configs folder for examples
opennn_pytorch.run(config)
```
- Get encoder and decoder.
```python
import opennn_pytorch

encoder_name = 'resnet18'
decoder_name = 'alexnet'
decoder_mode = 'decoder'
input_channels = 1
number_classes = 10
device = 'cuda'

encoder = opennn_pytorch.encoders.get_encoder(encoder_name, input_channels).to(device)
model = opennn_pytorch.decoders.get_decoder(decoder_name, encoder, number_classes, decoder_mode, device).to(device)
```
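A quick way to sanity-check the assembled model is a dummy forward pass. The 1×28×28 input below is an assumption matching MNIST-like data; adjust it to whatever your transforms produce:
```python
import torch

# Hypothetical input: a batch of one grayscale 28x28 image (MNIST-like).
dummy = torch.randn(1, input_channels, 28, 28).to(device)
with torch.no_grad():
    out = model(dummy)
print(out.shape)  # last dimension should equal number_classes, e.g. torch.Size([1, 10])
```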
- Get dataset.
```python
import opennn_pytorch
from torchvision import transforms

transform_config = 'path to transform yaml config'
dataset_name = 'mnist'
datafiles = None
train_part = 0.7
valid_part = 0.2

transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)

train_data, valid_data, test_data = opennn_pytorch.datasets.get_dataset(dataset_name, train_part, valid_part, transform, datafiles)
```
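Each split behaves like a regular torch dataset, so a sample can be inspected directly. A small sketch, assuming items come back as (image, label) pairs after the transforms:
```python
# Assumes each item is an (image_tensor, label) pair.
image, label = train_data[0]
print(type(image), getattr(image, 'shape', None), label)
print(len(train_data), len(valid_data), len(test_data))  # split sizes
```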
- Get custom dataset.
```python
import opennn_pytorch
from torchvision import transforms

transform_config = 'path to transform yaml config'
dataset_name = 'custom'
images = 'path to folder with images'
annotation = 'path to annotation yaml file with image: class structure'
datafiles = (images, annotation)
train_part = 0.7
valid_part = 0.2

transform_lst = opennn_pytorch.transforms_lst(transform_config)
transform = transforms.Compose(transform_lst)

train_data, valid_data, test_data = opennn_pytorch.datasets.get_dataset(dataset_name, train_part, valid_part, transform, datafiles)
```
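The annotation file maps image file names to class labels. A minimal sketch of producing such a file with PyYAML; the file names and labels below are hypothetical, only the image: class structure is taken from the description above:
```python
import yaml

# Hypothetical annotation: image file name -> integer class label.
annotation_dict = {
    'img_0001.png': 0,
    'img_0002.png': 3,
}
with open('annotation.yaml', 'w') as f:
    yaml.safe_dump(annotation_dict, f)
```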
- Get optimizer.
```python
import opennn_pytorch

optim_name = 'adam'
lr = 1e-3
betas = (0.9, 0.999)
eps = 1e-8
weight_decay = 1e-6

optimizer = opennn_pytorch.optimizers.get_optimizer(optim_name, model, lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)
```
- Get scheduler.
```python
import opennn_pytorch

scheduler_name = 'steplr'
step = 10
gamma = 0.5

scheduler = opennn_pytorch.schedulers.get_scheduler(scheduler_name, optimizer, step=step, gamma=gamma, milestones=None)
```
- Get loss function.
```python
import opennn_pytorch

loss_name = 'custom_mse'
loss_fn, one_hot = opennn_pytorch.losses.get_loss(loss_name)
```
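get_loss also returns a one_hot flag. A sketch of how such a flag is typically used when preparing targets; this is an assumption about its purpose, not a documented contract:
```python
import torch
import torch.nn.functional as F

labels = torch.tensor([1, 0, 3])  # integer class indices
if one_hot:
    # Regression-style losses (e.g. MSE) compare against one-hot targets.
    targets = F.one_hot(labels, num_classes=10).float()
else:
    targets = labels
print(targets.shape)
```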
- Get metrics functions.
```python
import opennn_pytorch

metrics_names = ['accuracy', 'precision', 'recall', 'f1_score']
number_classes = 10

metrics_fn = opennn_pytorch.metrics.get_metrics(metrics_names, nc=number_classes)
```
- Train/Test.
```python
import os

import torch
import opennn_pytorch

algorithm = 'train'
batch_size = 16
class_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
number_classes = 10
save_every = 5
epochs = 20
checkpoints = 'path to checkpoints folder'
logs = 'path to logs folder'
viz = True  # visualize predictions after testing

train_dataloader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)
valid_dataloader = torch.utils.data.DataLoader(valid_data, batch_size=batch_size, shuffle=False)
test_dataloader = torch.utils.data.DataLoader(test_data, batch_size=1, shuffle=False)

if algorithm == 'train':
    opennn_pytorch.algo.train(train_dataloader, valid_dataloader, model, optimizer, scheduler, loss_fn, metrics_fn, epochs, checkpoints, logs, device, save_every, one_hot, number_classes)
elif algorithm == 'test':
    test_logs = opennn_pytorch.algo.test(test_dataloader, model, loss_fn, metrics_fn, logs, device, one_hot, number_classes)
    if viz:
        os.mkdir(test_logs + '/vizualize', 0o777)
        for i in range(number_classes):
            os.mkdir(test_logs + f'/vizualize/{i}', 0o777)
            opennn_pytorch.algo.vizualize(valid_data, model, device, {i: class_names[i] for i in range(number_classes)}, test_logs + f'/vizualize/{i}')
```
Citation
Project citation.
License
The project is distributed under the MIT License.