Just Assemble IT! - A LEGO-style & PyTorch-based Deep Learning Library


jai - Just Assemble It!

Author: Jia Geng

Email: jxg570@miami.edu | gjia0214@hotmail.com

PyPI: https://pypi.org/project/jai/

Introduction

Deep learning is fun. What is not fun is the pipeline digging and rigging. Why can't we just enjoy exploring all kinds of SOTA techniques on interesting datasets instead of wasting our coffee on boring things like implementing the sockets for them?

jai is a LEGO-style, PyTorch-based deep learning library. The main idea behind jai is to reduce the time spent on building all sorts of pipelines and sockets just to plug in those fancy deep learning tricks. The project also aims to provide some handy toolkits for Kaggle.

Dev. Plan

Implement whatever pops into my head when I have time and coffee...

Installation

pip install jai

The library is still at an early stage. Many more functions and tools will be implemented and tested in the near future.

Library Walk Through

jai.dataset.py provides abstract dataset classes that inherit from the PyTorch Dataset class. The difference is that jai.dataset supports data preprocessing and augmentation out of the box.

jai.improc.py provides some handy image processing functions, which can be injected into the jai dataset classes as image preprocessing functions or into the augmentation classes as data augmentation functions.

jai.augments.py provides the augmentation classes that can be attached to the jai dataset classes. It (will) also provide implementations of some advanced augmentation techniques.

jai.trainer.py provides a trainer class that supports the classic PyTorch-style deep learning training pipeline. It has some specific requirements on the implementation of the dataset object.

jai.logger.py provides the result/performance logger classes. These loggers can be attached to the trainer during the training stage and can export and report all kinds of model performance metrics.

jai.arch.py (will) provide handy ways to modify popular vanilla deep learning architectures to make them compatible with the jai framework.

jai.kaggler (will) provide data pipelining solutions and a toolbox for general or selected Kaggle project development. It will also collect some useful tools/models from kagglers.

jai.sota (will) provide some state-of-the-art techniques, such as optimizers and schedulers, that are compatible with the jai framework.

Things to Prepare before Use (not fully tested)

  1. Learn how to use partial(), as it is crucial for this library (a short sketch follows the import below).

    from functools import partial
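
    As a quick, minimal illustration (the scale function and its factor argument are hypothetical placeholders, not part of jai), partial() pre-binds hyper-parameters so that the resulting callable only needs the remaining argument:

    from functools import partial

    # hypothetical helper for illustration only -- not part of jai
    def scale(img, factor=1.0):
        return img * factor

    # bind the hyper-parameter now; the returned callable takes only the image
    darken = partial(scale, factor=0.5)
    # darken(img) is now equivalent to scale(img, factor=0.5)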
    
  2. Prepare/implement your architecture and loss function. Some examples can be found in jai.kaggler.from_kagglers. Both need to be in torch.nn.Module style. If you only need a vanilla architecture, just grab a model from torchvision.models and a loss function from torch.nn. E.g.

    import torchvision.models as models
    import torch.nn as nn

    arch = models.resnet18()
    loss = nn.CrossEntropyLoss()
    
  3. Implement the dataset class. Some examples can be found in jai.kaggler.kaggle_data. The key is to inherit from the jai.dataset.JaiDataset class and include the following code at the end of the __getitem__() method. The JaiDataset constructor can receive two args for preprocessing and augmentation: tsfms= and augments=. A fuller sketch follows the snippet below.

    # do whatever is necessary to get the input and ground truth for the given idx
    # img_id is not necessary, but if you have it, the logger will be able to collect
    # false classifications during evaluation
    # img and y need to be converted to Tensors with the correct dimensions
    # img dim: CxHxW; y dim: Bx1 (single output) or BxK (multiple outputs if you need to predict different things)

    (whatever you implemented) ...
    -> img_id, img, y

    # preprocess the image
    img = self.prepro(img)

    # augment the image during training time
    img = self.augment(img)

    # the output needs to be a dictionary as follows
    # id can be omitted
    return {"id": img_id, "x": img, "y": y}
    
  4. Prepare preprocessing and augmentation. For preprocessing, just use a list to wrap the functions from jai.improc. The list must contain the to_tensor method at the end. The wrapped elements must be functions, not function calls. Most functions take only an image input. For functions that take hyper-parameters, use partial(func) to specify them.

    E.g.

    from jai.improc import * 
    
    tsfms = [denoise, partial(threshold, low=15, adaptive_ksize=(13, 13), C=-10), centralize_object, 
             rescale, standardize, to_tensor]
    

    For augmentation, create a jai.augments.FuncAugmentator object. The FuncAugmentator takes a starting probability and a maximum probability for applying augmentation during training time. It also takes an augmentation function that processes the image. The func= argument likewise takes a function rather than a function call, and that function should have only one required arg, i.e., the input data; use partial() to wrap the hyper-parameters. jai.augments.AugF (will) provide some advanced augmentations. A custom-function sketch follows the example. E.g.:

    from jai.augments import * 
    
    gridmask = FuncAugmentator(p_start=0.1, p_end=1, func=partial(AugF.grid_mask, d1=96, d2=244))
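
    If you want to plug in your own augmentation function instead of AugF, a minimal sketch could look like the following; random_rotate and its max_angle hyper-parameter are hypothetical, and the only requirement described above is that the wrapped callable takes the image as its single required argument:

    import random
    from functools import partial
    from jai.augments import FuncAugmentator

    # hypothetical custom augmentation -- the image is the only required argument
    def random_rotate(img, max_angle=15):
        angle = random.uniform(-max_angle, max_angle)
        ...  # rotate img by angle with your preferred image library
        return img

    # bind the hyper-parameter with partial(), then hand the callable to FuncAugmentator
    rotate_aug = FuncAugmentator(p_start=0.1, p_end=0.5,
                                 func=partial(random_rotate, max_angle=10))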
    
  5. Prepare the optimizer and scheduler. The easiest way is just to grab an optimizer and scheduler from PyTorch. You can also implement your own, but make sure to use the PyTorch style. Also, wrap them in partial functions if you need to specify the hyper-parameters!

    E.g.

    from torch.optim import AdamW
    from torch.optim.lr_scheduler import CosineAnnealingLR
    
    optimizer = partial(AdamW, betas=(0.9, 0.999))
    scheduler = partial(CosineAnnealingLR, T_max=100)
    
  6. Prepare the jai.dataset.Evaluator. This is for generating logs using a specified encoding and scoring system.

    • names= is for hashing the predictors
    • n_classes= indicates how many possible classes each predictor has.
    • criteria= indicates which criterion will be used for calculating scores (precision, recall, accuracy)
    • avg= indicates how to average the scores across different classes (micro, macro)
    • weights= is used when your model has multiple output nodes and specifies how to combine the scores of the predictors.

    E.g., suppose your model is trying to predict the type of dog in an image and whether the dog is being walked by a human.

    from jai.dataset import *
    
    # say your training data has 10 dog types and a binary label for whether a human is in the image
    # you want to use macro precision and are more concerned about has_human
    evaluator = Evaluator(names=['dog_type', 'has_human'], n_classes=[10, 2], criteria='precision', avg='macro', weights=[1, 2])
    
  7. Prepare the Logger. You need to prepare a clean directory for receiving log files, a prefix string for identifying your trial, and an Evaluator. resume=False tells the library that you are training a new model, so it will create a batch of new log files; resume=True tells the library that you are continuing to train your model, so it will write to the old log files. keep='one_best' exports only the best model and overwrites it; keep='all_best' exports every new best model encountered.

    E.g.

    from jai.logger import *
    
    # keep all best models along the training process
    logger = BasicLogger(log_dst, prefix, evaluator, resume=False, keep='all_best')
    

Just Assemble It!

Now we have all we need. Next, just assemble it!

We have

# model
model = models.resnet18()
loss = nn.CrossEntropyLoss()

# dataset
tsfms = [denoise, partial(threshold, low=15, adaptive_ksize=(13, 13), C=-10), centralize_object, rescale, standardize, to_tensor]
gridmask = FuncAugmentator(p_start=0.1, p_end=1, func=partial(AugF.grid_mask, d1=96, d2=244))
dataset = YourJaiDataset(*args, tsfms=tsfms, augments=gridmask)

# optimizer
optimizer = partial(AdamW, betas=(0.9, 0.999))
scheduler = partial(CosineAnnealingLR, T_max=100)

# evaluator (predictor encoder) and logger
evaluator = Evaluator(names=['dog_type'], n_classes=[10], criteria='precision', avg='macro', weights=[1])
logger = BasicLogger(log_dst, prefix, evaluator, resume=False, keep='all_best')

To Train Your Model:

from jai.trainer import *
from torch.utils.data import DataLoader

train_set, eval_set = dataset.split(train_ratio=0.8, seed=2020)  # split into 0.8 : 0.2 with seed 2020
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
eval_loader = DataLoader(eval_set, batch_size=32, shuffle=False)
trainer = BasicTrainer(model, optimizer, scheduler)

trainer.initialize()

trainer.train(train_loader, eval_loader, epochs=50, loss_func=loss, logger=logger)

Now you are:

  • training your deep learning model with AdamW and the CosineAnnealingLR scheduler
  • using image preprocessing and the GridMask augmentation
  • searching for the best model based on the evaluation performance
  • recording and exporting training logs such as
    • batch loss
    • epoch loss and model train/eval accuracy
    • confusion matrix of your best model(s)
    • model parameters and optimizer & scheduler states, exported whenever a better model is found
    • the best model's failed predictions during the eval phase

After the training is done, you can call logger.plot('loss') to check your training progress.

Just Re-Assemble It!

Often you might want to continue the training process. You can do that as follows:

import torch

# load the saved state dicts (find them under your log_dst/model)
model_state = torch.load(model_path)
optimizer_state = torch.load(optimizer_path)
scheduler_state = torch.load(scheduler_path)

# load the checkpoints
trainer.load_model_state(model_state)
trainer.initialize(optimizer_state, scheduler_state)

# prepare a logger with the same log_dst but set resume to True
logger = BasicLogger(log_dst, prefix, evaluator, resume=True, keep='all_best')

# train your model
trainer.train(train_loader, eval_loader, epochs=50, loss_func=loss, logger=logger)
