
EasyPL - a set of wrappers and tools based on PyTorch Lightning for quickly starting to train PyTorch models.

This library is a template project for faster setup of machine learning training pipelines based on PyTorch Lightning. If PyTorch Lightning makes training models easy, then EasyPL makes it super easy.

Quickstart

You can install this library using pip:

pip install easyplib

Note: Apologies for the mismatch between the library name on the PyPI index and in the documentation. PyPI's project name normalization does not allow the project to be named easypl.

You can also install the library manually:

git clone https://github.com/tam2511/EasyPL.git
cd EasyPL
python setup.py install

You can find a description of all functions and the full API in the documentation.

Examples

You can find all examples, with full training pipelines, on Read the Docs.

For the library to work correctly, you need to wrap your optimizer and lr scheduler in the appropriate classes, for example:

from torch import optim

from easypl.optimizers import WrapperOptimizer
from easypl.lr_schedulers import WrapperScheduler

# Pass the optimizer/scheduler class (not an instance) together with its keyword arguments
optimizer = WrapperOptimizer(optim.Adam, lr=1e-4)
# 'interval' controls whether the scheduler steps per epoch or per step
lr_scheduler = WrapperScheduler(optim.lr_scheduler.StepLR, step_size=2, gamma=1e-1, interval='epoch')

When using metrics from the torchmetrics library, you can use the TorchMetric wrapper:

from torchmetrics import F1

from easypl.metrics import TorchMetric

# Report the per-class values of the wrapped metric under readable class names
TorchMetric(F1(num_classes=2, average='none'), class_names=['cat', 'dog'])
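
Wrapped metrics are later handed to the learner as its train_metrics and val_metrics arguments. A minimal sketch, assuming the learner accepts plain lists of wrapped metrics (the variable names are only for illustration and reappear in the learner example below):

from torchmetrics import F1

from easypl.metrics import TorchMetric

# Hypothetical metric lists; they are referenced by the learner example further below
train_metrics = [TorchMetric(F1(num_classes=2, average='none'), class_names=['cat', 'dog'])]
val_metrics = [TorchMetric(F1(num_classes=2, average='none'), class_names=['cat', 'dog'])]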

There are many callbacks available in the EasyPL library. For example, callbacks for image logging, CutMix, and test-time augmentation are shown below.

# Image logging callback
image_logger = ClassificationImageLogger(
    phase='train',
    max_samples=10,
    class_names=['cat', 'dog'],
    max_log_classes=2,
    dir_path='images',
    save_on_disk=True,
)

# Cutmix callback
cutmix = Cutmix(
    on_batch=True,
    p=1.0,
    domen='classification',
)

# Test time augmentation callback
tta = ClassificationImageTestTimeAugmentation(
    n=2,
    augmentations=[VerticalFlip(p=1.0)],
    phase='val'
)

The final part of the training pipeline is the definition of a Learner and the standard launch of training through the Trainer from the PyTorch Lightning library.
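
The learner below refers to a model, a loss function, and dataloaders that are not shown in this description. Here is a minimal, hypothetical setup; the torchvision resnet18, the CrossEntropyLoss, the synthetic dataset, and the assumption that batches are dicts keyed by 'image' and 'target' (to match data_keys/target_keys) are illustrative choices, not part of EasyPL:

import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset
from torchvision.models import resnet18


class RandomImageDataset(Dataset):
    # Synthetic stand-in dataset yielding dicts with 'image' and 'target' keys,
    # matching the data_keys/target_keys passed to the learner below.
    def __len__(self):
        return 64

    def __getitem__(self, idx):
        return {
            'image': torch.rand(3, 224, 224),
            'target': torch.tensor(idx % 2),
        }


# A plain PyTorch model and loss; EasyPL does not impose a specific architecture
model = resnet18(num_classes=2)
loss_f = nn.CrossEntropyLoss()

train_dataloader = DataLoader(RandomImageDataset(), batch_size=8, shuffle=True)
val_dataloader = DataLoader(RandomImageDataset(), batch_size=8)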

from pytorch_lightning import Trainer

# ClassificatorLearner is EasyPL's learner for image classification
learner = ClassificatorLearner(
    model=model,
    loss=loss_f,
    optimizer=optimizer,
    lr_scheduler=lr_scheduler,
    train_metrics=train_metrics,
    val_metrics=val_metrics,
    data_keys=['image'],
    target_keys=['target'],
    multilabel=False
)
trainer = Trainer(
    gpus=1,
    callbacks=[image_logger, cutmix, tta],
    max_epochs=3,
    precision=16
)
trainer.fit(learner, train_dataloaders=train_dataloader, val_dataloaders=[val_dataloader])

TODO

  • Learner for image detection task.
  • Learner for regression task.
  • Example learner for GAN training.
  • Callbacks for target/sample analytics.
  • Finish writing detection part of callbacks.
  • Add tests.

