Project description

EasyPL - a set of wrappers and tools based on PyTorch Lightning for quickly starting to train PyTorch models.

This library is a template project for faster deployment of machine learning model training projects built on PyTorch Lightning. If PyTorch Lightning makes training models easy, then EasyPL makes it super easy.

Quickstart

You can install this library using pip:

pip install easyplib

Note: Sorry for the mismatch between the library name on the PyPI index and in the documentation. PyPI's project name normalization rules do not allow the project to be registered under the name easypl.

You can also install the library manually:

git clone https://github.com/tam2511/EasyPL.git
cd EasyPL
python setup.py install
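
Either way, the package is imported under the name easypl (not easyplib), so a quick check that the installation worked is:

python -c "import easypl"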

You can find a description of all functions and the API in the documentation.

Examples

You can find all examples, with full training pipelines, in the documentation on Read the Docs.

For the library to work correctly, you need to wrap your optimizer and lr scheduler in the appropriate classes, for example:

from torch import optim

from easypl.optimizers import WrapperOptimizer
from easypl.lr_schedulers import WrapperScheduler

# The optimizer/scheduler classes are passed uninstantiated; their keyword
# arguments are stored and used when the Learner builds the real objects.
optimizer = WrapperOptimizer(optim.Adam, lr=1e-4)
lr_scheduler = WrapperScheduler(optim.lr_scheduler.StepLR, step_size=2, gamma=1e-1, interval='epoch')
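
The wrappers simply take the optimizer or scheduler class together with its keyword arguments, so the same pattern should apply to any torch.optim combination. A minimal sketch, assuming keyword arguments are forwarded unchanged and that interval='step' maps to per-batch scheduling as in PyTorch Lightning:

# Sketch: SGD with a OneCycleLR schedule stepped every batch (values are illustrative)
optimizer = WrapperOptimizer(optim.SGD, lr=1e-2, momentum=0.9)
lr_scheduler = WrapperScheduler(optim.lr_scheduler.OneCycleLR, max_lr=1e-2, total_steps=1000, interval='step')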

When using metrics from the torchmetrics library, you can use the TorchMetric wrapper:

from torchmetrics import F1  # renamed to F1Score in newer torchmetrics releases
from easypl.metrics import TorchMetric

# class_names maps class indices to readable names in the logged results
TorchMetric(F1(num_classes=2, average='none'), class_names=['cat', 'dog'])
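
The Learner defined later expects train_metrics and val_metrics; assembling them as plain lists of wrapped metrics is an assumption based on those arguments, but a minimal sketch would look like:

# Sketch: metric collections for the Learner (the list form is an assumption)
train_metrics = [TorchMetric(F1(num_classes=2, average='none'), class_names=['cat', 'dog'])]
val_metrics = [TorchMetric(F1(num_classes=2, average='none'), class_names=['cat', 'dog'])]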

There are many callbacks available in the EasyPL library. For example, callbacks for image logging, cutmix and test-time augmentation are defined below.

# Import paths assumed from the documentation layout; adjust if they differ
from easypl.callbacks import ClassificationImageLogger, Cutmix, ClassificationImageTestTimeAugmentation
from albumentations import VerticalFlip

# Image logging callback
image_logger = ClassificationImageLogger(
    phase='train',
    max_samples=10,
    class_names=['cat', 'dog'],
    max_log_classes=2,
    dir_path='images',
    save_on_disk=True,
)

# Cutmix callback
cutmix = Cutmix(
    on_batch=True,
    p=1.0,
    domen='classification',
)

# Test time augmentation callback
tta = ClassificationImageTestTimeAugmentation(
    n=2,
    augmentations=[VerticalFlip(p=1.0)],
    phase='val'
)
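
The phase argument controls where a callback is applied. Assuming the image logger accepts phase='val' the same way the test time augmentation callback does, logging validation images would be a small variation of the snippet above:

# Sketch: the same logger attached to the validation phase (phase value assumed to be accepted)
val_image_logger = ClassificationImageLogger(
    phase='val',
    max_samples=10,
    class_names=['cat', 'dog'],
    max_log_classes=2,
    dir_path='val_images',
    save_on_disk=True,
)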

The final part of the training pipeline is defining the Learner and launching training in the standard way through the Trainer from the PyTorch Lightning library.

from pytorch_lightning import Trainer
from easypl.learners import ClassificatorLearner  # import path assumed; see the documentation

learner = ClassificatorLearner(
    model=model,
    loss=loss_f,
    optimizer=optimizer,
    lr_scheduler=lr_scheduler,
    train_metrics=train_metrics,
    val_metrics=val_metrics,
    data_keys=['image'],
    target_keys=['target'],
    multilabel=False
)
trainer = Trainer(
    gpus=1,
    callbacks=[image_logger, cutmix, tta],
    max_epochs=3,
    precision=16
)
trainer.fit(learner, train_dataloaders=train_dataloader, val_dataloaders=[val_dataloader])
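
The snippet above assumes model, loss_f and the dataloaders already exist. A minimal sketch of those missing pieces, assuming (as the data_keys/target_keys arguments suggest) that each batch is a dict with 'image' and 'target' entries; the dataset below is a placeholder, not part of EasyPL:

import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
from torchvision import models

class DictDataset(Dataset):
    # Placeholder dataset returning dict samples with 'image' and 'target' keys
    def __init__(self, size=100):
        self.size = size

    def __len__(self):
        return self.size

    def __getitem__(self, idx):
        return {'image': torch.rand(3, 224, 224), 'target': torch.randint(0, 2, (1,)).item()}

model = models.resnet18(num_classes=2)
loss_f = nn.CrossEntropyLoss()
train_dataloader = DataLoader(DictDataset(), batch_size=8, shuffle=True)
val_dataloader = DataLoader(DictDataset(), batch_size=8)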

TODO

  • Learner for image detection task.
  • Learner for regression task.
  • Example learner for GAN training.
  • Callbacks for target/sample analytics.
  • Finish writing detection part of callbacks.
  • Add tests.

Download files


Source Distribution

easyplib-0.3.tar.gz (30.2 kB)

File metadata

  • Download URL: easyplib-0.3.tar.gz
  • Upload date:
  • Size: 30.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.7.13

File hashes for easyplib-0.3.tar.gz

  • SHA256: f6aa23821083774350451846053f8c7db07dc3d3ce7921d4b83b73d3da930c22
  • MD5: 3ac5ac7c1fd9b1f574de6b685a951a67
  • BLAKE2b-256: 6612c7f50514d1971ff59093adba5e8229216a832be49848b074ee25caf65bb1

