
An easy-to-use tool for training PyTorch deep learning models

Project description

DeepEpochs

A training tool for PyTorch deep learning models.

Installation

pip install deepepochs

Usage

Data requirements

  • The training, validation, and test sets are torch.utils.data.DataLoader objects
  • Each mini-batch produced by a DataLoader is a tuple or list whose last element is the label
    • If the data has no labels, set the last element to None
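For example, a labeled mini-batch is a tuple whose last element is the target, and an unlabeled one ends with None. A plain-Python sketch of this batch layout (the names `features`, `labels`, and `split_batch` are illustrative, not part of the library):

```python
# Sketch of the mini-batch layout DeepEpochs expects from a DataLoader.
# Names below (features, labels, split_batch) are illustrative only.
features = [[0.1, 0.2], [0.3, 0.4]]   # model inputs for one mini-batch
labels = [0, 1]                        # targets: always the LAST element

labeled_batch = (features, labels)     # supervised batch
unlabeled_batch = (features, None)     # no targets: last item is None

def split_batch(batch):
    """Separate the model inputs from the target (the last element)."""
    *inputs, target = batch
    return inputs, target

inputs, target = split_batch(labeled_batch)
assert target == [0, 1]
```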

Metric computation

  • Each metric is a function
    • It takes two arguments: the model's predictions and the labels
    • It returns the metric value on the current mini-batch
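Concretely, a metric is any callable of the form f(preds, targets) -> value. The examples below use torchmetrics; this plain-Python accuracy just illustrates the required signature:

```python
# A metric is any callable taking (predictions, targets) for one mini-batch
# and returning the metric value on that batch. Plain-Python sketch:
def batch_accuracy(preds, targets):
    """Fraction of correct predictions in this mini-batch.
    preds: predicted class indices; targets: true class indices."""
    correct = sum(int(p == t) for p, t in zip(preds, targets))
    return correct / len(targets)

print(batch_accuracy([1, 0, 2, 2], [1, 0, 1, 2]))  # 0.75
```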

Example: a standard training workflow

from deepepochs import Trainer, CheckCallback, rename, EpochTask
import torch
from torch import nn
from torch.nn import functional as F
from torchvision.datasets import MNIST
from torchvision import transforms
from torch.utils.data import DataLoader, random_split
from torchmetrics import functional as MF

# datasets
data_dir = './dataset'
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
mnist_full = MNIST(data_dir, train=True, transform=transform, download=True)
train_ds, val_ds, _ = random_split(mnist_full, [5000, 5000, 50000])
test_ds = MNIST(data_dir, train=False, transform=transform, download=True)

# dataloaders
train_dl = DataLoader(train_ds, batch_size=32)
val_dl = DataLoader(val_ds, batch_size=32)
test_dl = DataLoader(test_ds, batch_size=32)

# pytorch model
channels, width, height = (1, 28, 28)
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(channels * width * height, 64),
    nn.ReLU(),
    nn.Dropout(0.1),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Dropout(0.1),
    nn.Linear(64, 10)
)

def acc(preds, targets):
    return MF.accuracy(preds, targets, task='multiclass', num_classes=10)

@rename('')
def multi_metrics(preds, targets):
    return {
        'p': MF.precision(preds, targets, task='multiclass', num_classes=10),
        'r': MF.recall(preds, targets, task='multiclass', num_classes=10)
        }


checker = CheckCallback('loss', on_stage='val', mode='min', patience=2)
opt = torch.optim.Adam(model.parameters(), lr=2e-4)

trainer = Trainer(model, F.cross_entropy, opt=opt, epochs=100, callbacks=checker, metrics=[acc])

# Example 1: automatically resume from a checkpoint
progress = trainer.fit(train_dl, val_dl, metrics=[multi_metrics], resume=True)
test_rst = trainer.test(test_dl)

# Example 2: define EpochTask tasks (training, validation, or testing on one DataLoader is called a task)
# t1 = EpochTask(train_dl, metrics=[acc])
# t2 = EpochTask(val_dl, metrics=[multi_metrics], do_loss=True)
# progress = trainer.fit(train_tasks=t1, val_tasks=t2)
# test_rst = trainer.test(tasks=t2)

# Example 3: multi-task training, validation, and testing
# t1 = EpochTask(train_dl, metrics=[acc])
# t2 = EpochTask(val_dl, metrics=[acc, multi_metrics], do_loss=True)
# progress = trainer.fit(train_dl, val_tasks=[t1, t2])
# test_rst = trainer.test(tasks=[t1, t2])

Non-standard training workflows

  • Method 1:
    • Step 1: subclass deepepochs.Callback and implement the Callback you need
    • Step 2: train the model with deepepochs.Trainer, passing the custom Callback object via the Trainer's callbacks parameter
  • Method 2:
    • Step 1: subclass deepepochs.TrainerBase and customize the Trainer by implementing the step, train_step, val_step, test_step, and evaluate_step methods
      • These methods take three arguments
        • batch_x: one mini-batch of model inputs
        • batch_y: one mini-batch of labels
        • **step_args: a dict of keyword arguments, including do_loss, metrics, etc.
      • The return value is a dict
        • key: the metric name
        • value: an object of a deepepochs.PatchBase subclass; the available Patches are
          • ValuePatch: accumulates the epoch-level mean from each batch's (precomputed) metric mean and batch_size
          • TensorPatch: stores each batch's model predictions and labels, and accumulates the epoch-level metric with the given metric function
          • MeanPatch: stores each batch's metric mean, and accumulates the epoch-level mean with the given metric function
          • ConfusionPatch: accumulates metrics based on the confusion matrix
          • You can also subclass PatchBase to define a new Patch (for metrics that need more complex computation) by implementing
            • the PatchBase.add method
            • the PatchBase.forward method
    • Step 2: train the model with the customized Trainer.
  • Method 3:
    • Step 1: subclass deepepochs.EpochTask and define the step, train_step, val_step, test_step, and evaluate_step methods in it
      • They are defined the same way as the *_step methods in Trainer
      • The step method has the highest priority: it is used for training as well as validation and testing (once step is defined, the other methods are ignored)
      • val_step and test_step take priority over evaluate_step
      • The *_step methods in EpochTask take priority over those in Trainer
    • Step 2: train with the new EpochTask
      • Pass the EpochTask objects as the train_tasks and val_tasks arguments of Trainer.fit, or as the tasks argument of Trainer.test
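The ValuePatch idea above (accumulating an epoch-level mean from per-batch means and batch sizes) can be sketched independently of the library. The class name below is illustrative; only the add/forward interface mirrors the PatchBase contract described above, and this is not DeepEpochs' actual code:

```python
# Illustrative reimplementation of the ValuePatch idea: accumulate an
# epoch-level mean from precomputed per-batch metric means, weighted by
# batch size. Mirrors the add/forward interface; NOT DeepEpochs' own code.
class MeanAccumulator:
    def __init__(self, batch_mean, batch_size):
        self.total = batch_mean * batch_size   # weighted sum so far
        self.count = batch_size                # samples seen so far

    def add(self, other):
        """Merge another batch's patch into this one (called per batch)."""
        self.total += other.total
        self.count += other.count
        return self

    def forward(self):
        """Epoch-level value: weighted mean over all accumulated batches."""
        return self.total / self.count

# Two batches of different sizes: mean 0.5 over 10 samples, 1.0 over 30.
patch = MeanAccumulator(0.5, 10)
patch.add(MeanAccumulator(1.0, 30))
print(patch.forward())  # (0.5*10 + 1.0*30) / 40 = 0.875
```

Weighting by batch size matters whenever the last batch of an epoch is smaller than the rest; a naive average of batch means would over-weight it.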

Data flow diagram

https://github.com/hitlic/deepepochs/blob/main/imgs/data_flow.png

Project details


Release history

Download files


Source Distribution

deepepochs-0.3.7.tar.gz (107.6 kB)

Uploaded Source

Built Distribution

deepepochs-0.3.7-py3-none-any.whl (20.5 kB)

Uploaded Python 3

File details

Details for the file deepepochs-0.3.7.tar.gz.

File metadata

  • Download URL: deepepochs-0.3.7.tar.gz
  • Upload date:
  • Size: 107.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.6.0 importlib_metadata/4.8.2 pkginfo/1.8.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.8.12

File hashes

Hashes for deepepochs-0.3.7.tar.gz
Algorithm Hash digest
SHA256 5c179c3170a42a5c4fa7969ecda7c65659932eb744e35b26827dfe05caa410c9
MD5 a4efad52bc4c43ac6c84466b93a13cb5
BLAKE2b-256 38ca8ac8a15622a943dae3f6afca8883ed4cafa2a3d86bd67d05ee05fb1a26c7


File details

Details for the file deepepochs-0.3.7-py3-none-any.whl.

File metadata

  • Download URL: deepepochs-0.3.7-py3-none-any.whl
  • Upload date:
  • Size: 20.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.6.0 importlib_metadata/4.8.2 pkginfo/1.8.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.8.12

File hashes

Hashes for deepepochs-0.3.7-py3-none-any.whl
Algorithm Hash digest
SHA256 800197d68649935a2bfb394ff787c3346402a1691321251c91fef3d3b270c3d9
MD5 f0d498f9dce61ce9ef73ab0d24621172
BLAKE2b-256 b35a48e9453a3bc919d2cbaf98b43aefc34222090fd0f1b20903b8fcc69daaaf

