
torchlit - thin wrappers for PyTorch

Project description

torchlit

torchlit is an in-progress collection of PyTorch utilities and thin wrappers that can be used for a variety of purposes.

With every project, I intend to add functionality that is general enough to serve as boilerplate across different use cases.

It lets you write less code and focus on the model itself rather than on training-loop verbosity and bookkeeping. It also includes data utilities for tasks such as loading data from a dataframe or from a folder for inference.
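For context, below is a rough sketch of the boilerplate such a data utility wraps: a minimal plain-PyTorch Dataset backed by a pandas dataframe. The column names (path, label) and the transform hook are illustrative assumptions for this sketch, not torchlit's actual API.

import pandas as pd
import torch
from PIL import Image
from torch.utils.data import Dataset

class DataFrameDataset(Dataset):
    # Hypothetical dataframe-backed dataset; torchlit's own helper may differ.
    def __init__(self, df: pd.DataFrame, transform=None):
        self.df = df.reset_index(drop=True)
        self.transform = transform  # e.g. a torchvision transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        image = Image.open(row["path"]).convert("RGB")  # assumed column name
        if self.transform is not None:
            image = self.transform(image)
        return image, torch.tensor(row["label"], dtype=torch.long)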

Sample usage

!pip install torchlit -q
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset

import torchlit

class Net(torchlit.Model):
    def __init__(self):
        # torchlit.Model takes the loss function, plus flags to record
        # per-epoch history and to print metrics as training progresses
        super(Net, self).__init__(F.cross_entropy, record=True, verbose=True)
        self.conv1 = nn.Conv2d(3, 6, 3)
        self.conv2 = nn.Conv2d(6, 12, 3)
        self.flatten = nn.Flatten()
        # two unpadded 3x3 convs shrink 128x128 to 124x124,
        # so the flattened size is 12 * 124 * 124 = 184512
        self.lin = nn.Linear(184512, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = self.flatten(x)
        return self.lin(x)

model = Net()
model
    Net(
      (conv1): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
      (conv2): Conv2d(6, 12, kernel_size=(3, 3), stride=(1, 1))
      (flatten): Flatten(start_dim=1, end_dim=-1)
      (lin): Linear(in_features=184512, out_features=10, bias=True)
    )
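As a sanity check on the Linear layer's in_features: two unpadded 3x3 convolutions shrink a 128x128 input to 124x124, and 12 channels * 124 * 124 = 184512. Pushing a dummy batch through the conv stack confirms it:

with torch.no_grad():
    dummy = torch.randn(1, 3, 128, 128)
    feats = model.flatten(F.relu(model.conv2(F.relu(model.conv1(dummy)))))
feats.shape  # torch.Size([1, 184512])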
# toy data: random images with random labels (there is nothing real to learn)
train_ds = [(x, y) for x, y in zip(torch.randn((10, 3, 128, 128)), torch.randint(0, 10, (10,)))]
val_ds = [(x, y) for x, y in zip(torch.randn((3, 3, 128, 128)), torch.randint(0, 10, (3,)))]

train_dl = DataLoader(train_ds)  # default batch_size=1
val_dl = DataLoader(val_ds)
EPOCHS = 10
model = Net()

for epoch in range(EPOCHS):
    for xb in train_dl:
        model.train_step(xb)

    for xb in val_dl:
        model.val_step(xb)

    model.epoch_end()

    Epoch [0]: train_loss: 2.3065271377563477, val_loss: 2.3060548305511475, val_acc: 0.0
    Epoch [1]: train_loss: 2.3065271377563477, val_loss: 2.3060548305511475, val_acc: 0.0
    Epoch [2]: train_loss: 2.3065271377563477, val_loss: 2.3060548305511475, val_acc: 0.0
    Epoch [3]: train_loss: 2.3065271377563477, val_loss: 2.3060548305511475, val_acc: 0.0
    Epoch [4]: train_loss: 2.3065271377563477, val_loss: 2.3060548305511475, val_acc: 0.0
    Epoch [5]: train_loss: 2.3065271377563477, val_loss: 2.3060548305511475, val_acc: 0.0
    Epoch [6]: train_loss: 2.3065271377563477, val_loss: 2.3060548305511475, val_acc: 0.0
    Epoch [7]: train_loss: 2.3065271377563477, val_loss: 2.3060548305511475, val_acc: 0.0
    Epoch [8]: train_loss: 2.3065271377563477, val_loss: 2.3060548305511475, val_acc: 0.0
    Epoch [9]: train_loss: 2.3065271377563477, val_loss: 2.3060548305511475, val_acc: 0.0
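Note that no optimizer appears in the loop above; how train_step updates the weights is internal to torchlit. For readers new to the pattern, the three calls roughly condense a conventional loop like the sketch below. The optimizer choice and metric bookkeeping here are assumptions for illustration, not torchlit's verified internals, and with purely random data the metrics stay flat either way.

# Hand-rolled equivalent of the torchlit loop above (illustrative only).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # assumed optimizer

for epoch in range(EPOCHS):
    model.train()
    train_losses = []
    for xb, yb in train_dl:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(xb), yb)
        loss.backward()
        optimizer.step()
        train_losses.append(loss.item())

    model.eval()
    val_losses, val_accs = [], []
    with torch.no_grad():
        for xb, yb in val_dl:
            out = model(xb)
            val_losses.append(F.cross_entropy(out, yb).item())
            val_accs.append((out.argmax(dim=1) == yb).float().mean().item())

    print(f"Epoch [{epoch}]: train_loss: {sum(train_losses) / len(train_losses)}, "
          f"val_loss: {sum(val_losses) / len(val_losses)}, val_acc: {sum(val_accs) / len(val_accs)}")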
model.history
[{'epoch': 0,
  'train_loss': 2.3326268196105957,
  'val_acc': 0.0,
  'val_loss': 2.2232437133789062},
 {'epoch': 1,
  'train_loss': 2.3326268196105957,
  'val_acc': 0.0,
  'val_loss': 2.2232437133789062},
 {'epoch': 2,
  'train_loss': 2.3326268196105957,
  'val_acc': 0.0,
  'val_loss': 2.2232437133789062},
 {'epoch': 3,
  'train_loss': 2.3326268196105957,
  'val_acc': 0.0,
  'val_loss': 2.2232437133789062},
 {'epoch': 4,
  'train_loss': 2.3326268196105957,
  'val_acc': 0.0,
  'val_loss': 2.2232437133789062},
 {'epoch': 5,
  'train_loss': 2.3326268196105957,
  'val_acc': 0.0,
  'val_loss': 2.2232437133789062},
 {'epoch': 6,
  'train_loss': 2.3326268196105957,
  'val_acc': 0.0,
  'val_loss': 2.2232437133789062},
 {'epoch': 7,
  'train_loss': 2.3326268196105957,
  'val_acc': 0.0,
  'val_loss': 2.2232437133789062},
 {'epoch': 8,
  'train_loss': 2.3326268196105957,
  'val_acc': 0.0,
  'val_loss': 2.2232437133789062},
 {'epoch': 9,
  'train_loss': 2.3326268196105957,
  'val_acc': 0.0,
  'val_loss': 2.2232437133789062}]
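Since record=True stores one dict per epoch, the history can be fed straight into matplotlib (or pandas) for plotting:

import matplotlib.pyplot as plt

epochs = [h["epoch"] for h in model.history]
plt.plot(epochs, [h["train_loss"] for h in model.history], label="train_loss")
plt.plot(epochs, [h["val_loss"] for h in model.history], label="val_loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()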

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

torchlit-0.1.4.tar.gz (6.5 kB)

Uploaded Source

Built Distribution

torchlit-0.1.4-py3-none-any.whl (7.0 kB)

Uploaded Python 3

File details

Details for the file torchlit-0.1.4.tar.gz.

File metadata

  • Download URL: torchlit-0.1.4.tar.gz
  • Upload date:
  • Size: 6.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/49.2.1 requests-toolbelt/0.9.1 tqdm/4.58.0 CPython/3.9.1

File hashes

Hashes for torchlit-0.1.4.tar.gz

  • SHA256: 3e1bf3d11aaaf6741fe1e5a867d9529404b008655f52388da9ca26518737ce1c
  • MD5: df8f46aaeeab35f226ad5fce8560dd83
  • BLAKE2b-256: d6050dd924eac596185fefed63ff9b35b9e564f11872edded70fa818435f642c

See the PyPI documentation for more details on using hashes.
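For example, to verify a downloaded archive against the SHA256 digest above, the standard-library hashlib is enough:

import hashlib

expected = "3e1bf3d11aaaf6741fe1e5a867d9529404b008655f52388da9ca26518737ce1c"
with open("torchlit-0.1.4.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "hash mismatch!")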

File details

Details for the file torchlit-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: torchlit-0.1.4-py3-none-any.whl
  • Upload date:
  • Size: 7.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/49.2.1 requests-toolbelt/0.9.1 tqdm/4.58.0 CPython/3.9.1

File hashes

Hashes for torchlit-0.1.4-py3-none-any.whl

  • SHA256: f58144401ff259ef1062620e0897aab356b31aab4245fe08075fb51d9b31e54f
  • MD5: c380e239d575be15a43fd4104f707f4d
  • BLAKE2b-256: 5a3d3000103a3f7034d4f48ee70de6fb273f9adaf5285a27abff55de5f8773bc

See the PyPI documentation for more details on using hashes.
