
TorchFit is a simple, easy-to-use, and minimalistic training-helper for PyTorch


TorchFit

TorchFit is a bare-bones, minimalistic training-helper for PyTorch that exposes an easy-to-use fit method in the style of fastai and Keras.

TorchFit is intended to be minimally invasive, with a tiny footprint and as little bloat as possible. It is well suited to those who are new to training models in PyTorch.

Usage

# normal PyTorch stuff
train_loader = create_your_training_data_loader()
val_loader = create_your_validation_data_loader()
test_loader = create_your_test_data_loader()
model = create_your_pytorch_model()

# wrap model and data in Learner
import torchfit
learner = torchfit.Learner(model, train_loader, val_loader=val_loader)

# estimate LR using Learning Rate Finder
learner.find_lr()

# train using 1cycle learning rate policy
learner.fit_onecycle(1e-4, 3)

# plot training vs. validation loss
learner.plot('loss')

# make predictions as easy as in Keras
y_pred = learner.predict(test_loader)

# save model and reload later
learner.save('/tmp/mymodel')
learner.load('/tmp/mymodel')

TorchFit Training Loop

Tutorials and Examples

Features

Learning Rate Finder

learner.find_lr()
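
The LR finder implements the learning-rate range test popularized by fastai: it runs a short mock training pass while exponentially increasing the learning rate and records the loss at each step, so you can pick a rate just below the point where the loss diverges. The sketch below illustrates the general technique in plain PyTorch; it is not TorchFit's implementation, and model, train_loader, and criterion are assumed to already exist.

# Sketch of a learning-rate range test (illustration of the technique, not TorchFit's code).
# Assumes model, train_loader, and criterion already exist.
import torch

def lr_range_test(model, train_loader, criterion, start_lr=1e-7, end_lr=10.0, num_steps=100):
    optimizer = torch.optim.SGD(model.parameters(), lr=start_lr)
    gamma = (end_lr / start_lr) ** (1.0 / num_steps)   # exponential growth factor per step
    lrs, losses = [], []
    for step, (inputs, targets) in enumerate(train_loader):
        if step >= num_steps:
            break
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
        lrs.append(optimizer.param_groups[0]['lr'])
        losses.append(loss.item())
        for group in optimizer.param_groups:
            group['lr'] *= gamma                        # raise the LR for the next step
    # Plot losses against lrs and choose an LR just before the loss starts to blow up.
    return lrs, losses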

A fit method for Training

# Examples
learner.fit(lr, epochs)
learner.fit_onecycle(lr, epochs)
learner.fit(lr, epochs, schedulers=[scheduler])
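
The schedulers argument accepts standard torch.optim.lr_scheduler objects that are stepped during training. A minimal sketch of how this could be wired up is shown below; the StepLR construction is standard PyTorch, but the optimizer keyword on the Learner constructor is an assumption made for illustration and should be checked against the TorchFit documentation.

# Illustration only: drive fit() with a standard PyTorch LR scheduler.
# NOTE: the optimizer= keyword below is an assumption, not confirmed by this page.
import torch
from torch.optim.lr_scheduler import StepLR

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = StepLR(optimizer, step_size=1, gamma=0.9)   # decay the LR by 10% per step

learner = torchfit.Learner(model, train_loader, val_loader=val_loader, optimizer=optimizer)
learner.fit(1e-4, 3, schedulers=[scheduler])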

Easy-to-Execute Testing and Predictions

# Examples
outputs = learner.predict(test_loader)
outputs, targets = learner.predict(test_loader, return_targets=True)

text = 'Shares of IBM rose today.'
predicted_label = learner.predict_example(text, preproc_fn=preprocess, labels=labels)

Gradient Accumulation

learner.fit_onecycle(lr, 1, accumulation_steps=8)
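
Gradient accumulation simulates a larger effective batch size by summing gradients over several mini-batches before each optimizer step (here, an effective batch 8 times larger). The general pattern in plain PyTorch looks like the sketch below; this is an illustration of the technique, not TorchFit's internals, and model, train_loader, criterion, and optimizer are assumed to exist.

# General gradient-accumulation pattern (illustration only).
accumulation_steps = 8
optimizer.zero_grad()
for step, (inputs, targets) in enumerate(train_loader):
    loss = criterion(model(inputs), targets)
    (loss / accumulation_steps).backward()    # scale so the summed gradient matches one large batch
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                      # update weights once every 8 mini-batches
        optimizer.zero_grad()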

Gradient Clipping

learner.fit_onecycle(lr, 1, gradient_clip_val=1)
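
Gradient clipping rescales gradients whose global norm exceeds a threshold (here 1), which helps guard against exploding gradients. In plain PyTorch the same effect is achieved with torch.nn.utils.clip_grad_norm_; the sketch below shows the underlying technique (not TorchFit's internals), with model, criterion, optimizer, inputs, and targets assumed to exist.

# Clip the global gradient norm to 1.0 before each optimizer step (illustration only).
import torch

loss = criterion(model(inputs), targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()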

Mixed Precision Training

torchfit.Learner(model, train_loader, val_loader=val_loader, use_amp=True, amp_level='O2')
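
The use_amp and amp_level options follow the conventions of NVIDIA's apex library, whose optimization levels range from 'O0' (full fp32) to 'O3' (full fp16), with 'O2' being a common mixed-precision setting. The sketch below shows the typical apex pattern such options correspond to; it is an illustration of the technique, not a description of TorchFit's internals, and model, optimizer, criterion, inputs, and targets are assumed to exist.

# Typical apex mixed-precision setup (illustration only).
from apex import amp

model, optimizer = amp.initialize(model, optimizer, opt_level='O2')

loss = criterion(model(inputs), targets)
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()    # backprop on the scaled loss to avoid fp16 underflow
optimizer.step()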

Multi-GPU Training and GPU Selection

To train on the first two GPUs (0 and 1):

learner = torchfit.Learner(model, train_loader, val_loader=test_loader, gpus=[0,1])

To train only on the second GPU (GPU 1), you can do either this:

learner = torchfit.Learner(model, train_loader, val_loader=test_loader, gpus=[1])

or this:

learner = torchfit.Learner(model, train_loader, val_loader=test_loader, device='cuda:1')
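
In plain PyTorch, the equivalent of these options is to wrap the model in torch.nn.DataParallel with explicit device_ids, or to move the model (and each batch) to a specific device. A brief sketch of those underlying calls (illustration only, not TorchFit's internals):

import torch

# Use the first two GPUs via DataParallel.
model = torch.nn.DataParallel(model, device_ids=[0, 1]).cuda()

# Or pin the model to the second GPU only; batches must be moved to the same device.
device = torch.device('cuda:1')
model = model.to(device)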

Resetting Weights of Model

learner.reset_weights()
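
Re-initializing a model between runs is commonly done in PyTorch by calling reset_parameters on every submodule that defines it. A minimal sketch of that pattern (illustration only; not necessarily how TorchFit implements reset_weights):

# Re-initialize every layer that provides reset_parameters (illustration only).
for module in model.modules():
    if hasattr(module, 'reset_parameters'):
        module.reset_parameters()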

Saving/Loading Model

learner.save('/tmp/mymodel')
learner.load('/tmp/mymodel')
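
In plain PyTorch, saving and restoring a model is typically done through its state_dict, as in the sketch below; exactly what learner.save and learner.load wrap is an implementation detail of TorchFit, and the '.pt' filename is only illustrative.

import torch

# Save and restore the model weights via the state_dict (illustration only).
torch.save(model.state_dict(), '/tmp/mymodel.pt')
model.load_state_dict(torch.load('/tmp/mymodel.pt'))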

Installation

After ensuring PyTorch is installed, install TorchFit with:

pip3 install torchfit
