

Project description

The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.


Website | Key Features | How To Use | Docs | Examples | Community | Grid AI | License


PyTorch Lightning is just organized PyTorch

Lightning disentangles PyTorch code to decouple the science from the engineering.


Lightning Design Philosophy

Lightning structures PyTorch code with the following principles, which make it reusable and shareable:

  • Research code (the LightningModule).
  • Engineering code (you delete this; it is handled by the Trainer).
  • Non-essential research code (logging, etc.; this goes in Callbacks).
  • Data (use PyTorch DataLoaders or organize them into a LightningDataModule, as sketched below).

Once you do this, you can train on multiple GPUs, TPUs, CPUs, IPUs, and HPUs, and even in 16-bit precision, without changing your code!
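As an illustration of the data point above, dataset downloading, splitting, and DataLoader creation can be grouped into a LightningDataModule. This is a minimal sketch; the class name, data directory, and batch size are illustrative, not from the original:

import pytorch_lightning as pl
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST


# hypothetical example: grouping MNIST data code into a LightningDataModule
class MNISTDataModule(pl.LightningDataModule):
    def __init__(self, data_dir=".", batch_size=32):
        super().__init__()
        self.data_dir = data_dir
        self.batch_size = batch_size

    def setup(self, stage=None):
        # split the 60k MNIST training images into train/val sets
        dataset = MNIST(self.data_dir, download=True, transform=transforms.ToTensor())
        self.train_set, self.val_set = random_split(dataset, [55000, 5000])

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=self.batch_size)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=self.batch_size)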

Get started in just 15 minutes


Continuous Integration

Lightning is rigorously tested across multiple CPUs, GPUs, TPUs, IPUs, and HPUs and against major Python and PyTorch versions.

Current build statuses

| System / PyTorch ver.    | 1.9          | 1.10         | 1.12 (latest) |
| ------------------------ | ------------ | ------------ | ------------- |
| Linux py3.7 [GPUs**]     | -            | -            | -             |
| Linux py3.7 [TPUs***]    | CircleCI     | -            | -             |
| Linux py3.8 [IPUs]       | Build Status | -            | -             |
| Linux py3.8 [HPUs]       | -            | Build Status | -             |
| Linux py3.8 (with Conda) | Test         | Test         | -             |
| Linux py3.9 (with Conda) | -            | -            | Test          |
| Linux py3.{7,9}          | -            | -            | Test          |
| OSX py3.{7,9}            | -            | -            | Test          |
| Windows py3.{7,9}        | -            | -            | Test          |

  • ** tests run on two NVIDIA P100 GPUs
  • *** tests run on Google GKE TPUv2/v3; TPU py3.7 means we support Colab and Kaggle environments

How To Use

Step 0: Install

Simple installation from PyPI

pip install pytorch-lightning
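Lightning is also published on the conda-forge channel, so Conda users can install it with:

conda install pytorch-lightning -c conda-forge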

Step 1: Add these imports

import os
import torch
from torch import nn
import torch.nn.functional as F
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
import pytorch_lightning as pl

Step 2: Define a LightningModule (nn.Module subclass)

A LightningModule defines a full system (e.g., a GAN, an autoencoder, BERT, or a simple image classifier).

class LitAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28))

    def forward(self, x):
        # in lightning, forward defines the prediction/inference actions
        embedding = self.encoder(x)
        return embedding

    def training_step(self, batch, batch_idx):
        # training_step defines the train loop. It is independent of forward
        x, y = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = F.mse_loss(x_hat, x)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer

Note: training_step defines the training loop; forward defines how the LightningModule behaves during inference/prediction.
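For example, calling the module directly invokes forward and returns the embedding; the input below is a random tensor just for illustration:

# calling the module invokes forward and returns the 3-dim embedding
autoencoder = LitAutoEncoder()
embedding = autoencoder(torch.randn(1, 28 * 28))  # shape: (1, 3)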

Step 3: Train!

dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
train, val = random_split(dataset, [55000, 5000])

autoencoder = LitAutoEncoder()
trainer = pl.Trainer()
trainer.fit(autoencoder, DataLoader(train), DataLoader(val))
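Note that LitAutoEncoder as defined above has no validation_step, so the val DataLoader is not actually exercised. A minimal sketch of a validation_step you could add to the class, mirroring training_step (this method is not in the original example):

    def validation_step(self, batch, batch_idx):
        # mirrors training_step, but logs a validation metric instead
        x, y = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        self.log("val_loss", F.mse_loss(x_hat, x))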

Advanced features

Lightning has 40+ advanced features designed for professional AI research at scale.

Here are some examples:

Highlighted feature code snippets

Train on GPUs without code changes
from pytorch_lightning import Trainer

# 8 GPUs
# no code changes needed
trainer = Trainer(max_epochs=1, accelerator="gpu", devices=8)

# 256 GPUs
trainer = Trainer(max_epochs=1, accelerator="gpu", devices=8, num_nodes=32)
Train on TPUs without code changes
# no code changes needed
trainer = Trainer(accelerator="tpu", devices=8)
16-bit precision
# no code changes needed
trainer = Trainer(precision=16)
Experiment managers
from pytorch_lightning import loggers

# tensorboard
trainer = Trainer(logger=loggers.TensorBoardLogger("logs/"))

# weights and biases
trainer = Trainer(logger=loggers.WandbLogger())

# comet
trainer = Trainer(logger=loggers.CometLogger())

# mlflow
trainer = Trainer(logger=loggers.MLFlowLogger())

# neptune
trainer = Trainer(logger=loggers.NeptuneLogger())

# ... and dozens more
EarlyStopping
from pytorch_lightning.callbacks import EarlyStopping

es = EarlyStopping(monitor="val_loss")
trainer = Trainer(callbacks=[es])
Checkpointing
from pytorch_lightning.callbacks import ModelCheckpoint

checkpointing = ModelCheckpoint(monitor="val_loss")
trainer = Trainer(callbacks=[checkpointing])
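A saved checkpoint can later be restored with load_from_checkpoint; the path below is hypothetical:

# restore a trained LightningModule from a checkpoint (illustrative path)
model = LitAutoEncoder.load_from_checkpoint("checkpoints/best.ckpt")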
Export to torchscript (JIT) (production use)
# torchscript
autoencoder = LitAutoEncoder()
torch.jit.save(autoencoder.to_torchscript(), "model.pt")
Export to ONNX (production use)
# onnx
import os
import tempfile

with tempfile.NamedTemporaryFile(suffix=".onnx", delete=False) as tmpfile:
    autoencoder = LitAutoEncoder()
    # the input sample must match the encoder input size (28 * 28)
    input_sample = torch.randn((1, 28 * 28))
    autoencoder.to_onnx(tmpfile.name, input_sample, export_params=True)
    assert os.path.isfile(tmpfile.name)

Pro-level control of training loops (advanced users)

For complex/professional level work, you have optional full control of the training loop and optimizers.

class LitAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        # access your optimizers with use_pl_optimizer=False; default is True
        opt_a, opt_b = self.optimizers(use_pl_optimizer=True)

        loss_a = ...
        self.manual_backward(loss_a)
        opt_a.step()
        opt_a.zero_grad()

        loss_b = ...
        self.manual_backward(loss_b, retain_graph=True)
        self.manual_backward(loss_b)
        opt_b.step()
        opt_b.zero_grad()
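The snippet above assumes configure_optimizers returns two optimizers. A hedged sketch of what that could look like; the encoder/decoder attributes are illustrative and not defined in the fragment above:

    def configure_optimizers(self):
        # returning two optimizers makes self.optimizers() yield both
        opt_a = torch.optim.Adam(self.encoder.parameters(), lr=1e-3)
        opt_b = torch.optim.Adam(self.decoder.parameters(), lr=1e-3)
        return opt_a, opt_b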

Advantages over unstructured PyTorch

  • Models become hardware agnostic.
  • Code is clear to read because engineering code is abstracted away.
  • Experiments are easier to reproduce.
  • You make fewer mistakes, because Lightning handles the tricky engineering.
  • Keeps all the flexibility (LightningModules are still PyTorch modules) but removes a ton of boilerplate.
  • Lightning has dozens of integrations with popular machine learning tools.
  • Tested rigorously with every new PR: we test every combination of supported PyTorch and Python versions, every OS, multiple GPUs, and even TPUs.
  • Minimal running-speed overhead (about 300 ms per epoch compared with pure PyTorch).

Lightning Lite

Introduced in the PyTorch Lightning 1.5 release, LightningLite enables you to leverage all the capabilities of PyTorch Lightning Accelerators without refactoring your training loop. Check out the blogpost and docs for more info.
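A minimal sketch of the LightningLite pattern, assuming a plain PyTorch model whose forward returns a loss; MyPyTorchModel and my_dataloader are hypothetical placeholders:

import torch
from pytorch_lightning.lite import LightningLite


class Lite(LightningLite):
    def run(self, model, dataloader):
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        # setup() wraps the model/optimizer for the chosen devices and precision
        model, optimizer = self.setup(model, optimizer)
        dataloader = self.setup_dataloaders(dataloader)

        model.train()
        for batch in dataloader:
            optimizer.zero_grad()
            loss = model(batch)  # assumes the model returns a loss
            self.backward(loss)  # replaces loss.backward()
            optimizer.step()


Lite(accelerator="gpu", devices=8).run(MyPyTorchModel(), my_dataloader)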


Examples

Hello world
Contrastive Learning
NLP
Reinforcement Learning
Vision
Classic ML

Community

The PyTorch Lightning community is maintained by

  • 10+ core contributors who are all a mix of professional engineers, Research Scientists, and Ph.D. students from top AI labs.
  • 680+ active community contributors.

Want to help us build Lightning and reduce boilerplate for thousands of researchers? Learn how to make your first contribution here.

PyTorch Lightning is also part of the PyTorch ecosystem which requires projects to have solid testing, documentation and support.

Asking for help

If you have any questions please:

  1. Read the docs.
  2. Search through existing Discussions, or add a new question.
  3. Join our Slack community.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pytorch-lightning-1.7.2.tar.gz (520.8 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

pytorch_lightning-1.7.2-py3-none-any.whl (705.6 kB)

Uploaded Python 3

File details

Details for the file pytorch-lightning-1.7.2.tar.gz.

File metadata

  • Download URL: pytorch-lightning-1.7.2.tar.gz
  • Upload date:
  • Size: 520.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.10

File hashes

Hashes for pytorch-lightning-1.7.2.tar.gz

| Algorithm   | Hash digest                                                      |
| ----------- | ---------------------------------------------------------------- |
| SHA256      | 76e4d1af70721fc9a294641668c905e2db76e866f7bf07a5e37f72fa3cb87141 |
| MD5         | 31d657073c007e02b9d4d825cc1b40fd                                 |
| BLAKE2b-256 | f1764bf9f28439e4fa809e29c719288b73f474673f207a89627a53de76a80f4f |

See more details on using hashes here.

File details

Details for the file pytorch_lightning-1.7.2-py3-none-any.whl.

File metadata

File hashes

Hashes for pytorch_lightning-1.7.2-py3-none-any.whl

| Algorithm   | Hash digest                                                      |
| ----------- | ---------------------------------------------------------------- |
| SHA256      | faea45653bb759ee2e5c5e26491bd144260de2b4d87f37ebe8b3cc2a9b801b76 |
| MD5         | 18fb1b90b7395a4f8aee2328555a4b1a                                 |
| BLAKE2b-256 | a13740a0ade0d4bf9abcdae109ec6f306be0e5a8d5d19ae8adce811cbcacedce |

See more details on using hashes here.
