
A deep learning package for self-supervised learning




Lightly SSL is a computer vision framework for self-supervised learning.

We've also built a whole platform on top, with additional features for active learning and data curation. If you're interested in the Lightly Worker Solution to easily process millions of samples and run powerful algorithms on your data, check out lightly.ai. It's free to get started!

Features

This self-supervised learning framework offers the following features:

  • Modular framework that exposes low-level building blocks such as loss functions and model heads.
  • Easy to use and written in a PyTorch-like style.
  • Support for custom backbone models for self-supervised pre-training (see the sketch after this list).
  • Support for distributed training using PyTorch Lightning.
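For example, any PyTorch module that maps an image batch to a flat feature vector can serve as a backbone. A minimal sketch (the TinyBackbone class below is a hypothetical example; only SimCLRProjectionHead comes from Lightly):

import torch

from lightly.models.modules import heads


class TinyBackbone(torch.nn.Module):
    """Hypothetical custom backbone: maps images to flat 256-dim features."""

    def __init__(self, feature_dim=256):
        super().__init__()
        self.conv = torch.nn.Sequential(
            torch.nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(64, feature_dim, kernel_size=3, stride=2, padding=1),
            torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1),  # -> (batch, feature_dim, 1, 1)
        )

    def forward(self, x):
        return self.conv(x).flatten(start_dim=1)  # -> (batch, feature_dim)


backbone = TinyBackbone(feature_dim=256)
# The projection head only needs to know the backbone's output dimension.
projection_head = heads.SimCLRProjectionHead(input_dim=256, hidden_dim=256, output_dim=128)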

Supported Models

You can find sample code for all the supported models here. We provide PyTorch, PyTorch Lightning, and PyTorch Lightning distributed examples for all models to kickstart your project.

Models include BarlowTwins, BYOL, DCL, DCLW, DINO, FastSiam, MAE, MoCo, MSN, NNCLR, PMSN, SimCLR, SimMIM, SimSiam, SMoG, SwAV, TiCo, VICReg, and VICRegL.

Tutorials

Want to jump to the tutorials and see Lightly in action?

Community and partner projects:

Quick Start

Lightly requires Python 3.6+, but we recommend Python 3.7+. We also recommend installing Lightly in a Linux or macOS environment.

Dependencies

Lightly is compatible with PyTorch and PyTorch Lightning v2.0+!

Vision Transformer-based models require Torchvision v0.12+.

Installation

You can install Lightly and its dependencies from PyPI with:

pip3 install lightly

We strongly recommend that you install Lightly in a dedicated virtualenv, to avoid conflicting with your system packages.
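For example, on Linux or macOS (the directory name .venv is just a convention):

python3 -m venv .venv
source .venv/bin/activate
pip3 install lightly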

Lightly in Action

With Lightly, you can use the latest self-supervised learning methods in a modular way using the full power of PyTorch. Experiment with different backbones, models, and loss functions. The framework has been designed to be easy to use from the ground up. Find more examples in our docs.

import torch
import torchvision

from lightly import loss
from lightly import transforms
from lightly.data import LightlyDataset
from lightly.models.modules import heads


# Create a PyTorch module for the SimCLR model.
class SimCLR(torch.nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        self.projection_head = heads.SimCLRProjectionHead(
            input_dim=512,  # Resnet18 features have 512 dimensions.
            hidden_dim=512,
            output_dim=128,
        )

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        return z


# Use a resnet backbone.
backbone = torchvision.models.resnet18()
# Ignore the classification head as we only want the features.
backbone.fc = torch.nn.Identity()

# Build the SimCLR model.
model = SimCLR(backbone)

# Prepare transform that creates multiple random views for every image.
transform = transforms.SimCLRTransform(input_size=32, cj_prob=0.5)


# Create a dataset from your image folder.
dataset = LightlyDataset(input_dir="./my/cute/cats/dataset/", transform=transform)

# Build a PyTorch dataloader.
dataloader = torch.utils.data.DataLoader(
    dataset,  # Pass the dataset to the dataloader.
    batch_size=128,  # A large batch size helps with the learning.
    shuffle=True,  # Shuffling is important!
)

# Lightly exposes building blocks such as loss functions.
criterion = loss.NTXentLoss(temperature=0.5)

# Get a PyTorch optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-6)

# Train the model.
for epoch in range(10):
    for (view0, view1), targets, filenames in dataloader:
        z0 = model(view0)
        z1 = model(view1)
        loss = criterion(z0, z1)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        print(f"loss: {loss.item():.5f}")

You can easily use another model like SimSiam by swapping the model and the loss function.

# PyTorch module for the SimSiam model.
class SimSiam(torch.nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        self.projection_head = heads.SimSiamProjectionHead(512, 512, 128)
        self.prediction_head = heads.SimSiamPredictionHead(128, 64, 128)

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        p = self.prediction_head(z)
        z = z.detach()
        return z, p


model = SimSiam(backbone)

# Use the SimSiam loss function.
criterion = loss.NegativeCosineSimilarity()
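The training step changes as well: SimSiam's forward pass returns both the detached projection z and the prediction p, and the loss symmetrically compares each view's prediction with the other view's projection. A sketch of the adapted loop body, reusing the dataloader and optimizer from above:

for (view0, view1), targets, filenames in dataloader:
    z0, p0 = model(view0)
    z1, p1 = model(view1)
    # Symmetrized loss: each prediction is pulled towards the other
    # view's detached projection, which prevents representation collapse.
    loss = 0.5 * (criterion(z0, p1) + criterion(z1, p0))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()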

You can find a more complete example for SimSiam here.

Use PyTorch Lightning to train the model:

from pytorch_lightning import LightningModule, Trainer

class SimCLR(LightningModule):
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18()
        resnet.fc = torch.nn.Identity()
        self.backbone = resnet
        self.projection_head = heads.SimCLRProjectionHead(512, 512, 128)
        self.criterion = loss.NTXentLoss()

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        return z

    def training_step(self, batch, batch_index):
        (view0, view1), _, _ = batch
        z0 = self.forward(view0)
        z1 = self.forward(view1)
        loss = self.criterion(z0, z1)
        return loss

    def configure_optimizers(self):
        optim = torch.optim.SGD(self.parameters(), lr=0.06)
        return optim


model = SimCLR()
trainer = Trainer(max_epochs=10, devices=1, accelerator="gpu")
trainer.fit(model, dataloader)

See our docs for a full PyTorch Lightning example.

Or train the model on 4 GPUs:

# Use distributed version of loss functions.
criterion = loss.NTXentLoss(gather_distributed=True)

trainer = Trainer(
    max_epochs=10,
    devices=4,
    accelerator="gpu",
    strategy="ddp",
    sync_batchnorm=True,
    use_distributed_sampler=True,  # or replace_sampler_ddp=True for PyTorch Lightning <2.0
)
trainer.fit(model, dataloader)

We provide multi-GPU training examples with distributed gather and synchronized BatchNorm. Have a look at our docs regarding distributed training.

Benchmarks

Implemented models and their performance on various datasets. Hyperparameters are not tuned for maximum accuracy. For detailed results and more info about the benchmarks, click here.

ImageNet

The following experiments have been conducted on a system with 2× RTX 4090 GPUs. Training a model takes around 4 days for 100 epochs (35 min per epoch), including kNN, linear probing, and fine-tuning evaluation.

Note: Evaluation settings are based on these papers:

See the benchmarking scripts for details.
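For context, the kNN evaluation classifies each validation image by comparing its embedding against the embeddings of the training set. A simplified pure-PyTorch sketch using a majority vote under cosine similarity (the actual benchmark scripts typically use a weighted variant):

import torch

@torch.no_grad()
def knn_predict(train_z, train_y, test_z, k=20):
    # Cosine similarity reduces to a dot product after L2 normalization.
    train_z = torch.nn.functional.normalize(train_z, dim=1)
    test_z = torch.nn.functional.normalize(test_z, dim=1)
    similarities = test_z @ train_z.t()         # (num_test, num_train)
    _, indices = similarities.topk(k=k, dim=1)  # k nearest training samples
    neighbor_labels = train_y[indices]          # (num_test, k)
    # Predict the most frequent label among the k neighbors.
    return torch.mode(neighbor_labels, dim=1).values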

| Model | Backbone | Batch Size | Epochs | Linear Top1 | Finetune Top1 | kNN Top1 | Tensorboard | Checkpoint |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BarlowTwins | Res50 | 256 | 100 | 62.9 | 72.6 | 45.6 | link | link |
| BYOL | Res50 | 256 | 100 | 62.4 | 74.0 | 45.6 | link | link |
| DINO | Res50 | 128 | 100 | 68.2 | 72.5 | 49.9 | link | link |
| SimCLR* | Res50 | 256 | 100 | 63.2 | 73.9 | 44.8 | link | link |
| SimCLR* + DCL | Res50 | 256 | 100 | 65.1 | 73.5 | 49.6 | link | link |
| SimCLR* + DCLW | Res50 | 256 | 100 | 64.5 | 73.2 | 48.5 | link | link |
| SwAV | Res50 | 256 | 100 | 67.2 | 75.4 | 49.5 | link | link |
| VICReg | Res50 | 256 | 100 | 63.0 | 73.7 | 46.3 | link | link |

*We use square root learning rate scaling instead of linear scaling, as it yields better results for smaller batch sizes. See Appendix B.1 in the SimCLR paper.
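Concretely, Appendix B.1 of the SimCLR paper sets the learning rate to 0.075 × sqrt(batch size) under square root scaling, versus 0.3 × batch size / 256 under linear scaling (both with the LARS optimizer); the base factors used in our benchmark scripts may differ:

import math

batch_size = 256
lr_linear = 0.3 * batch_size / 256       # linear scaling -> 0.3
lr_sqrt = 0.075 * math.sqrt(batch_size)  # square root scaling -> 1.2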

ImageNette

| Model | Backbone | Batch Size | Epochs | kNN Top1 |
| --- | --- | --- | --- | --- |
| BarlowTwins | Res18 | 256 | 800 | 0.852 |
| BYOL | Res18 | 256 | 800 | 0.887 |
| DCL | Res18 | 256 | 800 | 0.861 |
| DCLW | Res18 | 256 | 800 | 0.865 |
| DINO | Res18 | 256 | 800 | 0.888 |
| FastSiam | Res18 | 256 | 800 | 0.873 |
| MAE | ViT-S | 256 | 800 | 0.610 |
| MSN | ViT-S | 256 | 800 | 0.828 |
| MoCo | Res18 | 256 | 800 | 0.874 |
| NNCLR | Res18 | 256 | 800 | 0.884 |
| PMSN | ViT-S | 256 | 800 | 0.822 |
| SimCLR | Res18 | 256 | 800 | 0.889 |
| SimMIM | ViT-B32 | 256 | 800 | 0.343 |
| SimSiam | Res18 | 256 | 800 | 0.872 |
| SwAV | Res18 | 256 | 800 | 0.902 |
| SwAVQueue | Res18 | 256 | 800 | 0.890 |
| SMoG | Res18 | 256 | 800 | 0.788 |
| TiCo | Res18 | 256 | 800 | 0.856 |
| VICReg | Res18 | 256 | 800 | 0.845 |
| VICRegL | Res18 | 256 | 800 | 0.778 |

CIFAR-10

| Model | Backbone | Batch Size | Epochs | kNN Top1 |
| --- | --- | --- | --- | --- |
| BarlowTwins | Res18 | 512 | 800 | 0.859 |
| BYOL | Res18 | 512 | 800 | 0.910 |
| DCL | Res18 | 512 | 800 | 0.874 |
| DCLW | Res18 | 512 | 800 | 0.871 |
| DINO | Res18 | 512 | 800 | 0.848 |
| FastSiam | Res18 | 512 | 800 | 0.902 |
| MoCo | Res18 | 512 | 800 | 0.899 |
| NNCLR | Res18 | 512 | 800 | 0.892 |
| SimCLR | Res18 | 512 | 800 | 0.879 |
| SimSiam | Res18 | 512 | 800 | 0.904 |
| SwAV | Res18 | 512 | 800 | 0.884 |
| SMoG | Res18 | 512 | 800 | 0.800 |

Terminology

Below you can see a schematic overview of the different concepts in the package. The terms in bold are explained in more detail in our documentation.

[Figure: Overview of the Lightly pip package]

Next Steps

Head to the documentation and see the things you can achieve with Lightly!

Development

To install dev dependencies (for example, to contribute to the framework), run:

pip3 install -e ".[dev]"

For more information about how to contribute have a look here.

Running Tests

Unit tests live in the tests directory, and we recommend running them with pytest. There are two test configurations; by default, only a fast subset is run:

make test-fast

To run all tests (including the slow ones) you can use the following command:

make test

To test a specific file or directory use:

pytest <path to file or directory>

Code Formatting

To format code with black and isort run:

make format

Further Reading

Self-Supervised Learning:

FAQ

  • Why should I care about self-supervised learning? Aren't pre-trained models from ImageNet much better for transfer learning?

    • Self-supervised learning has become increasingly popular among researchers in recent years because the learned representations perform extraordinarily well on downstream tasks. This means they capture the important information in an image better than other types of pre-trained models. By training a self-supervised model on your dataset, you can make sure that the representations contain all the necessary information about your images.
  • How can I contribute?

    • Create an issue if you encounter bugs or have ideas for features we should implement. You can also add your own code by forking this repository and creating a PR. More details about how to contribute code are in our contribution guide.
  • Is this framework free?

    • Yes, this framework is completely free to use, and we provide the source code. We believe that we need to make training deep learning models more data-efficient to achieve widespread adoption. One step towards this goal is to leverage self-supervised learning. The company behind Lightly is committed to keeping this framework open source.
  • If this framework is free, how is the company behind Lightly making money?

    • Training self-supervised models is only one part of our solution. The company behind Lightly focuses on processing and analyzing the embeddings created by self-supervised models. By building what we call a self-supervised active learning loop, we help companies understand and work with their data more efficiently. As the Lightly Solution is a freemium product, you can try it out for free; however, we charge for some features.
    • In any case, this framework will always be free to use, even for commercial purposes.

Lightly in Research

Company behind this Open Source Framework

Lightly is a spin-off from ETH Zurich that helps companies build efficient active learning pipelines to select the most relevant data for their models.

You can find out more about the company and its services by following the links below:

BibTeX

If you want to cite the framework, feel free to use this:

@article{susmelj2020lightly,
  title={Lightly},
  author={Igor Susmelj and Matthias Heller and Philipp Wirth and Jeremy Prescott and Malte Ebner et al.},
  journal={GitHub. Note: https://github.com/lightly-ai/lightly},
  year={2020}
}

