A self-supervised learning framework for PyTorch

Super Selfish

A unified PyTorch framework for image-based self-supervised learning. The technical report can be found at https://arxiv.org/abs/2012.02706.

If you use this framework in one of your projects, please consider citing:

@misc{wagner2020superselfish,
      title={Super-Selfish: Self-Supervised Learning on Images with PyTorch}, 
      author={Nicolas Wagner and Anirban Mukhopadhyay},
      year={2020},
      eprint={2012.02706},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Algorithms

Currently, 13 algorithms are supported, all of which can be run in parallel on a single multi-GPU node:

Patch-based

  • ExemplarNet https://arxiv.org/abs/1406.6909
    We use the stronger set of augmentations from CPC and do not use gradient-based patch sampling, as this does not seem to be necessary. We always process full images but apply scaling and translation.
  • RotateNet https://arxiv.org/abs/1803.07728
  • Jigsaw Puzzle https://arxiv.org/abs/1603.09246
    We apply random cropping within each patch to avoid border signals.
    A 3x3 jigsaw is too difficult for easy datasets, so the default is 2x2.
    The jigsaw is processed in one forward pass for performance and simplicity.

Predictive

Generative

Contrastive

  • Instance Discrimination https://arxiv.org/pdf/1805.01978.pdf
    (Memory Bank, turned into an Augmentation Task with CPC Augs, Projection Head only, 1 Backbone, Temperature; a sketch of such a temperature-scaled loss follows this list)
  • Contrastive Predictive Coding (V2) https://arxiv.org/pdf/1905.09272.pdf
    (Batchwise, Future Prediction Task with augmentation, Target and Projection head, 1 Backbone, No Temperature)
  • Momentum Contrast (V2) https://arxiv.org/pdf/2003.04297.pdf
    (Queue, Augmentation Task, Projection Head, 1 Backbone and Momentum Encoder, Temperature) LayerNorm is used instead of ShuffledBN (ShuffledBN is on the TODO list).
  • Contrastive Multiview Coding https://arxiv.org/pdf/1906.05849.pdf
    (Memory Bank, Augmentation Task (we use CPC Augs), Multimodal, Target and Projection Head, 2 Backbones, No Temperature) Features are taken only from the L channel since, in theory, the embeddings should be close anyway.
  • Bootstrap Your Own Latent (CL via BN) https://arxiv.org/pdf/2006.07733.pdf
    (No negatives, Augmentation Task, Target and Projection Head, 2 Backbones, No Temperature)
  • PIRL https://arxiv.org/abs/1912.01991
    (Memory Bank, Augmentation + Jigsaw Task, Target and Projection Head, 1 Backbone, Temperature)
    The jigsaw is processed in one forward pass for performance and simplicity.
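
Most of the design choices listed above (memory bank, queue or batchwise negatives, projection heads, temperature) revolve around a temperature-scaled contrastive objective. As an illustration only (this is not the library's internal implementation; names and shapes are placeholders), an InfoNCE-style loss with a temperature parameter can be sketched as:

import torch
import torch.nn.functional as F

def info_nce_loss(queries, keys, temperature=0.07):
    # queries, keys: (N, D) projected embeddings of two views of the same N images.
    # Row i of keys is the positive for row i of queries; all other rows act as negatives.
    queries = F.normalize(queries, dim=1)
    keys = F.normalize(keys, dim=1)
    logits = queries @ keys.t() / temperature  # (N, N) similarity matrix
    targets = torch.arange(queries.size(0), device=queries.device)
    return F.cross_entropy(logits, targets)

Lowering the temperature sharpens the distribution over negatives; methods listed with "No Temperature" correspond to temperature=1 in this sketch.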

Usage

Requirements

Tested with
CUDA 11.0 and Ubuntu 18.04
torch 1.7.0 torchvision 0.8.1
scikit-image 0.17.2
elasticdeform 0.4.6
tqdm 4.51.0
scipy 1.5.4
colorama 0.4.4

By default, Super Selfish stores network parameters in a "store" folder in your working directory and looks for a "dataset" folder.

Install

pip install super-selfish

Training

For usage examples of all algorithms, see the test.py file.
Be aware that the pretext difficulty has to be adapted to your task and dataset.
Further, contrastive methods mostly rely on enormous batch sizes and therefore typically need a multi-GPU setup; Momentum Contrast also works with small batch sizes due to its queue structure.

Training is as easy as:

# Choose supervisor
supervisor = RotateNetSupervisor(train_dataset) # .to('cuda')

# supervisor = RotateNetSupervisor(train_dataset)
# supervisor = ExemplarNetSupervisor(train_dataset)
# supervisor = JigsawNetSupervisor(train_dataset)
# supervisor = DenoiseNetSupervisor(train_dataset)
# supervisor = ContextNetSupervisor(train_dataset)
# supervisor = BiGanSupervisor(train_dataset)
# supervisor = SplitBrainNetSupervisor(train_dataset)
# supervisor = ContrastivePredictiveCodingSupervisor(train_dataset)
# supervisor = MomentumContrastSupervisor(train_dataset)
# supervisor = BYOLSupervisor(train_dataset)
# supervisor = InstanceDiscriminationSupervisor(train_dataset)
# supervisor = ContrastiveMultiviewCodingSupervisor(train_dataset)
# supervisor = PIRLSupervisor(train_dataset)

# Start training
supervisor.supervise(lr=1e-3, epochs=50,
                     batch_size=64, name="store/base", pretrained=False)

Feature Extraction and Transfer

The model is automatically stored when training ends after the given number of epochs or when the user manually interrupts the training process.
If not directly reused in the same run, any model can be loaded with:

supervisor = RotateNetSupervisor().load(name="store/base")

The feature extractor is retrieved using:

# Returns the backbone network i.e. nn.Module
backbone_network = supervisor.get_backbone()

If you want to easily add a new prediction head, you can create a CombinedNet, for example as sketched below:

CombinedNet(backbone_network, nn.Module(...)) 
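
For example, a small classification head for a downstream task could be attached as follows. This is a minimal sketch: the feature dimension and number of classes are placeholders to adapt to your backbone and dataset, and CombinedNet is the class provided by Super Selfish as shown above (import path omitted; see the package source):

import torch.nn as nn

# backbone_network as returned by supervisor.get_backbone() above.
# Placeholders: adapt feature_dim to your backbone's output size and num_classes to your task.
feature_dim, num_classes = 1280, 10
classifier = CombinedNet(backbone_network,
                         nn.Sequential(nn.Linear(feature_dim, 512),
                                       nn.ReLU(),
                                       nn.Linear(512, num_classes)))

The resulting network can then be fine-tuned like any regular PyTorch module.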

Flexibility

Although training is as easy as writing two lines of code, Super Selfish provides maximum flexibility. Any supervisor can be directly initialized with the corresponding hyperparameters. By default, the hyperparameters from the respective paper are used. Similarly, the backbone architecture as well as the prediction heads default to those of the papers but can be customized as follows:

supervisor = RotateNetSupervisor(train_dataset, backbone=nn.Module(...), predictor=nn.Module(...)) # .to('cuda')
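
As a concrete sketch (illustrative only, not the library defaults), a torchvision ResNet-18 could be plugged in as the backbone of a RotateNetSupervisor, with a small linear predictor over the four rotation classes:

import torch.nn as nn
from torchvision.models import resnet18

# Strip the classification layer so the backbone emits flat 512-d feature vectors.
resnet = resnet18(pretrained=False)
backbone = nn.Sequential(*list(resnet.children())[:-1], nn.Flatten())
predictor = nn.Linear(512, 4)  # RotateNet predicts one of four rotations (0/90/180/270 degrees)

supervisor = RotateNetSupervisor(train_dataset, backbone=backbone, predictor=predictor)  # .to('cuda')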

For individual parameters see Algorithms.

Training can be controlled via the learning rate, the optimizer, the batch size, whether to shuffle the training data, and a learning rate schedule. Polyak averaging will be added soon.

def supervise(self, lr=1e-3, optimizer=torch.optim.Adam, epochs=10, batch_size=32, shuffle=True,
                  num_workers=0, name="store/base", pretrained=False, lr_scheduler=lambda optimizer: torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=1.0))
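
For instance, a run with SGD and a decaying step schedule might look like this (the hyperparameter values are illustrative, not recommendations):

supervisor.supervise(lr=0.03, optimizer=torch.optim.SGD, epochs=100, batch_size=128,
                     shuffle=True, num_workers=4, name="store/base", pretrained=False,
                     lr_scheduler=lambda optimizer: torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1))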

The supervise method of any Supervisor is split into 5 parts so that functionality can be easily updated or changed through overloading.

# Loading of pretrained weights and models
def _load_pretrained(self, name, pretrained)
# Initialization of training specific objects
def _init_data_optimizer(self, optimizer, batch_size, shuffle, num_workers, collate_fn, lr, lr_scheduler)
# Wraps looping over epochs, batches. Takes care of visualizations and logging.
def _epochs(self, epochs, train_loader, optimizer, lr_scheduler)
# Implements one run of a model and other forward calculations
def _forward(self, data)
# Takes care of updating the model, lr scheduler, ...
def _update(self, loss, optimizer, lr_scheduler)
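
For example, gradient clipping could be added by overloading _update. This is a minimal sketch: it assumes _update performs the usual zero_grad/backward/step cycle and that the supervisor exposes its network as self.model; check the source for the exact attribute names.

import torch

class ClippedRotateNetSupervisor(RotateNetSupervisor):
    # Only the update step is overloaded; data loading, epoch looping, and logging are inherited.
    def _update(self, loss, optimizer, lr_scheduler):
        optimizer.zero_grad()
        loss.backward()
        # Assumption: the wrapped network is available as self.model.
        torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_norm=1.0)
        optimizer.step()
        lr_scheduler.step()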

The full documentation is available at: TODO

Remarks

  • Super Selfish is built to work out of the box on 225x225 images but can be adapted to other resolutions with minor effort. An adaptive design will follow soon.
  • If not precisely stated in a paper, we use the CPC image augmentations. Some augmentations or implementation details may differ from the original papers, as we aim for a comparable, unified framework.
  • We use an EfficientNet implementation (https://github.com/lukemelas/EfficientNet-PyTorch) as the default backbone/feature extractor, in a customized version that can be switched from batch norm to layer norm.
  • Please feel free to open an issue regarding bugs and/or other algorithms that should be added.

TODOs

  • Multi node support, ShuffledBN
  • Refactor old datasets, GANSupervisor
  • Polyak Averaging

