VAE disentanglement framework built with PyTorch Lightning.
🧶 Disent
A modular disentangled representation learning framework built with PyTorch Lightning
Visit the docs for more info, or browse the releases.
Contributions are welcome!
Overview
Disent is a modular disentangled representation learning framework for auto-encoders, built upon PyTorch Lightning. The framework consists of composable components that can be used to build and benchmark disentanglement pipelines for vision tasks.
The name of the framework is derived from both disentanglement and scientific dissent.
Get started with disent by installing it with `pip install disent`, or by cloning this repository.
Goals
Disent aims to meet the following criteria:
- Provide high quality, readable, consistent and easily comparable implementations of frameworks
- Highlight differences between framework implementations by overriding hooks and minimising duplicate code
- Use best practices, e.g. `torch.distributions`
- Be extremely flexible & configurable
- Support low memory systems
Citing Disent
Please use the following citation if you use Disent in your own research:
```bibtex
@Misc{Michlo2021Disent,
  author       = {Nathan Juraj Michlo},
  title        = {Disent - A modular disentangled representation learning framework for pytorch},
  howpublished = {Github},
  year         = {2021},
  url          = {https://github.com/nmichlo/disent}
}
```
Architecture
The disent module structure:
- `disent.dataset`: dataset wrappers, datasets & sampling strategies
  - `disent.dataset.data`: raw datasets
  - `disent.dataset.sampling`: sampling strategies for `DisentDataset` when multiple elements are required by frameworks, e.g. for triplet loss
  - `disent.dataset.transform`: common data transforms and augmentations
  - `disent.dataset.wrapper`: wrapped datasets are no longer ground-truth datasets, these may have some elements masked out. We can still unwrap these classes to obtain the original datasets for benchmarking.
- `disent.frameworks`: frameworks, including Auto-Encoders and VAEs
  - `disent.frameworks.ae`: Auto-Encoder based frameworks
  - `disent.frameworks.vae`: Variational Auto-Encoder based frameworks
- `disent.metrics`: metrics for evaluating disentanglement using ground-truth datasets
- `disent.model`: common encoder and decoder models used for VAE research
- `disent.nn`: torch components for building models, including layers, transforms, losses and general maths
- `disent.schedule`: annealing schedules that can be registered to a framework
- `disent.util`: helper classes, functions and callbacks; anything unrelated to a pytorch system/model/framework
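For orientation, the concrete classes used in the example later in this README map onto these modules as follows (all names below are taken from that example):

```python
from disent.dataset import DisentDataset                   # dataset wrappers
from disent.dataset.data import XYObjectData               # raw datasets
from disent.dataset.sampling import SingleSampler          # sampling strategies
from disent.dataset.transform import ToImgTensorF32        # transforms & augmentations
from disent.frameworks.vae import BetaVae                  # VAE frameworks
from disent.metrics import metric_dci, metric_mig          # disentanglement metrics
from disent.model import AutoEncoder                       # model container
from disent.model.ae import EncoderConv64, DecoderConv64   # encoder/decoder models
from disent.schedule import CyclicSchedule                 # annealing schedules
```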
Please Note The API Is Still Unstable ⚠️
Disent is still under active development. Features and APIs are mostly stable, but may change! A limited set of tests currently exists; these will be expanded over time.
Hydra Experiment Directories
Easily run experiments with hydra config. Note that these files are not available via `pip install`; clone the repository to use them.

- `experiment/run.py`: entrypoint for running basic experiments with hydra config
- `experiment/config/config.yaml`: main configuration file, this is probably what you want to edit!
- `experiment/config`: root folder for hydra config files
- `experiment/util`: various helper code for experiments
Features
Disent includes implementations of modules, metrics and datasets from various papers. Please note that items marked with "🧵" were introduced in, and are unique to, disent!
Frameworks
- Unsupervised:
- Weakly Supervised:
  - Ada-GVAE: `AdaVae(..., average_mode='gvae')` (usually better than the Ada-ML-VAE; see the sketch after this list)
  - Ada-ML-VAE: `AdaVae(..., average_mode='ml-vae')`
- Supervised:

Many popular disentanglement frameworks still need to be added, please submit an issue if you have a request for an additional framework.

Todo:
- FactorVAE
- GroupVAE
- MLVAE
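As a rough sketch, the weakly supervised Ada-GVAE can be configured much like the BetaVAE example later in this README. This is a minimal, hedged sketch: the pair sampler name `GroundTruthPairSampler` and the exact `AdaVae.cfg` fields (here `average_mode`, following the notation in the list above) are assumptions that may differ between disent versions.

```python
from disent.dataset import DisentDataset
from disent.dataset.data import XYObjectData
from disent.dataset.sampling import GroundTruthPairSampler  # assumed name: Ada-GVAE needs pairs
from disent.dataset.transform import ToImgTensorF32
from disent.frameworks.vae import AdaVae
from disent.model import AutoEncoder
from disent.model.ae import DecoderConv64, EncoderConv64

# Ada-GVAE trains on pairs of observations, so a pair sampler replaces SingleSampler
data = XYObjectData()
dataset = DisentDataset(dataset=data, sampler=GroundTruthPairSampler(), transform=ToImgTensorF32())

module = AdaVae(
    model=AutoEncoder(
        encoder=EncoderConv64(x_shape=data.x_shape, z_size=10, z_multiplier=2),
        decoder=DecoderConv64(x_shape=data.x_shape, z_size=10),
    ),
    # `average_mode='gvae'` follows the notation above; the exact cfg field name is an assumption
    cfg=AdaVae.cfg(optimizer='adam', optimizer_kwargs=dict(lr=1e-3), beta=4, average_mode='gvae'),
)
```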
Metrics
- Disentanglement:
Some popular metrics still need to be added; please submit an issue if you wish to add your own, or if you have a request.
Datasets
Various common datasets used in disentanglement research are included, with hash verification and automatic chunk-size optimization of underlying hdf5 formats for low-memory disk-based access.
- Ground Truth:
  - Cars3D
  - dSprites
  - MPI3D
  - SmallNORB
  - Shapes3D
- Ground Truth Synthetic:
  - 🧵 XYObject: a simplistic version of dSprites with a single square.
  - 🧵 XYObjectShaded: exactly the same dataset as XYObject, but the ground-truth factors have a different representation.
  - 🧵 DSpritesImagenet: a version of dSprites with the foreground or background deterministically masked out using tiny-imagenet data.
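As a minimal sketch of how a ground-truth dataset exposes its factors (the `factor_names` and `factor_sizes` attributes are assumptions based on common disent usage and may differ by version; `x_shape` is used in the example later in this README):

```python
from disent.dataset.data import XYObjectData

data = XYObjectData()
print(data.x_shape)        # observation shape, e.g. (64, 64, 3)
print(data.factor_names)   # assumed attribute: names of the ground-truth factors
print(data.factor_sizes)   # assumed attribute: number of discrete values per factor
```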
Input Transforms + Input/Target Augmentations
- Input-based transforms are supported.
- Input and target augmentations, on both CPU and GPU, are supported (see the sketch below).
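A minimal sketch of wiring a transform and an augmentation into `DisentDataset` (the `augment=` keyword is an assumption and may differ by version; the `transform=` usage matches the example later in this README):

```python
from disent.dataset import DisentDataset
from disent.dataset.data import XYObjectData
from disent.dataset.transform import ToImgTensorF32

dataset = DisentDataset(
    dataset=XYObjectData(),
    transform=ToImgTensorF32(),  # converts inputs to float32 image tensors
    augment=None,                # assumed keyword: a callable augmentation could go here
)
```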
Schedules & Annealing
Hyper-parameter annealing is supported through the use of schedules. The currently implemented schedules include:
- Linear Schedule
- Cyclic Schedule
- Cosine Wave Schedule
- Various other wrapper schedules
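For example, a linear warm-up could be registered in place of the cyclic schedule used in the example later in this README. This is a hedged sketch: `LinearSchedule` is listed above, but the argument names used here are assumptions that may differ by version.

```python
from disent.schedule import LinearSchedule

# `module` is the framework instance from the example below; the schedule ratio
# is assumed to ramp linearly between the given steps (argument names assumed)
module.register_schedule(
    'beta',
    LinearSchedule(start_step=0, end_step=2048, r_start=0.001, r_end=1.0),
)
```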
Examples
Python Example
The following is a basic working example of disent that trains a BetaVAE with a cyclic beta schedule and evaluates the trained model with various metrics.
💾 Basic Example
```python
import os
import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader

from disent.dataset import DisentDataset
from disent.dataset.data import XYObjectData
from disent.dataset.sampling import SingleSampler
from disent.dataset.transform import ToImgTensorF32
from disent.frameworks.vae import BetaVae
from disent.metrics import metric_dci
from disent.metrics import metric_mig
from disent.model import AutoEncoder
from disent.model.ae import DecoderConv64
from disent.model.ae import EncoderConv64
from disent.schedule import CyclicSchedule

# create the dataset & dataloaders
# - ToImgTensorF32 transforms images from numpy arrays to tensors and performs checks
data = XYObjectData()
dataset = DisentDataset(dataset=data, sampler=SingleSampler(), transform=ToImgTensorF32())
dataloader = DataLoader(dataset=dataset, batch_size=128, shuffle=True, num_workers=os.cpu_count())

# create the BetaVAE model
# - adjusting the beta, learning rate, and representation size.
module = BetaVae(
    model=AutoEncoder(
        # z_multiplier is needed to output mu & logvar when parameterising normal distribution
        encoder=EncoderConv64(x_shape=data.x_shape, z_size=10, z_multiplier=2),
        decoder=DecoderConv64(x_shape=data.x_shape, z_size=10),
    ),
    cfg=BetaVae.cfg(
        optimizer='adam',
        optimizer_kwargs=dict(lr=1e-3),
        loss_reduction='mean_sum',
        beta=4,
    )
)

# cyclic schedule for target 'beta' in the config/cfg. The initial value from the
# config is saved and multiplied by the ratio from the schedule on each step.
# - based on: https://arxiv.org/abs/1903.10145
module.register_schedule(
    'beta', CyclicSchedule(
        period=1024,  # repeat every: trainer.global_step % period
    )
)

# train model
# - for 2048 batches/steps
trainer = pl.Trainer(
    max_steps=2048, gpus=1 if torch.cuda.is_available() else None, logger=False, checkpoint_callback=False
)
trainer.fit(module, dataloader)

# compute disentanglement metrics
# - we cannot guarantee which device the representation is on
# - this will take a while to run
get_repr = lambda x: module.encode(x.to(module.device))
metrics = {
    **metric_dci(dataset, get_repr, num_train=1000, num_test=500, show_progress=True),
    **metric_mig(dataset, get_repr, num_train=2000),
}

# evaluate
print('metrics:', metrics)
```
Visit the docs for more examples!
Hydra Config Example
The entrypoint for basic experiments is `experiment/run.py`.

Some configuration will be required, but basic experiments can be adjusted by modifying the Hydra Config 1.0 files in `experiment/config` (please note that hydra 1.1 is not yet supported).

Modifying the main `experiment/config/config.yaml` is all you need for most basic experiments. The main config file contains a defaults list with entries corresponding to yaml configuration files (config options) in the subfolders (config groups) in `experiment/config/<config_group>/<option>.yaml`.
💾 Config Defaults Example
```yaml
defaults:
  # data
  - sampling: default__bb
  - dataset: xyobject
  - augment: none
  # system
  - framework: adavae_os
  - model: vae_conv64
  # training
  - optimizer: adam
  - schedule: beta_cyclic
  - metrics: fast
  - run_length: short
  # logs
  - run_callbacks: vis
  - run_logging: wandb
  # runtime
  - run_location: local
  - run_launcher: local
  - run_action: train

# <rest of config.yaml left out>
...
```
Easily modify any of these values to adjust how the basic experiment will be run. For example, change `framework: adavae` to `framework: betavae`, or change the dataset from `xyobject` to `shapes3d`. Add new options by adding new yaml files in the config group folders.
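These options can also be overridden from the command line using standard Hydra 1.0 syntax, for example: `python experiment/run.py framework=betavae dataset=shapes3d` (option names taken from the defaults list above).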
Weights and Biases is supported by changing `run_logging: none` to `run_logging: wandb`. However, you will need to login from the command line. W&B logging supports visualisations of latent traversals.
Why?
- Created as part of my Computer Science MSc, scheduled for completion in 2021.
- I needed custom, high-quality implementations of various VAEs.
- A PyTorch version of disentanglement_lib.
- I didn't have time to wait for Weakly-Supervised Disentanglement Without Compromises to release their code as part of disentanglement_lib. (As of September 2020 it has been released, but has unresolved discrepancies.)
- disentanglement_lib still uses the outdated Tensorflow 1.0, and the flow of data is unintuitive because of its use of Gin Config.