
Unifying Generative Multimodal Variational Autoencoders in PyTorch


MultiVae


This library implements some of the most common multimodal variational autoencoder methods in a unifying framework for effective benchmarking and development. You can find the list of implemented models below. For easy benchmarking, we include ready-to-use datasets such as MnistSvhn 🔢, CelebA 😎 and PolyMNIST, as well as metrics modules for computing coherences, likelihoods, FID, reconstruction metrics and clustering metrics. The library integrates model monitoring with Wandb and offers a quick way to save and load models from the HuggingFace Hub 🤗. To improve joint generation of multimodal samples, we also provide samplers to explore the latent space of your model.

Implemented models

| Model | Paper | Official Implementation |
|-------|-------|-------------------------|
| CVAE | An Introduction to Variational Autoencoders | |
| JMVAE | Joint Multimodal Learning with Deep Generative Models | link |
| TELBO | Generative Models of Visually Grounded Imagination | link |
| MVAE | Multimodal Generative Models for Scalable Weakly-Supervised Learning | link |
| MMVAE | Variational Mixture-of-Experts Autoencoders for Multi-Modal Deep Generative Models | link |
| MoPoE | Generalized Multimodal ELBO | link |
| MVTCAE | Multi-View Representation Learning via Total Correlation Objective | link |
| DMVAE | Private-Shared Disentangled Multimodal VAE for Learning of Latent Representations | link |
| JNF | Improving Multimodal Joint Variational Autoencoders through Normalizing Flows and Correlation Analysis | x |
| MMVAE+ | MMVAE+: Enhancing the Generative Quality of Multimodal VAEs without Compromises | link |
| Nexus | Leveraging Hierarchy in Multimodal Generative Models for Effective Cross-Modality Inference | link |
| CMVAE | Deep Generative Clustering with Multimodal Diffusion Variational Autoencoders | link |
| MHVAE | Unified Brain MR-Ultrasound Synthesis Using Multi-Modal Hierarchical Representations | link |
| CRMVAE | Mitigating the Limitations of Multimodal VAEs with Coordination-Based Approach | link |

Installation

To get the latest stable release run:

pip install multivae

To get the latest updates from the github repository run:

git clone https://github.com/AgatheSenellart/MultiVae.git
cd MultiVae
pip install .

Cloning the repository also gives you access to the tutorial notebooks and scripts in the examples folder.

Quickstart

Here is a minimal example to illustrate how you can use MultiVae:

# Load a dataset 
from multivae.data.datasets import MnistSvhn
train_set = MnistSvhn(data_path='your_data_path', split="train", download=True)


# Instantiate your favorite model:
from multivae.models import MVTCAE, MVTCAEConfig
model_config = MVTCAEConfig(
    latent_dim=20,
    input_dims={'mnist': (1, 28, 28), 'svhn': (3, 32, 32)}
)
model = MVTCAE(model_config)


# Define a trainer and train the model!
from multivae.trainers import BaseTrainer, BaseTrainerConfig
training_config = BaseTrainerConfig(
    learning_rate=1e-3,
    num_epochs=30
)

trainer = BaseTrainer(
    model=model,
    train_dataset=train_set,
    training_config=training_config,
)
trainer.train()

Getting your hands on the code

(Back to top)

Our library lets you easily use any of the models with custom configurations, encoder and decoder architectures, and datasets. To learn how to use MultiVae's features, we propose several tutorial notebooks:

  • Getting started : Learn how to provide your own architectures and train a model.
  • Computing Metrics : Learn how to evaluate your model using MultiVae's metrics modules.
  • Learning with partial datasets : Learn how to use the IncompleteDataset class and to train a model on an incomplete dataset.
  • Using samplers: Learn how to train and use samplers to improve the joint generation of synthetic data.
  • Using WandB: Learn how to easily monitor your training/evaluation with Wandb and MultiVae.

Training on incomplete datasets

Many models implemented in the library can be trained on incomplete datasets. To do so, you will need to define a dataset that inherits from MultiVae's IncompleteDataset class.

For a step-by-step tutorial on training on incomplete datasets, see this notebook.

How does MultiVae handle partial data? We handle partial data by sampling random batches, artificially filling the missing modalities, and using the mask when computing the final loss.

This allows for unbiased mini-batches. There are other ways to handle missing data (for instance using a batch sampler): don't hesitate to reach out if you would like additional options!
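The masking idea above can be sketched in a few lines. This is an illustrative NumPy sketch of the principle, not MultiVae's actual internals; all arrays below are hypothetical stand-ins for per-sample modality losses and availability masks:

```python
import numpy as np

# Per-sample reconstruction losses computed on the artificially filled batch
# (hypothetical values; in practice these come from the model's decoders).
losses = {
    "mnist": np.array([1.0, 2.0, 3.0, 4.0]),
    "svhn": np.array([2.0, 2.0, 2.0, 2.0]),
}
# Availability masks: 1.0 = modality observed for that sample, 0.0 = missing.
masks = {
    "mnist": np.array([1.0, 1.0, 0.0, 1.0]),
    "svhn": np.array([1.0, 0.0, 1.0, 1.0]),
}

# Multiplying by the mask zeroes out the filled-in (missing) entries, so each
# modality's loss is averaged only over the samples where it is observed.
total_loss = sum(
    (losses[m] * masks[m]).sum() / masks[m].sum() for m in losses
)
print(total_loss)  # (1+2+4)/3 + (2+2+2)/3 = 4.333...
```

Because the filled-in values are multiplied by zero, they never influence the gradient, which is what makes the resulting mini-batches unbiased.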


For more details on how each model is adapted to the partial view setting, see the model's description in the documentation.

Below is the list of models that can be used on Incomplete datasets:

  • CVAE
  • JMVAE
  • TELBO
  • MVAE (see here)
  • MMVAE (see here)
  • MoPoE (see here)
  • MVTCAE (see here)
  • DMVAE (see here)
  • JNF
  • MMVAE+ (see here)
  • Nexus (see here)
  • CMVAE (see here)
  • MHVAE (see here)
  • CRMVAE (see here)

Toy datasets with missing values

To ease the development of new methods on incomplete datasets, we propose two easy-to-import toy datasets with missing values:

  • Missing at Random: The PolyMNIST dataset with missing values.
  • Missing not at Random: The MHD dataset with missing ratios that depend on the label.

See the documentation for more information on those datasets.
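To make the "missing not at random" setting concrete, here is an illustrative sketch (plain NumPy, not MultiVae code) of a mask whose observation probability depends on the label, mimicking the MHD-with-missing-ratios setting described above; the probabilities are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=10_000)   # one class label per sample
keep_prob = 0.3 + 0.05 * labels             # higher labels are observed more often
observed = rng.random(10_000) < keep_prob   # True = modality available

# The observed ratio per class grows with the label: the data is MNAR.
ratio_0 = observed[labels == 0].mean()
ratio_9 = observed[labels == 9].mean()
print(ratio_0, ratio_9)  # roughly 0.30 vs 0.75
```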

Metrics

We provide metrics modules that can be used on any MultiVae model for evaluation. See the documentation for minimal code examples and see this notebook for a hands-on tutorial.
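As an illustration of one of these metrics, cross-modal coherence is commonly computed by generating one modality conditioned on another, classifying the generated samples with a pretrained classifier, and measuring label agreement with the conditioning samples. The sketch below shows only the final agreement step and is not MultiVae's implementation:

```python
import numpy as np

def coherence(source_labels, predicted_labels):
    """Fraction of generated samples whose predicted label matches the source."""
    source = np.asarray(source_labels)
    predicted = np.asarray(predicted_labels)
    return float((source == predicted).mean())

# Labels of the conditioning samples vs. classifier labels of the generations
print(coherence([3, 1, 4, 1, 5], [3, 1, 4, 2, 5]))  # 0.8
```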

Datasets

At this time, we provide 7 ready-to-use multimodal datasets with an automatic download option. See the documentation for the available options.

Monitoring your training with Wandb

MultiVae allows easy monitoring with Wandb. To use this feature, you will need to install and configure Wandb with the few steps below:

Install Wandb

  1. Install wandb $ pip install wandb
  2. Create a wandb account online
  3. Once you are logged in, go to this page and copy the API key.
  4. In your terminal, enter $ wandb login and then copy your API key when prompted.

Once this is done, you can use wandb features in MultiVae.

Monitor training with Wandb

Below is a minimal example on how to use the WandbCallback to monitor your training. We suppose that you have already defined a model and a train_dataset in that example.

By default, the train loss, eval loss and model-specific metrics are logged to wandb. If you set steps_predict in the trainer config, generated images are also logged to wandb.

from multivae.trainers import BaseTrainer, BaseTrainerConfig
from multivae.trainers.base.callbacks import WandbCallback

# Define training configuration
your_training_config = BaseTrainerConfig(
    learning_rate=1e-2,
    steps_predict=5  # generate samples every 5 steps; images are logged to wandb
)

# Define the wandb callback
wandb_cb = WandbCallback()
wandb_cb.setup(
    training_config=your_training_config, # will be saved to wandb
    model_config=your_model_config, #will be saved to wandb
    project_name='your_project_name'
)

# Pass the wandb callback to trainer to enable metrics and images logging to wandb
trainer = BaseTrainer(
    model=your_model,
    train_dataset=train_data,
    training_config=your_training_config,
    callbacks=[wandb_cb]
)

Logging evaluation metrics to Wandb

The metrics modules of MultiVae can also be used with Wandb, to save all your results in one place.

If you have a trained model and want to compute metrics for it, you can pass a wandb_path to the metrics module to tell it where to log the results. If a wandb run was already created during training, you can reuse the same wandb_path to log the metrics in the same place. See this documentation to learn how to find your wandb_path or create a new one.

Below is a minimal example with the LikelihoodsEvaluator module, but it works the same for all metrics.

from multivae.metrics import LikelihoodsEvaluator, LikelihoodsEvaluatorConfig

ll_config = LikelihoodsEvaluatorConfig(
    batch_size=128,
    num_samples=3,
    wandb_path='your_wandb_path'  # pass your wandb_path here
)

ll_module = LikelihoodsEvaluator(
    model=your_model,
    output='./metrics',  # where to log the metrics
    test_dataset=test_set,
    eval_config=ll_config
)

Sharing your models with the HuggingFace Hub 🤗

MultiVae allows you to share your models on the HuggingFace Hub. To do so you need:

  • a valid HuggingFace account
  • the package huggingface_hub installed in your virtual env. If not you can install it with
$ python -m pip install huggingface_hub
  • to be logged in to your HuggingFace account using
$ huggingface-cli login

Uploading a model to the Hub

Any MultiVae model can be easily uploaded to the Hub using the method push_to_hf_hub:

>>> my_model.push_to_hf_hub(hf_hub_path="your_hf_username/your_hf_hub_repo")

Note: if your_hf_hub_repo already exists and is not empty, its files will be overwritten. If the repo your_hf_hub_repo does not exist, a repository with that name will be created.

Downloading models from the Hub

Equivalently, you can download or reload any MultiVae model directly from the Hub using the method load_from_hf_hub:

>>> from multivae.models import AutoModel
>>> my_downloaded_vae = AutoModel.load_from_hf_hub(hf_hub_path="path_to_hf_repo")

Using samplers

All MultiVae models have a natural way of generating fully synthetic multimodal samples: sampling latent codes from the model's prior distribution. But it is well known for unimodal VAEs (and the same applies to multimodal VAEs) that generation can be improved by sampling latent codes from a distribution that better fits the encodings actually produced in the latent space.

Once you have a trained MultiVae model, you can fit a multivae.sampler to approximate the posterior distribution of encodings in the latent space, and then use it to produce new samples.

We provide a minimal example of fitting a GMM sampler below, but we invite you to check out our tutorial notebook here for a more in-depth explanation of how to use samplers and how to combine them with MultiVae's metrics modules.

from multivae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig

config = GaussianMixtureSamplerConfig(
    n_components=10 # number of components to use in the mixture
)

gmm_sampler = GaussianMixtureSampler(
    model=your_model,
    sampler_config=config
)

gmm_sampler.fit(train_data) # train_data is the Multimodal Dataset used for training the model.

Note that samplers can be used with all MultiVae models and that they can substantially improve joint generation. For a taste of what they can do, see the joint generations below for an MVTCAE model trained on PolyMNIST:
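Conceptually, a GMM sampler replaces the standard-normal prior with a mixture fitted to the training set's latent codes, and draws new codes from that mixture before decoding. The sketch below illustrates just the sampling step in plain NumPy; the mixture parameters are hypothetical, not MultiVae internals:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.array([0.5, 0.5])               # fitted mixture weights (hypothetical)
means = np.array([[-2.0, 0.0], [2.0, 0.0]])  # fitted component means
stds = np.array([[0.5, 0.5], [0.5, 0.5]])    # fitted component stds

# Pick a component for each sample, then draw from that component's Gaussian
components = rng.choice(len(weights), size=16, p=weights)
codes = rng.normal(means[components], stds[components])

print(codes.shape)  # (16, 2): latent codes ready to be passed to the decoders
```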

Documentation, Examples and Case Studies

We provide a full online documentation at https://multivae.readthedocs.io.

Several examples are provided in examples/, as well as tutorial notebooks on how to use the main features of MultiVae (training, metrics, samplers) in the folder examples/tutorial_notebooks.

For more advanced examples of how to use MultiVae, we provide small case studies with code and results.

Contribute

(Back to top)

If you want to contribute to the project, for instance by adding models to the library, clone the repository and install it in editable mode using the -e option:

pip install -e .

We propose contributing guidelines here with tutorials on how to implement a new model, sampler, metrics or dataset.

Reproducibility statement

Most implemented models are validated by reproducing a key result of the paper. Here we provide details on the results we managed to reproduce.

| Model | Dataset | Metrics | Paper | Ours |
|-------|---------|---------|-------|------|
| JMVAE | MNIST | Likelihood | -86.86 | -86.85 ± 0.03 |
| MMVAE | MnistSvhn | Coherences | 86/69/42 | 88/67/41 |
| MVAE | MNIST | ELBO | 188.8 | 188.3 ± 0.4 |
| DMVAE | MnistSvhn | Coherences | 88.1/83.7/44.7 | 89.2/81.3/46.0 |
| MoPoE | PolyMNIST | Coherences | 66/77/81/83 | 67/79/84/85 |
| MVTCAE | PolyMNIST | Coherences | 69/77/83/86 | 64/82/88/91 |
| MMVAE+ | PolyMNIST | Coherences/FID | 86.9/92.81 | 88.6 ± 0.8 / 93 ± 5 |
| CMVAE | PolyMNIST | Coherences | 89.7/78.1 | 88.6/76.4 |
| CRMVAE | Translated PolyMNIST | Coherences | 0.145/0.172/0.192/0.21 | 0.16/0.19/0.205/0.21 |

Note that we also tried to reproduce results for the Nexus model, but did not obtain results similar to those presented in the original paper. If you spot a difference between our implementation and theirs, please reach out to us.

Citation

(Back to top)

If you have used our package in your research, please consider citing the paper presenting it:

MultiVae : A Python library for Multimodal Generative Autoencoders (2023, Agathe Senellart, Clément Chadebec and Stéphanie Allassonnière)

Bibtex entry:

@preprint{senellart:hal-04207151,
  TITLE = {{MultiVae: A Python library for Multimodal Generative Autoencoders}},
  AUTHOR = {Senellart, Agathe and Chadebec, Clement and Allassonniere, Stephanie},
  URL = {https://hal.science/hal-04207151},
  YEAR = {2023},
}

Issues ? Questions ?

If you encounter any issues using our package, or if you would like to request features, don't hesitate to open an issue here and we will do our best to help!
