Unifying Generative Multimodal Variational Autoencoders in PyTorch
Project description
This library implements some of the most common multimodal variational autoencoder methods in a unifying framework for effective benchmarking and development. You can find the list of implemented models below. For easy benchmarking, we include ready-to-use datasets like MnistSvhn 🔢, CelebA 😎 and PolyMNIST, as well as metrics modules for computing coherences, likelihoods, FID, reconstruction metrics and clustering metrics. It integrates model monitoring with Wandb and a quick way to save/load models from the HuggingFace Hub 🤗. To improve joint generation of multimodal samples, we also propose samplers to explore the latent space of your model.
Implemented models
Table of Contents
- Models available
- Installation
- Training a Model
- Tutorials
- Training on an incomplete dataset
- Available metrics / Available datasets
- Monitor training with Wandb / Model sharing with HuggingFaceHub
- Using samplers for improved joint generation
- Documentation and Case-Studies
- Reproducibility
- Contribute
- Citation
Installation
To get the latest stable release, run:
```
pip install multivae
```
To get the latest updates from the GitHub repository, run:
```
git clone https://github.com/AgatheSenellart/MultiVae.git
cd MultiVae
pip install .
```
Cloning the repository also gives you access to the tutorial notebooks and scripts in the `examples` folder.
Quickstart
Here is a very simple example illustrating how you can use MultiVae:
```python
# Load a dataset
from multivae.data.datasets import MnistSvhn

train_set = MnistSvhn(data_path='./data', split="train", download=True)

# Instantiate your favorite model:
from multivae.models import MVTCAE, MVTCAEConfig

model_config = MVTCAEConfig(
    n_modalities=2,
    latent_dim=20,
    input_dims={'mnist': (1, 28, 28), 'svhn': (3, 32, 32)}
)
model = MVTCAE(model_config)

# Define a trainer and train the model!
from multivae.trainers import BaseTrainer, BaseTrainerConfig

training_config = BaseTrainerConfig(
    learning_rate=1e-3,
    num_epochs=10
)

trainer = BaseTrainer(
    model=model,
    train_dataset=train_set,
    training_config=training_config,
)
trainer.train()
```
Getting your hands on the code
Our library allows you to easily use any of the models with custom configurations, encoder and decoder architectures, and datasets. To learn how to use MultiVae's features, we propose several tutorial notebooks:
- Getting started: Learn how to provide your own architectures and train a model.
- Computing Metrics: Learn how to evaluate your model using MultiVae's metrics modules.
- Learning with partial datasets: Learn how to use the IncompleteDataset class and train a model on an incomplete dataset.
- Using samplers: Learn how to train and use samplers to improve the joint generation of synthetic data.
- Using WandB: Learn how to easily monitor your training/evaluation with Wandb and MultiVae.
Training on incomplete datasets
Many models implemented in the library can be trained on incomplete datasets. To do so, you will need to define a dataset that inherits from MultiVae's IncompleteDataset class.
For a step-by-step tutorial on training on incomplete datasets, see this notebook.
How does MultiVae handle partial data? We handle partial data by sampling random batches, artificially filling in the missing modalities, and using a mask to remove their contribution from the final loss.
This yields unbiased mini-batches. There are other ways to handle missing data (for instance using a batch sampler): don't hesitate to reach out if you would like additional options!
For more details on how each model is adapted to the partial view setting, see the model's description in the documentation.
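The masking strategy above can be sketched in a few lines. This is an illustrative, library-free example (not MultiVae's actual implementation; all names and losses are made up): missing modalities are filled with placeholder values so that every sample in the batch has the same shape, and a boolean mask removes their contribution from the loss.

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, dim = 4, 5

# Two modalities; modality "b" is missing for samples 1 and 3.
x_a = rng.normal(size=(batch_size, dim))
x_b = rng.normal(size=(batch_size, dim))
mask_b = np.array([True, False, True, False])

# Artificially fill the missing entries (their values are irrelevant:
# the mask zeroes out their contribution below).
x_b_filled = np.where(mask_b[:, None], x_b, 0.0)

# Dummy "reconstructions" standing in for decoder outputs.
recon_a = x_a + 0.1
recon_b = x_b_filled + 0.1

# Per-sample squared reconstruction errors.
err_a = ((recon_a - x_a) ** 2).sum(axis=1)
err_b = ((recon_b - x_b_filled) ** 2).sum(axis=1)

# The mask removes artificially filled samples from modality b's loss,
# averaging only over the samples where b is truly observed.
loss = err_a.mean() + (err_b * mask_b).sum() / mask_b.sum()
print(loss)
```

Because the average for each modality is taken only over the samples where that modality is truly observed, the filled-in placeholders never bias the loss.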
Below is the list of models that can be used on Incomplete datasets:
| Model | Can be used on Incomplete Datasets | Details |
|---|---|---|
| CVAE | ❌ | |
| JMVAE | ❌ | |
| TELBO | ❌ | |
| MVAE | ✅ | see here |
| MMVAE | ✅ | see here |
| MoPoE | ✅ | see here |
| MVTCAE | ✅ | see here |
| DMVAE | ✅ | see here |
| JNF | ❌ | |
| MMVAE+ | ✅ | see here |
| Nexus | ✅ | see here |
| CMVAE | ✅ | see here |
| MHVAE | ✅ | see here |
| CRMVAE | ✅ | see here |
Toy datasets with missing values
To ease the development of new methods on incomplete datasets, we propose two easy-to-import toy datasets with missing values:
- Missing at Random: The PolyMNIST dataset with missing values.
- Missing not at Random: The MHD dataset with missing ratios that depend on the label.
See the documentation for more information on those datasets.
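The difference between the two missingness patterns can be sketched with plain NumPy. This is an illustration only (not MultiVae code; names and ratios are made up): a boolean mask marks, for each sample, whether a modality is observed, and in the MNAR case the keep-probability depends on the label.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000
labels = rng.integers(0, 10, size=n_samples)  # e.g. digit labels

# Missing at Random: the modality is kept with a fixed probability,
# independently of the label.
mask_mar = rng.random(n_samples) < 0.7

# Missing not at Random: the keep-probability depends on the label
# (here, higher labels are observed less often).
keep_prob_per_label = np.linspace(0.9, 0.3, 10)
mask_mnar = rng.random(n_samples) < keep_prob_per_label[labels]

print(mask_mar.mean(), mask_mnar.mean())
```

In the MNAR case, checking the observed ratio per label reveals the dependence on the label, which is precisely what makes MNAR settings harder to learn from.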
Metrics
We provide metrics modules that can be used on any MultiVae model for evaluation. See the documentation for minimal code examples and see this notebook for a hands-on tutorial.
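As an intuition for one of these metrics, a cross-modal coherence score is typically computed by generating one modality from another and checking, with a pretrained classifier, that the generated sample carries the right label. The sketch below is illustrative only (not MultiVae's implementation); the generator and classifier are stubs and all names and numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_classes = 200, 10
labels = rng.integers(0, n_classes, size=n)

def generate_b_from_a(label):
    # Stub for cross-modal generation: encodes the label in the dominant
    # feature, with a 10% chance of producing an incoherent sample.
    sample = np.zeros(n_classes)
    target = label if rng.random() > 0.1 else rng.integers(0, n_classes)
    sample[target] = 1.0
    return sample

def classifier(sample):
    # Stub for a pretrained classifier: argmax over features.
    return int(sample.argmax())

# Coherence = fraction of generated samples classified with the right label.
predictions = [classifier(generate_b_from_a(y)) for y in labels]
coherence = np.mean([p == y for p, y in zip(predictions, labels)])
print(coherence)
```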
Datasets
At this time, we provide 7 ready-to-use multimodal datasets with an automatic download option. Click here to see the options.
Monitoring your training with Wandb
MultiVae allows easy monitoring with Wandb. To use this feature, you will need to install and configure Wandb with the few steps below:
Install Wandb
- Install wandb:
```
pip install wandb
```
- Create a wandb account online.
- Once you are logged in, go to this page and copy the API key.
- In your terminal, enter:
```
wandb login
```
and paste your API key when prompted.

Once this is done, you can use wandb features in MultiVae.
Monitor training with Wandb
Below is a minimal example of how to use the WandbCallback to monitor your training. It assumes that you have already defined a model and a train_dataset.
By default, the train loss, eval loss and model-specific metrics are logged to wandb. If you set steps_predict in the trainer config, generated images are also logged to wandb.
```python
from multivae.trainers import BaseTrainer, BaseTrainerConfig
from multivae.trainers.base.callbacks import WandbCallback

# Define training configuration
your_training_config = BaseTrainerConfig(
    learning_rate=1e-2,
    steps_predict=5  # generate samples every 5 steps; images will be logged to wandb
)

# Define the wandb callback
wandb_cb = WandbCallback()
wandb_cb.setup(
    training_config=your_training_config,  # will be saved to wandb
    model_config=your_model_config,  # will be saved to wandb
    project_name='your_project_name'
)

# Pass the wandb callback to the trainer to enable logging of metrics and images to wandb
trainer = BaseTrainer(
    model=your_model,
    train_dataset=train_data,
    callbacks=[wandb_cb]
)
```
Logging evaluation metrics to Wandb
The metrics modules of MultiVae can also be used with Wandb, to save all your results in one place.
If you have a trained model and want to compute metrics for it, you can pass a wandb_path to the metrics module to tell it where to log the metrics. If a wandb run was already created during training, you can reuse the same wandb_path to log metrics to the same place. See this documentation to learn how to find your wandb_path or re-create one.
Below is a minimal example with the LikelihoodsEvaluator module, but it works the same way for all metrics.
```python
from multivae.metrics import LikelihoodsEvaluator, LikelihoodsEvaluatorConfig

ll_config = LikelihoodsEvaluatorConfig(
    batch_size=128,
    num_samples=3,
    wandb_path='your_wandb_path'  # Pass your wandb_path here
)

ll_module = LikelihoodsEvaluator(
    model=your_model,
    output='./metrics',  # where to log the metrics
    test_dataset=test_set,
    eval_config=ll_config
)
```
Sharing your models with the HuggingFace Hub 🤗
MultiVae allows you to share your models on the HuggingFace Hub. To do so you need:
- a valid HuggingFace account
- the package huggingface_hub installed in your virtual env; if needed, install it with:
```
python -m pip install huggingface_hub
```
- to be logged in to your HuggingFace account using:
```
huggingface-cli login
```
Uploading a model to the Hub
Any MultiVae model can be easily uploaded using the method push_to_hf_hub:
```python
>>> my_model.push_to_hf_hub(hf_hub_path="your_hf_username/your_hf_hub_repo")
```
Note: if your_hf_hub_repo already exists and is not empty, files will be overridden. If the repo your_hf_hub_repo does not exist, a folder with the same name will be created.
Downloading models from the Hub
Equivalently, you can download or reload any MultiVae model directly from the Hub using the method load_from_hf_hub:
```python
>>> from multivae.models import AutoModel
>>> my_downloaded_vae = AutoModel.load_from_hf_hub(hf_hub_path="path_to_hf_repo")
```
Using samplers
All of MultiVae's models have a natural way of generating fully synthetic multimodal samples: sampling latent codes from the model's prior distribution. But it is well known for unimodal VAEs (and the same applies to multimodal VAEs) that generation can be improved by sampling encodings from a distribution that better fits the latent space.
Once you have a trained MultiVae model, you can fit a multivae.sampler to approximate the posterior distribution of encodings in the latent space and then use it to produce new samples.
We provide a minimal example of how to fit a GMM sampler, but we invite you to check out our tutorial notebook here for a more in-depth explanation of how to use samplers and how to combine them with MultiVae's metrics modules.
```python
from multivae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig

config = GaussianMixtureSamplerConfig(
    n_components=10  # number of components to use in the mixture
)

gmm_sampler = GaussianMixtureSampler(
    model=your_model,
    sampler_config=config
)
gmm_sampler.fit(train_data)  # train_data is the multimodal dataset used for training the model
```
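To see why fitting the latent distribution helps, here is a library-free sketch of the underlying idea (illustrative only; a single full-covariance Gaussian stands in for the Gaussian mixture used by GaussianMixtureSampler, and all names and data are made up): instead of drawing latent codes from the N(0, I) prior, fit a distribution to the codes the encoder actually produces on the training set, and sample from that.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 4

# Stand-in for encoder outputs on the training set: codes that do NOT
# follow the N(0, I) prior (shifted mean, non-unit variances).
codes = rng.normal(size=(5000, latent_dim)) * np.array([2.0, 1.0, 0.5, 0.5]) + 1.5

# "Fit" the sampler: empirical mean and covariance of the codes.
mu = codes.mean(axis=0)
cov = np.cov(codes, rowvar=False)

# Draw new latent codes from the fitted distribution rather than the prior;
# these codes can then be decoded into synthetic multimodal samples.
new_codes = rng.multivariate_normal(mu, cov, size=100)
print(new_codes.shape)
```

Samples drawn this way land in regions of the latent space that the decoder has actually seen during training, which is what improves joint generation.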
Note that samplers can be used with all MultiVae models and can substantially improve joint generation. For a taste of what they can do, see the joint generations (figure available in the project repository) for an MVTCAE model trained on PolyMNIST.
Documentation, Examples and Case Studies
We provide a full online documentation at https://multivae.readthedocs.io.
Several examples are provided in examples/, as well as tutorial notebooks on how to use the main features of MultiVae (training, metrics, samplers) in the folder examples/tutorial_notebooks.
For more advanced examples on how to use MultiVae we provide small case-studies with code and results:
- Benchmarking models on the incomplete PolyMNIST dataset. We compare several models on an incomplete version of PolyMNIST and analyse the results thoroughly with coherences, FIDs, clustering metrics and different samplers. Check out the results to get an intuition about the different models!
Contribute
If you want to contribute to the project, for instance by adding models to the library, clone the repository and install it in editable mode using the -e option:
```
pip install -e .
```
We propose contributing guidelines here, with tutorials on how to implement a new model, sampler, metric or dataset.
Reproducibility statement
Most implemented models are validated by reproducing a key result of the paper. Here we provide details on the results we managed to reproduce.
| Model | Dataset | Metrics | Paper | Ours |
|---|---|---|---|---|
| JMVAE | Mnist | Likelihood | -86.86 | -86.85 +- 0.03 |
| MMVAE | MnistSVHN | Coherences | 86/69/42 | 88/67/41 |
| MVAE | Mnist | ELBO | 188.8 | 188.3 +-0.4 |
| DMVAE | MnistSVHN | Coherences | 88.1/83.7/44.7 | 89.2/81.3/46.0 |
| MoPoE | PolyMNIST | Coherences | 66/77/81/83 | 67/79/84/85 |
| MVTCAE | PolyMNIST | Coherences | 69/77/83/86 | 64/82/88/91 |
| MMVAE+ | PolyMNIST | Coherences/FID | 86.9/92.81 | 88.6 +-0.8 / 93 +-5 |
| CMVAE | PolyMNIST | Coherences | 89.7/78.1 | 88.6/76.4 |
| CRMVAE | Translated PolyMNIST | Coherences | 0.145/0.172/0.192/0.21 | 0.16/0.19/0.205/0.21 |
Note that we also tried to reproduce results for the Nexus model, but didn't obtain results similar to those presented in the original paper. If you spot a difference between our implementation and theirs, please reach out to us.
Citation
If you have used our package in your research, please cite our JOSS paper: MultiVae: A Python package for Multimodal Variational Autoencoders on Partial Datasets.
You can find the bibtex citation below:
```bibtex
@article{Senellart2025,
  doi       = {10.21105/joss.07996},
  url       = {https://doi.org/10.21105/joss.07996},
  year      = {2025},
  publisher = {The Open Journal},
  volume    = {10},
  number    = {110},
  pages     = {7996},
  author    = {Agathe Senellart and Clément Chadebec and Stéphanie Allassonnière},
  title     = {MultiVae: A Python package for Multimodal Variational Autoencoders on Partial Datasets.},
  journal   = {Journal of Open Source Software}
}
```
Issues? Questions?
If you encounter any issues using our package, or if you would like to request features, don't hesitate to open an issue here and we will do our best to fix it!