Unifying Generative Multimodal Variational Autoencoders in PyTorch
This library implements some of the most common multimodal variational autoencoder methods in a unifying framework for effective benchmarking and development. You can find the list of implemented models below. It includes ready-to-use datasets like MnistSvhn 🔢, CelebA 😎 and PolyMNIST, and the most used metrics: Coherences, Likelihoods and FID. It integrates model monitoring with Wandb and a quick way to save/load models from the HuggingFace Hub 🤗.
Implemented models
Quickstart
Install the library by running:
pip install multivae
or by cloning the repository:
git clone https://github.com/AgatheSenellart/MultiVae.git
cd MultiVae
pip install .
Cloning the repository gives you access to tutorial notebooks and scripts in the examples/ folder.
Load a dataset easily:
from multivae.data.datasets import MnistSvhn
train_set = MnistSvhn(data_path='your_data_path', split="train", download=True)
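As the `input_dims` configuration below suggests, a paired multimodal sample is naturally represented as a dict mapping each modality name to its tensor. The following is a conceptual sketch of that structure using random numpy arrays, not MultiVae's actual internals:

```python
import numpy as np

# Illustrative only: a paired MNIST-SVHN batch as a dict keyed by modality,
# with shapes matching the `input_dims` used when configuring a model.
rng = np.random.default_rng(0)
batch_size = 4
sample = {
    "mnist": rng.random((batch_size, 1, 28, 28)).astype(np.float32),
    "svhn": rng.random((batch_size, 3, 32, 32)).astype(np.float32),
}

for name, x in sample.items():
    print(name, x.shape)
```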
Instantiate your favorite model:
from multivae.models import MVTCAE, MVTCAEConfig
model_config = MVTCAEConfig(
    latent_dim=20,
    input_dims={'mnist': (1, 28, 28), 'svhn': (3, 32, 32)},
)
model = MVTCAE(model_config)
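Several multimodal VAEs, MVTCAE among them, aggregate the unimodal Gaussian posteriors into a joint posterior with a product-of-experts (PoE). As a conceptual illustration (a sketch, not MultiVae's implementation), PoE fuses Gaussians by precision-weighted averaging:

```python
import numpy as np

# Illustrative product-of-experts fusion of unimodal Gaussian posteriors.
def poe(mus, logvars):
    """Fuse Gaussian experts N(mu_i, var_i) by precision-weighted averaging."""
    precisions = [np.exp(-lv) for lv in logvars]  # precision = 1 / var_i
    joint_precision = sum(precisions)
    joint_var = 1.0 / joint_precision
    joint_mu = joint_var * sum(p * m for p, m in zip(precisions, mus))
    return joint_mu, np.log(joint_var)

# Two unimodal posteriors over a 3-dim latent space, both with unit variance:
mu, logvar = poe(
    mus=[np.array([0.0, 1.0, -1.0]), np.array([2.0, 1.0, 1.0])],
    logvars=[np.zeros(3), np.zeros(3)],
)
print(mu)              # with equal variances, the means are simply averaged
print(np.exp(logvar))  # the joint variance shrinks as experts are combined
```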
Define a trainer and train the model!
from multivae.trainers import BaseTrainer, BaseTrainerConfig

training_config = BaseTrainerConfig(
    learning_rate=1e-3,
    num_epochs=30,
)
trainer = BaseTrainer(
    model=model,
    train_dataset=train_set,
    training_config=training_config,
)
trainer.train()
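After training, the bundled coherence metric can be used to check whether cross-modal generations preserve label information: one modality is generated from another, the generations are classified with a pretrained classifier, and the score is the fraction whose predicted label matches the conditioning sample's label. A conceptual sketch of that final step (not MultiVae's implementation):

```python
import numpy as np

# Illustrative coherence score: agreement between labels predicted on
# generated samples and the labels of the conditioning inputs.
def coherence(predicted_labels, true_labels):
    predicted_labels = np.asarray(predicted_labels)
    true_labels = np.asarray(true_labels)
    return float((predicted_labels == true_labels).mean())

# e.g. labels predicted on SVHN images generated from MNIST inputs:
score = coherence([3, 1, 4, 1, 5], [3, 1, 4, 2, 5])
print(score)  # 4 of 5 generations match the conditioning label
```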
Documentation and Examples
See https://multivae.readthedocs.io.
Several examples are provided in examples/, and a Getting Started notebook in examples/tutorial_notebooks/.
Table of Contents
- Models available
- Quickstart
- Table of Contents
- Installation
- Usage
- Contribute
- Reproducibility statement
- License
Installation
git clone https://github.com/AgatheSenellart/MultiVae.git
cd MultiVae
pip install .
Usage
Our library allows you to easily use any of the models with custom configurations, encoder and decoder architectures, and datasets. See our tutorial notebook at examples/tutorial_notebooks/getting_started.ipynb for an overview of the principal features.
Contribute
If you want to contribute to the project, for instance by adding models to the library, clone the repository and install it in editable mode using the -e option:
pip install -e .
Reproducibility statement
All implemented models are validated by reproducing a key result from their original paper.
License
Apache License 2.0