A library for running multiview autoencoder models
Project description
multi-view-AE is a collection of multi-modal autoencoder models for learning joint representations from multiple modalities of data. The package is structured such that all models have fit, predict_latents and predict_reconstruction methods. All models are built in PyTorch and PyTorch Lightning.
For more information on implemented models and how to use the package, please see the documentation.
Library schematic
Models Implemented
Below is a table with the models contained within this repository and links to the original papers.
| Model class | Model name | Number of views | Original work |
|---|---|---|---|
| mcVAE | Multi-Channel Variational Autoencoder (mcVAE) | >=1 | link |
| AE | Multi-view Autoencoder | >=1 | |
| mAAE | Multi-view Adversarial Autoencoder | >=1 | |
| DVCCA | Deep Variational CCA | 2 | link |
| mWAE | Multi-view Adversarial Autoencoder with a Wasserstein loss | >=1 | |
| mmVAE | Variational mixture-of-experts autoencoder (MMVAE) | >=1 | link |
| mVAE | Multimodal Variational Autoencoder (MVAE) | >=1 | link |
| me_mVAE | Multimodal Variational Autoencoder (MVAE) with separate ELBO terms for each view | >=1 | link |
| JMVAE | Joint Multimodal Variational Autoencoder (JMVAE-kl) | 2 | link |
| MVTCAE | Multi-View Total Correlation Auto-Encoder (MVTCAE) | >=1 | link |
| MoPoEVAE | Mixture-of-Products-of-Experts VAE | >=1 | link |
| mmJSD | Multimodal Jensen-Shannon divergence model (mmJSD) | >=1 | link |
| weighted_mVAE | Generalised Product-of-Experts Variational Autoencoder (gPoE-MVAE) | >=1 | link |
| VAE_barlow | Multi-view Variational Autoencoder with Barlow Twins loss between latents | 2 | link, link |
| AE_barlow | Multi-view Autoencoder with Barlow Twins loss between latents | 2 | link, link |
| DMVAE | Disentangled multi-modal variational autoencoder | >=1 | link |
| weighted_DMVAE | Disentangled multi-modal variational autoencoder with gPoE joint posterior | >=1 | |
| mmVAEPlus | Mixture-of-experts multimodal VAE Plus (mmVAE+) | >=1 | link |
Installation
To install our package via pip:

```
pip install multiviewae
```

Or, clone this repository and move into the folder:

```
git clone https://github.com/alawryaguila/multi-view-AE
cd multi-view-AE
```

Create the customised python environment:

```
conda create --name mvae python=3.9
```

Activate the python environment:

```
conda activate mvae
```

Install the multi-view-AE package:

```
pip install ./
```
Citation
If you have used multi-view-AE in your research, please consider citing our JOSS paper:
Aguila et al., (2023). Multi-view-AE: A Python package for multi-view autoencoder models. Journal of Open Source Software, 8(85), 5093, https://doi.org/10.21105/joss.05093
Bibtex entry:
```
@article{Aguila2023,
  doi = {10.21105/joss.05093},
  url = {https://doi.org/10.21105/joss.05093},
  year = {2023},
  publisher = {The Open Journal},
  volume = {8},
  number = {85},
  pages = {5093},
  author = {Ana Lawry Aguila and Alejandra Jayme and Nina Montaña-Brown and Vincent Heuveline and Andre Altmann},
  title = {Multi-view-AE: A Python package for multi-view autoencoder models},
  journal = {Journal of Open Source Software}
}
```
Contribution guidelines
Contribution guidelines are available at https://multi-view-ae.readthedocs.io/en/latest/
Hashes for multiviewae-1.1.7-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5796caa5688f3c2d77c62d7d89e02720a4a3ffea9bc6205a9e581d51228626bb |
| MD5 | ba8f30ab63e0a7ca7a9fa5d54c3ff45d |
| BLAKE2b-256 | e485952297c5c32f4eaf47df631fa972cf1559869d93f921d9076c034692593b |