Multi-view-AE: Multi-modal representation learning using autoencoders
`multi-view-AE` is a collection of multi-modal autoencoder models for learning joint representations from multiple modalities of data. The package is structured such that all models have `fit`, `predict_latents` and `predict_reconstruction` methods. All models are built in PyTorch and PyTorch Lightning.
For more information on implemented models and how to use the package, please see the documentation.
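To make the shared-latent idea concrete, here is a self-contained linear sketch in plain NumPy. It does not use the `multi-view-AE` API; the synthetic data, closed-form SVD fit, and variable names are illustrative assumptions that only mirror the `fit` / `predict_latents` / `predict_reconstruction` workflow conceptually.

```python
import numpy as np

# Two views generated from a shared 2-D latent space.
rng = np.random.default_rng(0)
z_true = rng.normal(size=(200, 2))           # shared latent factors
view1 = z_true @ rng.normal(size=(2, 10))    # view 1: 10 features
view2 = z_true @ rng.normal(size=(2, 15))    # view 2: 15 features

# "fit": concatenate views and learn a joint 2-D subspace via truncated SVD
# (a linear stand-in for training an autoencoder).
X = np.hstack([view1, view2])
X_mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
W = Vt[:2].T                                 # joint encoder/decoder weights

# "predict_latents": project both views into the shared latent space.
latents = (X - X_mean) @ W

# "predict_reconstruction": map latents back to both views.
recon = latents @ W.T + X_mean
recon_view1, recon_view2 = recon[:, :10], recon[:, 10:]

err = np.mean((X - recon) ** 2)
print(f"joint latent shape: {latents.shape}, reconstruction MSE: {err:.2e}")
```

Because the synthetic data is generated from exactly two shared factors, a two-dimensional joint latent reconstructs both views almost perfectly; the package's nonlinear models follow the same encode-to-joint-latent, decode-per-view pattern.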
Models Implemented
Below is a table with the models contained within this repository and links to the original papers.
| Model class | Model name | Number of views | Original work |
|---|---|---|---|
| mcVAE | Multi-Channel Variational Autoencoder (mcVAE) | >=1 | link |
| AE | Multi-view Autoencoder | >=1 | |
| AAE | Multi-view Adversarial Autoencoder with separate latent representations | >=1 | |
| DVCCA | Deep Variational CCA | 2 | link |
| jointAAE | Multi-view Adversarial Autoencoder with joint latent representation | >=1 | |
| wAAE | Multi-view Adversarial Autoencoder with joint latent representation and Wasserstein loss | >=1 | |
| mmVAE | Variational mixture-of-experts autoencoder (MMVAE) | >=1 | link |
| mVAE | Multimodal Variational Autoencoder (MVAE) | >=1 | link |
| me_mVAE | Multimodal Variational Autoencoder (MVAE) with separate ELBO terms for each view | >=1 | link |
| JMVAE | Joint Multimodal Variational Autoencoder (JMVAE-kl) | 2 | link |
| MVTCAE | Multi-View Total Correlation Auto-Encoder (MVTCAE) | >=1 | link |
| MoPoEVAE | Mixture-of-Products-of-Experts VAE | >=1 | link |
| mmJSD | Multimodal Jensen-Shannon divergence model (mmJSD) | >=1 | link |
| weighted_mVAE | Generalised Product-of-Experts Variational Autoencoder (gPoE-MVAE) | >=1 | link |
| VAE_barlow | Multi-view Variational Autoencoder with Barlow Twins loss between latents | 2 | link, link |
| AE_barlow | Multi-view Autoencoder with Barlow Twins loss between latents | 2 | link, link |
| DMVAE | Disentangled multi-modal variational autoencoder (DMVAE) | >=1 | link |
| weighted_DMVAE | Disentangled multi-modal variational autoencoder with gPoE joint posterior | >=1 | |
Installation
To install our package via `pip`:

```
pip install multiviewae
```

Or, clone this repository and move into the folder:

```
git clone https://github.com/alawryaguila/multi-view-AE
cd multi-view-AE
```

Create the customised Python environment:

```
conda create --name mvae python=3.9
```

Activate the environment:

```
conda activate mvae
```

Install the `multi-view-AE` package:

```
pip install ./
```
Citation
If you have used `multi-view-AE` in your research, please consider citing our JOSS paper:
Aguila et al., (2023). Multi-view-AE: A Python package for multi-view autoencoder models. Journal of Open Source Software, 8(85), 5093, https://doi.org/10.21105/joss.05093
BibTeX entry:

```
@article{Aguila2023,
  doi = {10.21105/joss.05093},
  url = {https://doi.org/10.21105/joss.05093},
  year = {2023},
  publisher = {The Open Journal},
  volume = {8},
  number = {85},
  pages = {5093},
  author = {Ana Lawry Aguila and Alejandra Jayme and Nina Montaña-Brown and Vincent Heuveline and Andre Altmann},
  title = {Multi-view-AE: A Python package for multi-view autoencoder models},
  journal = {Journal of Open Source Software}
}
```
Contribution guidelines
Contribution guidelines are available at https://multi-view-ae.readthedocs.io/en/latest/