
A library for running multiview autoencoder models

Project description

Multi-modal representation learning using autoencoders


multi-view-AE is a collection of multi-modal autoencoder models for learning joint representations from multiple modalities of data. The package is structured so that every model exposes the same fit, predict_latents and predict_reconstruction methods. All models are built in PyTorch and PyTorch Lightning.
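As a quick illustration of that shared interface, here is a minimal sketch using two random views of 20 features each. The constructor and method arguments (input_dim, z_dim, max_epochs, batch_size) follow the documented examples, but check the documentation for the exact signatures of each model class:

```python
# Minimal usage sketch (illustrative): two random views of 20 features each.
import numpy as np
from multiviewae import mVAE

view_1 = np.random.rand(200, 20).astype(np.float32)
view_2 = np.random.rand(200, 20).astype(np.float32)

model = mVAE(input_dim=[20, 20], z_dim=2)  # one input_dim entry per view

model.fit(view_1, view_2, max_epochs=10, batch_size=20)

latents = model.predict_latents(view_1, view_2)                 # joint latent representations
reconstructions = model.predict_reconstruction(view_1, view_2)  # per-view reconstructions
```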

Many of the models implemented in the multi-view-AE library have been benchmarked against previous implementations, with equal or improved results. See below for more details.

For more information on implemented models and how to use the package, please see the documentation.

Library schematic

Models Implemented

Below is a table with the models contained within this repository and links to the original papers.

| Model class | Model name | Number of views | Original work |
|---|---|---|---|
| mcVAE | Multi-Channel Variational Autoencoder (mcVAE) | >=1 | link |
| AE | Multi-view Autoencoder | >=1 | |
| mAAE | Multi-view Adversarial Autoencoder | >=1 | |
| DVCCA | Deep Variational CCA | 2 | link |
| mWAE | Multi-view Adversarial Autoencoder with a Wasserstein loss | >=1 | |
| mmVAE | Variational mixture-of-experts autoencoder (MMVAE) | >=1 | link |
| mVAE | Multimodal Variational Autoencoder (MVAE) | >=1 | link |
| me_mVAE | Multimodal Variational Autoencoder (MVAE) with separate ELBO terms for each view | >=1 | link |
| JMVAE | Joint Multimodal Variational Autoencoder (JMVAE-kl) | 2 | link |
| MVTCAE | Multi-View Total Correlation Auto-Encoder (MVTCAE) | >=1 | link |
| MoPoEVAE | Mixture-of-Products-of-Experts VAE | >=1 | link |
| mmJSD | Multimodal Jensen-Shannon divergence model (mmJSD) | >=1 | link |
| weighted_mVAE | Generalised Product-of-Experts Variational Autoencoder (gPoE-MVAE) | >=1 | link |
| DMVAE | Disentangled multi-modal variational autoencoder | >=1 | link |
| weighted_DMVAE | Disentangled multi-modal variational autoencoder with gPoE joint posterior | >=1 | |
| mmVAEPlus | Mixture-of-experts multimodal VAE Plus (mmVAE+) | >=1 | link |

Installation

To install our package via pip:

pip install multiviewae

Or, clone this repository and move into the folder:

git clone https://github.com/alawryaguila/multi-view-AE
cd multi-view-AE

Create a customised Python environment:

conda create --name mvae python=3.9

Activate the Python environment:

conda activate mvae

Install the multi-view-AE package:

pip install ./
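To confirm the installation worked, a quick sanity check (not part of the official instructions) is to import the package:

```python
# Quick sanity check that the installed package imports correctly.
import multiviewae
from multiviewae import mVAE

print(multiviewae.__file__)  # location of the installed package
```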

Benchmarking results

To illustrate the efficacy of the multi-view-AE implementations, we validated some of the implemented models by reproducing a key experiment from a previous paper, using the same network architectures, modelling choices, and training parameters as the original work. The code to reproduce the benchmarking experiments is available in the benchmarking folder. We evaluated performance using the joint log likelihood (↑) and conditional coherence accuracy (↑). The table below summarises the benchmarking results on the BinaryMNIST and PolyMNIST datasets:

| Model | Experiment | Metric | Paper | Paper results | multi-view-AE results |
|---|---|---|---|---|---|
| JMVAE | BinaryMNIST | Joint log likelihood | link | -86.86 | -86.76±0.06 |
| me_mVAE | BinaryMNIST | Joint log likelihood | link | -86.26 | -86.31±0.08 |
| MoPoEVAE | PolyMNIST | Conditional coherence accuracy | link | 63/75/79/81 | 68/79/83/84 |
| mmJSD | PolyMNIST | Conditional coherence accuracy | link | 69/57/64/67 | 75/74/78/80 |
| mmVAE | PolyMNIST | Conditional coherence accuracy | link | 71/71/71/71 | 71/71/71/71 |
| MVTCAE | PolyMNIST | Conditional coherence accuracy | link | 59/77/83/86 | 64/81/87/90 |
| mmVAEPlus | PolyMNIST | Conditional coherence accuracy | link | 85.2 | 86.6±0.07 |
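For context, conditional coherence accuracy is typically measured by reconstructing one modality conditioned on another and checking, with a classifier pre-trained on the target modality, whether the reconstruction keeps the source label. The rough sketch below illustrates the idea only; `classifier`, the data variables, and the indexing into the predict_reconstruction output are assumptions, and the benchmarking folder contains the actual evaluation code:

```python
# Illustrative sketch of conditional coherence accuracy: reconstruct one view,
# classify the reconstruction with a classifier pre-trained on that view, and
# measure how often the predicted label matches the true label.
# `model`, `classifier`, `view_1`, `view_2`, `labels`, and the indexing into the
# predict_reconstruction output are all assumptions for illustration.
import numpy as np

recon = model.predict_reconstruction(view_1, view_2)
generated_view_2 = recon[0][1]  # reconstruction of view 2 (indexing assumed)

predicted_labels = classifier.predict(generated_view_2)
coherence = np.mean(predicted_labels == labels)
print(f"Conditional coherence accuracy: {coherence:.3f}")
```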

Citation

If you have used multi-view-AE in your research, please consider citing our JOSS paper:

Lawry Aguila et al., (2023). Multi-view-AE: A Python package for multi-view autoencoder models. Journal of Open Source Software, 8(85), 5093, https://doi.org/10.21105/joss.05093

Bibtex entry:

@article{LawryAguila2023,
  doi = {10.21105/joss.05093},
  url = {https://doi.org/10.21105/joss.05093},
  year = {2023},
  publisher = {The Open Journal},
  volume = {8},
  number = {85},
  pages = {5093},
  author = {Ana Lawry Aguila and Alejandra Jayme and Nina Montaña-Brown and Vincent Heuveline and Andre Altmann},
  title = {Multi-view-AE: A Python package for multi-view autoencoder models},
  journal = {Journal of Open Source Software}
}

Contribution guidelines

Contribution guidelines are available at https://multi-view-ae.readthedocs.io/en/latest/

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

multiviewae-1.1.8.tar.gz (43.0 kB)

Uploaded Source

Built Distribution

multiviewae-1.1.8-py3-none-any.whl (73.2 kB)

Uploaded Python 3

File details

Details for the file multiviewae-1.1.8.tar.gz.

File metadata

  • Download URL: multiviewae-1.1.8.tar.gz
  • Upload date:
  • Size: 43.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.8.18 Linux/6.2.0-1018-azure

File hashes

Hashes for multiviewae-1.1.8.tar.gz
Algorithm Hash digest
SHA256 590c607b4d0bad817c5070f9dc1b1c4e529230dd403a7fedeaaa2bf8fac3e390
MD5 ca0c635be8fbbcc9fc7e3067601b695d
BLAKE2b-256 f3bf8f857fd4b53cb77820160e963acd76b83f5f12a5341246107d852324a0dc

See more details on using hashes here.
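As a usage note, a downloaded file can be checked against the SHA256 digest listed above. A minimal sketch using Python's standard hashlib, assuming the source distribution sits in the current directory:

```python
# Compute the SHA256 digest of the downloaded archive and compare it with the
# digest listed above.
import hashlib

expected = "590c607b4d0bad817c5070f9dc1b1c4e529230dd403a7fedeaaa2bf8fac3e390"

with open("multiviewae-1.1.8.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "Hash mismatch")
```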

File details

Details for the file multiviewae-1.1.8-py3-none-any.whl.

File metadata

  • Download URL: multiviewae-1.1.8-py3-none-any.whl
  • Upload date:
  • Size: 73.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.8.18 Linux/6.2.0-1018-azure

File hashes

Hashes for multiviewae-1.1.8-py3-none-any.whl
Algorithm Hash digest
SHA256 33afedefdf89d4a8c3503ac28eddaedbdfd054d5b2142ed74c5844e836f04f87
MD5 8cb6480eb293fd66b731fd593bafa0e4
BLAKE2b-256 22364ba75f5bebac988cecbb2c2931484ee766a381fc2f042e993969220e8066

See more details on using hashes here.
