
A library for running multiview autoencoder models

Project description


Multi-view-AE: Multi-modal representation learning using autoencoders

multi-view-AE is a collection of multi-modal autoencoder models for learning joint representations from multiple modalities of data. The package is structured such that all models have fit, predict_latents and predict_reconstruction methods. All models are built with PyTorch and PyTorch Lightning.

For more information on implemented models and how to use the package, please see the documentation.
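A minimal usage sketch is shown below. The fit, predict_latents and predict_reconstruction methods are stated above; the import path and the constructor arguments (input_dim, z_dim) and fit keyword arguments are assumptions based on typical usage and should be checked against the documentation.

# Minimal usage sketch (not from the official docs): trains a multi-view VAE
# on two synthetic views and inspects the latent and reconstruction outputs.
import numpy as np
from multiviewae import mVAE  # assumed import path for the MVAE model class

# Two "views" of the same 100 samples, with 20 and 10 features respectively.
x1 = np.random.rand(100, 20).astype(np.float32)
x2 = np.random.rand(100, 10).astype(np.float32)

# Build the model: one input dimension per view, shared latent size of 5
# (`input_dim` and `z_dim` are assumed argument names).
model = mVAE(input_dim=[20, 10], z_dim=5)

# All models expose fit, predict_latents and predict_reconstruction.
model.fit(x1, x2, max_epochs=10, batch_size=16)  # keyword arguments are assumptions
latents = model.predict_latents(x1, x2)
reconstructions = model.predict_reconstruction(x1, x2)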

Models Implemented

Below is a table with the models contained within this repository and links to the original papers.

Model class Model name Number of views Original work
mcVAE Multi-Channel Variational Autoencoder (mcVAE) >=1 link
AE Multi-view Autoencoder >=1
AAE Multi-view Adversarial Autoencoder with separate latent representations >=1
DVCCA Deep Variational CCA 2 link
jointAAE Multi-view Adversarial Autoencoder with joint latent representation >=1
wAAE Multi-view Adversarial Autoencoder with joint latent representation and Wasserstein loss >=1
mmVAE Variational mixture-of-experts autoencoder (MMVAE) >=1 link
mVAE Multimodal Variational Autoencoder (MVAE) >=1 link
me_mVAE Multimodal Variational Autoencoder (MVAE) with separate ELBO terms for each view >=1 link
JMVAE Joint Multimodal Variational Autoencoder (JMVAE-kl) 2 link
MVTCAE Multi-View Total Correlation Auto-Encoder (MVTCAE) >=1 link
MoPoEVAE Mixture-of-Products-of-Experts VAE >=1 link
mmJSD Multimodal Jensen-Shannon divergence model (mmJSD) >=1 link
weighted_mVAE Generalised Product-of-Experts Variational Autoencoder (gPoE-MVAE) >=1 link
VAE_barlow Multi-view Variational Autoencoder with Barlow twins loss between latents 2 link, link
AE_barlow Multi-view Autoencoder with Barlow twins loss between latents 2 link, link
DMVAE Disentangled multi-modal variational autoencoder >=1 link
weighted_DMVAE Disentangled multi-modal variational autoencoder with gPoE joint posterior >=1

Installation

To install our package via pip:

pip install multiviewae

Or, clone this repository and move into the project folder:

git clone https://github.com/alawryaguila/multi-view-AE
cd multi-view-AE

Create a custom conda environment:

conda create --name mvae python=3.9

Activate the environment:

conda activate mvae

Install the multi-view-AE package:

pip install ./
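
To confirm the install, a quick sanity check run in a Python shell (the top-level import path for model classes is an assumption, as in the usage sketch above):

# Sanity check: the package imports and a model class is available.
from multiviewae import mVAE  # assumed import path
print(mVAE)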

Citation

If you have used multi-view-AE in your research, please consider citing our JOSS paper:

Aguila et al., (2023). Multi-view-AE: A Python package for multi-view autoencoder models. Journal of Open Source Software, 8(85), 5093, https://doi.org/10.21105/joss.05093

BibTeX entry:

@article{Aguila2023,
  doi = {10.21105/joss.05093},
  url = {https://doi.org/10.21105/joss.05093},
  year = {2023},
  publisher = {The Open Journal},
  volume = {8},
  number = {85},
  pages = {5093},
  author = {Ana Lawry Aguila and Alejandra Jayme and Nina Montaña-Brown and Vincent Heuveline and Andre Altmann},
  title = {Multi-view-AE: A Python package for multi-view autoencoder models},
  journal = {Journal of Open Source Software}
}

Contribution guidelines

Contribution guidelines are available at https://multi-view-ae.readthedocs.io/en/latest/

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

multiviewae-1.1.3.tar.gz (38.5 kB)

Uploaded Source

Built Distribution

multiviewae-1.1.3-py3-none-any.whl (69.3 kB)

Uploaded Python 3

File details

Details for the file multiviewae-1.1.3.tar.gz.

File metadata

  • Download URL: multiviewae-1.1.3.tar.gz
  • Upload date:
  • Size: 38.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.6.1 CPython/3.8.18 Linux/6.2.0-1011-azure

File hashes

Hashes for multiviewae-1.1.3.tar.gz
Algorithm Hash digest
SHA256 276b66272ef6a3e0f840734e2ca0fe5e157e6c9b930d2e4618675cd45f5d3653
MD5 dabbab4da2801213249751f7a9ead8f1
BLAKE2b-256 f1281e270eab31d04ef8a754cb7a4efd9ad86c270c5368ce28228444d7f2eb1d

See more details on using hashes here.
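
As an illustration of how a downloaded file can be checked against the SHA256 digest listed above, here is a generic sketch using only the Python standard library (not a PyPI-specific tool):

# Generic sketch: verify a downloaded sdist against the SHA256 digest listed above.
import hashlib

expected = "276b66272ef6a3e0f840734e2ca0fe5e157e6c9b930d2e4618675cd45f5d3653"

with open("multiviewae-1.1.3.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "MISMATCH")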

File details

Details for the file multiviewae-1.1.3-py3-none-any.whl.

File metadata

  • Download URL: multiviewae-1.1.3-py3-none-any.whl
  • Upload date:
  • Size: 69.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.6.1 CPython/3.8.18 Linux/6.2.0-1011-azure

File hashes

Hashes for multiviewae-1.1.3-py3-none-any.whl
Algorithm Hash digest
SHA256 0c7dccc4afb0b173d207feb5bec74ac60f7223cb7363eb9401e201470fc1397f
MD5 a20dda9be28091d264431545ee158327
BLAKE2b-256 5779232d0f4988548a91fd6e75f448bf0f1ca4acb183db3489cec3df6789ab93

See more details on using hashes here.
