PyTorch Multimodal Library
TorchMultimodal (Beta Release)
Models | Example scripts | Getting started | Code overview | Installation | Contributing | License
Introduction
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale, including both content understanding and generative models. TorchMultimodal contains:
- A repository of modular and composable building blocks (fusion layers, loss functions, datasets and utilities).
- A collection of common multimodal model classes built up from said building blocks with pretrained weights for canonical configurations.
- A set of examples that show how to combine these building blocks with components and common infrastructure from across the PyTorch Ecosystem to replicate state-of-the-art models published in the literature. These examples should serve as baselines for ongoing research in the field, as well as a starting point for future work.
Models
TorchMultimodal contains a number of models, including
- ALBEF: model class, paper
- BLIP-2: model class, paper
- CLIP: model class, paper
- CoCa: model class, paper
- DALL-E 2: model, paper
- FLAVA: model class, paper
- MAE/Audio MAE: model class, MAE paper, Audio MAE paper
- MDETR: model class, paper
Example scripts
In addition to the above models, we provide example scripts for training, fine-tuning, and evaluation of models on popular multimodal tasks. Examples can be found under examples/ and include
Model | Supported Tasks
---|---
ALBEF | Retrieval, Visual Question Answering
DDPM | Training and Inference (notebook)
FLAVA | Pretraining, Fine-tuning, Zero-shot
MDETR | Phrase grounding, Visual Question Answering
MUGEN | Text-to-video retrieval, Text-to-video generation
Omnivore | Pre-training, Evaluation
Getting started
Below we give minimal examples of how you can write a simple training or zero-shot evaluation script using components from TorchMultimodal.
FLAVA zero-shot example
```python
import torch
from PIL import Image
from torchmultimodal.models.flava.model import flava_model
from torchmultimodal.transforms.bert_text_transform import BertTextTransform
from torchmultimodal.transforms.flava_transform import FLAVAImageTransform

# Define helper function for zero-shot prediction
def predict(zero_shot_model, image, labels):
    zero_shot_model.eval()
    with torch.no_grad():
        image = image_transform(image)["image"].unsqueeze(0)
        texts = text_transform(labels)
        _, image_features = zero_shot_model.encode_image(image, projection=True)
        _, text_features = zero_shot_model.encode_text(texts, projection=True)
        scores = image_features @ text_features.t()
        probs = torch.nn.Softmax(dim=-1)(scores)
        label = labels[torch.argmax(probs)]
        print(
            "Label probabilities: ",
            {labels[i]: probs[:, i] for i in range(len(labels))},
        )
        print(f"Predicted label: {label}")


image_transform = FLAVAImageTransform(is_train=False)
text_transform = BertTextTransform()
zero_shot_model = flava_model(pretrained=True)
img = Image.open("my_image.jpg")  # point to your own image
predict(zero_shot_model, img, ["dog", "cat", "house"])

# Example output:
# Label probabilities:  {'dog': tensor([0.8059]), 'cat': tensor([0.0971]), 'house': tensor([0.0970])}
# Predicted label: dog
```
MAE training example
```python
import torch
from torch.utils.data import DataLoader
from torchmultimodal.models.masked_auto_encoder.model import vit_l_16_image_mae
from torchmultimodal.models.masked_auto_encoder.utils import (
    CosineWithWarmupAndLRScaling,
)
from torchmultimodal.modules.losses.reconstruction_loss import ReconstructionLoss
from torchmultimodal.transforms.mae_transform import ImagePretrainTransform

mae_transform = ImagePretrainTransform()
dataset = MyDatasetClass(transforms=mae_transform)  # you should define this
dataloader = DataLoader(dataset, batch_size=8)

# Instantiate model and loss
mae_model = vit_l_16_image_mae()
mae_loss = ReconstructionLoss()

# Define optimizer and lr scheduler
optimizer = torch.optim.AdamW(mae_model.parameters())
lr_scheduler = CosineWithWarmupAndLRScaling(
    optimizer, max_iters=1000, warmup_iters=100  # you should set these
)

# Train one epoch
for batch in dataloader:
    optimizer.zero_grad()
    model_out = mae_model(batch["images"])
    loss = mae_loss(model_out.decoder_pred, model_out.label_patches, model_out.mask)
    loss.backward()
    optimizer.step()
    lr_scheduler.step()
```
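After the epoch finishes, the weights can be checkpointed with standard PyTorch utilities (the file name below is just an example):

```python
# Save the trained MAE weights for later fine-tuning or evaluation
torch.save(mae_model.state_dict(), "mae_vit_l_16.pt")
```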
Code overview
torchmultimodal/diffusion_labs
diffusion_labs contains components for building diffusion models. For more details on these components, see diffusion_labs/README.md.
torchmultimodal/models
Look here for model classes as well as any other modeling code specific to a given architecture. E.g. the directory torchmultimodal/models/blip2 contains modeling components specific to BLIP-2.
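For example, the model factories used in the Getting started section live under this directory (import paths copied from the examples above):

```python
from torchmultimodal.models.flava.model import flava_model
from torchmultimodal.models.masked_auto_encoder.model import vit_l_16_image_mae

# FLAVA with pretrained weights, as used in the zero-shot example
flava = flava_model(pretrained=True)

# ViT-L/16 image MAE, as used in the training example
mae = vit_l_16_image_mae()
```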
torchmultimodal/modules
Look here for common generic building blocks that can be stitched together to build a new architecture. This includes layers like codebooks, patch embeddings, or transformer encoder/decoders, losses like contrastive loss with temperature or reconstruction loss, encoders like ViT and BERT, and fusion modules like Deep Set fusion.
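As a minimal sketch of using one of these blocks in isolation, the contrastive loss with temperature mentioned above can be applied to any pair of embedding batches. The import path below reflects the module layout at the time of writing and may differ in your installed version:

```python
import torch
from torchmultimodal.modules.losses.contrastive_loss_with_temperature import (
    ContrastiveLossWithTemperature,
)

contrastive_loss = ContrastiveLossWithTemperature()

# Stand-in image and text embeddings, e.g. projected encoder outputs
image_embeddings = torch.randn(8, 512)
text_embeddings = torch.randn(8, 512)

loss = contrastive_loss(image_embeddings, text_embeddings)
loss.backward()  # gradients flow to the learnable temperature as well
```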
torchmultimodal/transforms
Look here for common data transforms from popular models, e.g. CLIP, FLAVA, and MAE.
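For instance, the FLAVA transforms from the zero-shot example above can be used on their own to preprocess raw images and text (return types inferred from that example):

```python
from PIL import Image
from torchmultimodal.transforms.bert_text_transform import BertTextTransform
from torchmultimodal.transforms.flava_transform import FLAVAImageTransform

image_transform = FLAVAImageTransform(is_train=False)
text_transform = BertTextTransform()

# The image transform returns a dict; the "image" entry is the preprocessed tensor
image_tensor = image_transform(Image.open("my_image.jpg"))["image"]
# The text transform tokenizes a list of strings into BERT token ids
text_ids = text_transform(["a photo of a dog", "a photo of a cat"])
```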
Installation
TorchMultimodal requires Python >= 3.8. The library can be installed with or without CUDA support. The following assumes conda is installed.
Prerequisites
- Install the conda environment:

```
conda create -n torch-multimodal python=<python_version>
conda activate torch-multimodal
```

- Install pytorch, torchvision, and torchaudio. See the PyTorch documentation.

```
# Use the current CUDA version as seen at https://pytorch.org/get-started/locally/
# Select the nightly PyTorch build, Linux as the OS, and conda. Pick the most recent CUDA version.
conda install pytorch torchvision torchaudio pytorch-cuda=<cuda_version> -c pytorch-nightly -c nvidia

# For CPU-only install
conda install pytorch torchvision torchaudio cpuonly -c pytorch-nightly
```
Install from binaries
Nightly binaries on Linux for Python 3.8 and 3.9 can be installed via pip wheels. For now we only support the Linux platform through PyPI.
```
python -m pip install torchmultimodal-nightly
```
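A quick way to confirm the install is to import one of the model factories shown in the Getting started section:

```
python -c "from torchmultimodal.models.flava.model import flava_model; print('TorchMultimodal import OK')"
```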
Building from Source
Alternatively, you can build from source and run our examples:

```
git clone --recursive https://github.com/facebookresearch/multimodal.git multimodal
cd multimodal

pip install -e .
```
For developers, please follow the development installation.
Contributing
We welcome any feature requests, bug reports, or pull requests from the community. See the CONTRIBUTING file for how to help out.
License
TorchMultimodal is BSD licensed, as found in the LICENSE file.