A fork of perceiver-pytorch that supports multiple modalities for the Perceiver architecture.

Multi Modality Perceiver - Pytorch

Implementation of Perceiver, with support for multi-modality inputs. A fork of lucidrains' perceiver-pytorch (https://github.com/lucidrains/perceiver-pytorch), extended with multi-modality support and with the option to split text embeddings into chunks across layers.

Install

To install this Perceiver implementation with multi-modality support (the package also includes the original single-modality Perceiver):

$ pip install perceiver-multi-modality-pytorch

Import with:

from perceiver_pytorch.modalities import modality_encoding
from perceiver_pytorch.multi_modality_perceiver import MultiModalityPerceiver, InputModality

See tests/test_multimodality_perceiver.py for a complete example. For the variant that splits text embeddings across layers, import:

from perceiver_pytorch.modalities import InputModalityWithEmbedding
from perceiver_pytorch.multi_modality_with_text_perceiver import MultiModalityWithTextPerceiver

See tests/test_multimodality_with_text_perceiver.py for a complete example.

To install the original Perceiver implementation on its own, follow the instructions at the lucidrains repo (https://github.com/lucidrains/perceiver-pytorch).

Usage

import torch
from perceiver_pytorch import Perceiver

model = Perceiver(
    input_channels = 3,          # number of channels for each token of the input
    input_axis = 2,              # number of axes for input data (2 for images, 3 for video)
    num_freq_bands = 6,          # number of freq bands, with original value (2 * K + 1)
    max_freq = 10.,              # maximum frequency, hyperparameter depending on how fine the data is
    depth = 6,                   # depth of net
    num_latents = 256,           # number of latents, or induced set points, or centroids. different papers giving it different names
    latent_dim = 512,            # latent dimension
    cross_heads = 1,             # number of heads for cross attention. paper said 1
    latent_heads = 8,            # number of heads for latent self attention, 8
    cross_dim_head = 64,
    latent_dim_head = 64,
    num_classes = 1000,          # output number of classes
    attn_dropout = 0.,
    ff_dropout = 0.,
    weight_tie_layers = False    # whether to weight tie layers (optional, as indicated in the diagram)
)

img = torch.randn(1, 224, 224, 3) # 1 imagenet image, pixelized

model(img) # (1, 1000)
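
The same constructor handles video by setting input_axis = 3, so that the positional encodings cover frames as well as height and width. A minimal sketch along the lines of the image example above (the 16-frame clip size is an arbitrary choice for illustration):

video_model = Perceiver(
    input_channels = 3,          # RGB channels per token
    input_axis = 3,              # 3 axes for video: frames, height, width
    num_freq_bands = 6,
    max_freq = 10.,
    depth = 6,
    num_latents = 256,
    latent_dim = 512,
    cross_heads = 1,
    latent_heads = 8,
    cross_dim_head = 64,
    latent_dim_head = 64,
    num_classes = 1000,
    attn_dropout = 0.,
    ff_dropout = 0.,
    weight_tie_layers = False
)

video = torch.randn(1, 16, 224, 224, 3) # 1 clip of 16 RGB frames

video_model(video) # (1, 1000)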

Multi-modality perceiver

An attractive feature of the perceiver architecture is that it can process multiple modalities of data in the same batch. This is not obvious from the forward signature shown above, but a relatively modest change makes it possible to process video, images and audio with a single model in one forward pass. This feature is demonstrated by MultiModalityPerceiver, contributed by Fabien Campagne.

import torch
from perceiver_pytorch.multi_modality_perceiver import MultiModalityPerceiver, InputModality

image_inputs = torch.rand(size=(3, 260, 260, 3), requires_grad=True)
video_inputs = torch.rand(size=(3, 32, 260, 260, 3), requires_grad=True)
audio_inputs = torch.rand(size=(3, 44100, 1), requires_grad=True)

video_modality = InputModality(
    name='video',
    input_channels=3,  # number of channels for each token of the input
    input_axis=3,  # number of axes, 3 for video
    num_freq_bands=6,  # number of freq bands, with original value (2 * K + 1)
    max_freq=4.,  # maximum frequency, hyperparameter depending on how fine the data is
)
image_modality = InputModality(
    name='image',
    input_channels=3,  # number of channels for each token of the input
    input_axis=2,  # number of axes, 2 for images
    num_freq_bands=6,  # number of freq bands, with original value (2 * K + 1)
    max_freq=4.,  # maximum frequency, hyperparameter depending on how fine the data is
)
audio_modality = InputModality(
    name='audio',
    input_channels=1,  # number of channels for mono audio
    input_axis=1,  # number of axes, 1 for audio
    num_freq_bands=6,  # number of freq bands, with original value (2 * K + 1)
    max_freq=8.,  # maximum frequency, hyperparameter depending on how fine the data is
)
model = MultiModalityPerceiver(
    modalities=(video_modality, image_modality, audio_modality),
    depth=6,  # depth of net
    num_latents=12,
    # number of latents, or induced set points, or centroids. different papers giving it different names
    latent_dim=64,  # latent dimension
    cross_heads=1,  # number of heads for cross attention. paper said 1
    latent_heads=8,  # number of heads for latent self attention, 8
    cross_dim_head=64,
    latent_dim_head=64,
    num_classes=1000,  # output number of classes
    attn_dropout=0.,
    ff_dropout=0.,
    weight_tie_layers=True
    # whether to weight tie layers (optional, as indicated in the diagram)
)
result = model({'image': image_inputs,
                'video': video_inputs,
                'audio': audio_inputs})
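
As with the single-modality Perceiver, the result should be a single tensor of logits with one row per batch element, regardless of how many modalities were passed in. A quick sanity check for the configuration above (batch size 3, num_classes=1000), assuming that pooled output shape:

print(result.shape)              # expected: torch.Size([3, 1000])
assert result.shape == (3, 1000)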

Text perceiver

While the Perceiver architecture described in [jaegle2021perceiver] could support text if the text were embedded and each dimension of the embedding provided as an input channel, this introduces a mismatch between the text embedding dimension (typically large, 512/768 or more) and the number of channels used for video and images (typically 3: red, green and blue) or audio (1 for mono, 2 for stereo). When training text embeddings from scratch, this mismatch creates an opportunity: there should be no need for the perceiver to attend to the entire text embedding in every layer. If we split the text embedding into as many chunks as there are layers in the perceiver, we reduce how much the other modalities need to be padded, and we introduce structure into the learned embeddings, where parts of the text embedding can specialize according to the needs of each layer. The implementation provided in this repo can be used to explore whether splitting text embeddings across layers is beneficial, by comparing the performance of MultiModalityWithTextPerceiver with that of MultiModalityPerceiver.
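
As a rough illustration of the chunking idea (not the library's internal code), splitting a 768-dimensional text embedding across a depth-6 perceiver gives each layer a 128-dimensional slice to attend to, which is far closer to the channel counts of the other modalities than the full embedding:

import torch

depth = 6                                   # number of perceiver layers
text_embedding = torch.randn(3, 128, 768)   # (batch, tokens, embedding_dim)

# One chunk of the embedding dimension per layer.
chunks = torch.chunk(text_embedding, chunks=depth, dim=-1)
print([c.shape[-1] for c in chunks])        # [128, 128, 128, 128, 128, 128]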

Citations

@misc{jaegle2021perceiver,
    title   = {Perceiver: General Perception with Iterative Attention},
    author  = {Andrew Jaegle and Felix Gimeno and Andrew Brock and Andrew Zisserman and Oriol Vinyals and Joao Carreira},
    year    = {2021},
    eprint  = {2103.03206},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV}
}
@misc{campagne2021textperceiver,
    title   = {Adapting Perceiver for learning with text modalities},
    author  = {Fabien Campagne},
    year    = {2021},
    eprint  = {unpublished results},
}

