
FlashPerceiver

Fast and memory efficient PyTorch implementation of the Perceiver [1, 2, 3] architecture with FlashAttention [4, 5] as attention backbone.

Features:

  • :zap: More than 2x speedup over a naive implementation.
  • :zap: Sub-linear¹ memory usage with respect to input sequence length and linear usage with respect to the number of latent vectors.
  • :zap: Out-of-the-box support for rotary positional embeddings [6].
  • :zap: Uses the new and improved FlashAttention-2 implementation.
  • :zap: Support for multiple inputs and flexible masking.

¹ For the attention components. See Performance for more information.

Installation

Note: The pyproject.toml was recently removed from the flash-attn repository, and with it PEP 517 compliance. This means flash-attn can no longer be declared as a dependency of this project and must be installed manually until the situation changes:

pip install flash-attn --no-build-isolation

Afterwards, install the actual flash-perceiver package:

pip install flash-perceiver

Usage

Perceiver

[Figure: The Perceiver architecture]

import torch

from flash_perceiver import Perceiver, utils

batch_size, seq_len, in_dim = 32, 128, 256

latent_dim = 512
num_latents = 512
out_dim = 128

model = Perceiver(
    input_dim=in_dim,
    depth=8,
    output_dim=out_dim,
    num_latents=num_latents,
    latent_dim=latent_dim,
    cross_heads=1,
    cross_head_dim=64,
    cross_rotary_emb_dim=0,
    cross_attn_dropout=0.0,
    latent_heads=8,
    latent_head_dim=64,
    latent_rotary_emb_dim=0,
    latent_attn_dropout=0.0,
    weight_tie_layers=False,
    gated_mlp=True,
    self_per_cross_attn=1,
    num_zero_tokens=None,
    use_flash_attn=True,
).cuda()

data = torch.randn(batch_size, seq_len, in_dim, device='cuda')

# `out_dim` specified; averages and projects output
# Note: FlashAttention only supports half precision,
#  so we use `torch.autocast` for the forward pass.
with torch.autocast('cuda'):
    out = model(data)

assert out.shape == (batch_size, out_dim)

Multiple inputs

A separate input for each cross-attention block can be used by providing a list of inputs to the forward method. The number of inputs must correspond to the depth configuration of the model.

By providing a list of integers to the input_dim argument in the constructor, each input can be configured to have a different dimension.

input_dims = [256, 512]

model = Perceiver(
    input_dim=input_dims,
    depth=2,  # must equal len(input_dim)
).cuda()

inputs = [
    torch.randn(batch_size, seq_len, in_dim, device='cuda')
    for in_dim in input_dims
]

with torch.autocast('cuda'):
    out = model(inputs)

assert out.shape == (batch_size, num_latents, latent_dim)

Masking

A boolean element-wise mask for the input can be provided. All elements that are not True will be masked out within the cross-attention operation. If a list of inputs is provided, a list of masks (one per input) can be given as well; it may contain None values for inputs without a mask.

mask = utils.random_mask(data)  # [batch_size, seq_len]

with torch.autocast('cuda'):
    out = model(data, mask=mask)
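The mask semantics can be illustrated with a naive, non-fused attention sketch in plain PyTorch (shapes and variable names here are illustrative; the library itself uses the fused FlashAttention kernel instead):

```python
import torch
import torch.nn.functional as F

# Naive reference for the cross-attention mask semantics: scores at
# non-True positions are set to -inf before the softmax, so those
# input elements receive exactly zero attention weight.
batch_size, num_latents, seq_len, dim = 2, 4, 8, 16

q = torch.randn(batch_size, num_latents, dim)   # latent queries
k = torch.randn(batch_size, seq_len, dim)       # input keys
mask = torch.rand(batch_size, seq_len) > 0.5    # True = keep, False = mask out
mask[:, 0] = True  # ensure at least one unmasked element per sequence

scores = q @ k.transpose(-2, -1) / dim**0.5
scores = scores.masked_fill(~mask[:, None, :], float('-inf'))
weights = F.softmax(scores, dim=-1)

# Masked-out positions contribute nothing to the attention output.
assert torch.all(weights[(~mask)[:, None, :].expand_as(weights)] == 0)
```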

Extract Embeddings

If a value for output_dim has been provided to the constructor, the final latent vectors will be averaged and then projected to the desired dimension. To extract the representations prior to the projecting step, set return_embeddings=True:

with torch.autocast('cuda'):
    embeds = model(data, return_embeddings=True)

assert embeds.shape == (batch_size, num_latents, latent_dim)

Custom Latents

For some applications it can be useful to have custom sets of latent vectors. For instance, for a multi-task setting, each task could have a separate set of learned latents.

The forward method supports custom latents via the latents argument. If provided, these are used instead of the module's own latent vectors. They must have shape [m, latent_dim] or [batch_size, m, latent_dim], where $m$ can be arbitrary.

To disable initializing random latent vectors as part of the model construction, pass num_latents=None to the constructor.
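One way to realize the multi-task case above is to keep a table of per-task latents and gather the right set per example. A minimal sketch (`task_latents` and `task_ids` are hypothetical names; the commented call mirrors the usage shown earlier):

```python
import torch

# Sketch: one learned latent set per task in a multi-task setting.
num_tasks, num_task_latents, latent_dim = 3, 16, 512
batch_size = 8

task_latents = torch.nn.Parameter(torch.randn(num_tasks, num_task_latents, latent_dim))
task_ids = torch.randint(num_tasks, (batch_size,))  # task of each example

# Gather yields [batch_size, num_task_latents, latent_dim],
# matching the shape expected by the `latents` argument.
latents = task_latents[task_ids]
assert latents.shape == (batch_size, num_task_latents, latent_dim)

# with torch.autocast('cuda'):
#     out = model(data, latents=latents)
```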

Extract Attention Weights

:warning: This is an experimental feature and requires a modified implementation of FlashAttention until the changes are eventually merged.

Pass return_attn_weights=True to the forward method of a model to extract the normalized attention weights of each attention layer. In this case, a tuple (output, attn_weights) is returned, where attn_weights is a list with one tensor per attention layer. The list follows the pattern [cross_attn_0, self_attn_0_0, ..., cross_attn_1, self_attn_1_0]. Attention maps for cross-attention layers have shape (batch_size, cross_heads, num_latents, seq_len); self-attention maps have shape (batch_size, latent_heads, num_latents, num_latents).

with torch.autocast('cuda'):
    out, all_attn_weights = model(data, return_attn_weights=True)

for i, attn_weights in enumerate(all_attn_weights):
    if i % model.num_attention_layers_per_block == 0:
        print('cross-attention map with shape', attn_weights.shape)
    else:
        print('self-attention map with shape', attn_weights.shape)

PerceiverIO

The PerceiverIO is a variant of the Perceiver architecture in which the encoder tower is followed by a decoder module that computes task-specific outputs via sets of queries.

This makes the architecture more flexible; it can be used for cases such as position-specific decoding of values or multi-task settings.

[Figure: The PerceiverIO architecture]

import torch

from flash_perceiver import PerceiverIO, utils

batch_size, seq_len, in_dim = 32, 128, 256

depth = 8
latent_dim = 512
num_latents = 512
query_dim = 128
num_queries = 32
proj_dim = 64

model = PerceiverIO(
    input_dim=in_dim,
    query_dim=query_dim,
    depth=depth,
    proj_dim=proj_dim,
    num_latents=num_latents,
    latent_dim=latent_dim,
    cross_heads=1,
    cross_head_dim=64,
    cross_rotary_emb_dim=0,
    cross_attn_dropout=0.0,
    latent_heads=8,
    latent_head_dim=64,
    latent_rotary_emb_dim=0,
    latent_attn_dropout=0.0,
    query_heads=1,
    query_head_dim=64,
    query_rotary_emb_dim=0,
    query_attn_dropout=0.0,
    weight_tie_layers=False,
    gated_mlp=True,
    use_flash_attn=True,
).cuda()

data = torch.randn(batch_size, seq_len, in_dim, device='cuda')

# Can be learned or correspond to positions, tokens, etc.
queries = torch.randn(num_queries, query_dim, device='cuda')

with torch.autocast('cuda'):
    out = model(data, queries=queries)

assert out.shape == (batch_size, num_queries, proj_dim)

Examples

Other usage examples are provided in the examples/ folder.

Performance

The Perceiver is designed as an attention architecture with sub-quadratic compute and memory complexity, in contrast to the quadratic requirements of a vanilla Transformer.

A naive implementation has $\mathcal{O}(nm)$ memory usage for the cross-attention modules and $\mathcal{O}(m^2)$ for the self-attention (latent) blocks, where $n$ is the number of input elements, $m$ is the number of latent vectors (a fixed hyperparameter), and generally $n \gg m$.

FlashAttention can reduce the memory usage to $\mathcal{O}(\sqrt{nm})$ for the cross-attention layers and $\mathcal{O}(m)$ for the latent self-attention layers. However, this only accounts for the computation of the attention mechanism. The input sequence and corresponding keys and values within the cross-attention modules will still grow with $n$.

Until the latter starts to dominate memory usage, this implementation allows the input sequence length to be scaled greatly. For instance, 16x larger input lengths can be achieved compared to perceiver-pytorch on an RTX 4090, with the other hyperparameters fixed (see run_benchmarks.py for the exact configuration).
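A quick back-of-envelope calculation (with purely illustrative, assumed numbers) shows why avoiding the materialized score matrix matters for long inputs, and why the keys and values still grow with $n$:

```python
# Illustrative memory estimate: the materialized cross-attention score
# matrix of a naive implementation vs. the keys/values that grow
# linearly with n in any implementation. All numbers are assumptions.
batch_size, heads = 32, 1
n, m, head_dim = 100_000, 512, 64   # n = input length, m = latents
bytes_fp16 = 2

# Naive: the [batch, heads, m, n] score matrix is materialized.
scores_bytes = batch_size * heads * m * n * bytes_fp16

# Either way: keys and values of the input scale linearly with n.
kv_bytes = 2 * batch_size * heads * n * head_dim * bytes_fp16

print(f"naive score matrix: {scores_bytes / 2**30:.2f} GiB")
print(f"keys + values:      {kv_bytes / 2**30:.2f} GiB")
```

Here the score matrix alone costs several times the keys and values combined, which is the part FlashAttention avoids materializing.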

Benchmarks

Benchmarks against other implementations (currently only perceiver-pytorch) can be run with:

python run_benchmarks.py

The script will create a benchmark_results.csv. The create_plots.py script can then be used to create plots.

The following data has been obtained with an RTX 4090 (24 GB of VRAM).

[Figure: Benchmark results on speedup]

[Figure: Benchmark results on memory usage reduction]

Note: The batch size for each configuration corresponds to the smallest value that works for all implementations. Especially for longer sequence lengths, this leads to decreasing GPU utilization and thus a lower speedup than theoretically possible. There are some ways to fix this, but my attempts so far have led to distorted results.

Acknowledgements

The implementation is inspired by lucidrains' Perceiver implementation and would not have been possible without Tri Dao's FlashAttention.

Planned features

These are a few features that are either planned or WIP. If you have urgent demand for some of them, feel free to write an issue:

  • Perceiver IO [2]
  • Perceiver AR [3] (or an AR demo in general)
  • Demos
  • Tests (see tests/)
  • Allow more flexible cross-attention configurations
  • Benchmarks against other Perceiver implementations, e.g. DeepMind's or Krasser's
  • If FA2 is eventually merged into PyTorch, drop the flash-attn dependency
  • Configure and provide multiple inputs as dict
  • TensorDict / tensorclass inputs
  • Extract attention weights

References

[1] Jaegle, Andrew, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, and Joao Carreira. “Perceiver: General Perception with Iterative Attention.” arXiv, June 22, 2021. http://arxiv.org/abs/2103.03206.

[2] Jaegle, Andrew, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, et al. “Perceiver IO: A General Architecture for Structured Inputs & Outputs.” arXiv, March 15, 2022. http://arxiv.org/abs/2107.14795.

[3] Hawthorne, Curtis, Andrew Jaegle, Cătălina Cangea, Sebastian Borgeaud, Charlie Nash, Mateusz Malinowski, Sander Dieleman, et al. “General-Purpose, Long-Context Autoregressive Modeling with Perceiver AR.” arXiv, June 14, 2022. http://arxiv.org/abs/2202.07765.

[4] Dao, Tri, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. “FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness.” arXiv, June 23, 2022. https://doi.org/10.48550/arXiv.2205.14135.

[5] Dao, Tri. “FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning.” arXiv, July 17, 2023. https://doi.org/10.48550/arXiv.2307.08691.

[6] Su, Jianlin, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. “RoFormer: Enhanced Transformer with Rotary Position Embedding.” arXiv, August 8, 2022. https://doi.org/10.48550/arXiv.2104.09864.
