
Project description

MambaVision: A Hybrid Mamba-Transformer Vision Backbone

Official PyTorch implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone.

Ali Hatamizadeh and Jan Kautz.

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing


MambaVision demonstrates strong performance, achieving a new SOTA Pareto front in terms of Top-1 accuracy and throughput.

We introduce a novel mixer block by creating a symmetric path without SSM to enhance the modeling of global context.

MambaVision has a hierarchical architecture that employs both self-attention and mixer blocks.

💥 News 💥

  • [07.24.2024] MambaVision Hugging Face models are released!

  • [07.14.2024] We added support for processing images of any resolution.

  • [07.12.2024] The paper is now available on arXiv!

  • [07.11.2024] The MambaVision pip package is released!

  • [07.10.2024] We have released the code and model checkpoints for MambaVision!

Quick Start

Hugging Face (Classification + Feature extraction)

Pretrained MambaVision models can be used via the Hugging Face library with one line of code. First, install the requirements:

pip install mambavision

The model can then be imported:

>>> from transformers import AutoModelForImageClassification

>>> model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-T-1K", trust_remote_code=True)

We demonstrate an end-to-end image classification example in the following, using an image from the COCO validation set (fetched by URL in the snippet below) as input:

from transformers import AutoModelForImageClassification
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests

model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-T-1K", trust_remote_code=True)

# eval mode for inference
model.cuda().eval()

# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 224, 224)  # MambaVision supports any input resolution

transform = create_transform(input_size=input_resolution,
                             is_training=False,
                             mean=model.config.mean,
                             std=model.config.std,
                             crop_mode=model.config.crop_mode,
                             crop_pct=model.config.crop_pct)

inputs = transform(image).unsqueeze(0).cuda()
# model inference
outputs = model(inputs)
logits = outputs['logits'] 
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])

The predicted label is brown bear, bruin, Ursus arctos.
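Beyond the arg-max class, the logits can also be turned into class probabilities. The following is a minimal sketch that continues the snippet above, reusing its logits and model variables:

import torch

# Convert logits to probabilities and report the five most likely classes
probs = torch.softmax(logits, dim=-1)
top5_prob, top5_idx = probs.topk(5, dim=-1)
for p, idx in zip(top5_prob[0].tolist(), top5_idx[0].tolist()):
    print(f"{model.config.id2label[idx]}: {p:.4f}")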

You can also use Hugging Face MambaVision models for feature extraction. The model returns the outputs of each of its 4 stages (hierarchical multi-scale features) as well as the final flattened, average-pooled features. The multi-scale stage features are suited to downstream tasks such as detection and segmentation, while the pooled features are suited to classification.

The following snippet can be used for feature extraction:

from transformers import AutoModel
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests

model = AutoModel.from_pretrained("nvidia/MambaVision-T-1K", trust_remote_code=True)

# eval mode for inference
model.cuda().eval()

# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 224, 224)  # MambaVision supports any input resolution

transform = create_transform(input_size=input_resolution,
                             is_training=False,
                             mean=model.config.mean,
                             std=model.config.std,
                             crop_mode=model.config.crop_mode,
                             crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
out_avg_pool, features = model(inputs)
print("Size of the averaged pool features:", out_avg_pool.size())  # torch.Size([1, 640])
print("Number of stages in extracted features:", len(features)) # 4 stages
print("Size of extracted features in stage 1:", features[0].size()) # torch.Size([1, 80, 56, 56])
print("Size of extracted features in stage 4:", features[3].size()) # torch.Size([1, 640, 7, 7])

Currently, we offer MambaVision-T-1K, MambaVision-T2-1K, MambaVision-S-1K, MambaVision-B-1K, MambaVision-L-1K and MambaVision-L2-1K on Hugging Face. All models can also be viewed here.
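To compare variants, the listed checkpoints can be loaded in a loop. A minimal sketch (each checkpoint is downloaded on first use, so this is bandwidth-heavy):

from transformers import AutoModelForImageClassification

# Print the parameter count of a few of the released variants
for name in ["nvidia/MambaVision-T-1K", "nvidia/MambaVision-S-1K", "nvidia/MambaVision-B-1K"]:
    model = AutoModelForImageClassification.from_pretrained(name, trust_remote_code=True)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")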

Classification (pip package)

Pretrained MambaVision models can also be imported from the pip package with one line of code. First, install the package:

pip install mambavision

A pretrained MambaVision model with default hyper-parameters can be created as follows:

>>> from mambavision import create_model

# Define mamba_vision_T model

>>> model = create_model('mamba_vision_T', pretrained=True, model_path="/tmp/mambavision_tiny_1k.pth.tar")

The list of available pretrained models includes mamba_vision_T, mamba_vision_T2, mamba_vision_S, mamba_vision_B, mamba_vision_L and mamba_vision_L2.

We can also test the model by passing a dummy image of arbitrary resolution; the output is the logits:

>>> import torch

>>> image = torch.rand(1, 3, 512, 224).cuda() # place image on cuda
>>> model = model.cuda() # place model on cuda
>>> output = model(image) # output logit size is [1, 1000]
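For a rough single-GPU sanity check against the throughput column in the results table below, batched forward passes can be timed. This is a minimal sketch, not the benchmark behind the official numbers; the batch size and iteration counts are arbitrary choices:

import time
import torch

batch = torch.rand(64, 3, 224, 224).cuda()
model = model.cuda().eval()

with torch.no_grad():
    for _ in range(10):  # warm-up iterations
        model(batch)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(50):  # timed iterations
        model(batch)
    torch.cuda.synchronize()
    elapsed = time.time() - start

print(f"Throughput: {50 * batch.shape[0] / elapsed:.1f} img/s")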

Using the pretrained models from our pip package, you can simply run validation:

python validate_pip_model.py --model mamba_vision_T --data_dir=$DATA_PATH --batch-size $BS 

FAQ

  1. Does MambaVision support processing images at any input resolution?

Yes! You can pass images of any arbitrary resolution without needing to change the model.

  2. Can I apply MambaVision to downstream tasks like detection and segmentation?

Yes! We are working to release this support very soon. In the meantime, employing MambaVision backbones for these tasks is very similar to other models in the mmseg or mmdet packages. In addition, the MambaVision Hugging Face models provide feature-extraction capability that can be used for downstream tasks; see the example above and the sketch after this list.

  3. I am interested in re-implementing MambaVision in my own repository. Can I use the pretrained weights?

Yes! The pretrained weights are released under CC-BY-NC-SA-4.0. Please submit an issue in this repo and we will add your repository to the README of our codebase and properly acknowledge your efforts.
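To drop the Hugging Face feature extractor in next to other backbones, it can be wrapped in a small module that exposes only the multi-scale stage outputs. The class below is a hypothetical sketch for illustration (the wrapper name and the stride comment are inferred from the stage shapes printed earlier, not part of the released code):

import torch
import torch.nn as nn
from transformers import AutoModel

class MambaVisionBackbone(nn.Module):
    """Hypothetical wrapper exposing the four multi-scale stage outputs."""
    def __init__(self, name="nvidia/MambaVision-T-1K"):
        super().__init__()
        self.model = AutoModel.from_pretrained(name, trust_remote_code=True)

    def forward(self, x):
        # The Hugging Face model returns (pooled_features, stage_features)
        _, features = self.model(x)
        return features  # four tensors at strides 4, 8, 16 and 32

backbone = MambaVisionBackbone().cuda().eval()
with torch.no_grad():
    feats = backbone(torch.rand(1, 3, 224, 224).cuda())
print([tuple(f.shape) for f in feats])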

Results + Pretrained Models

ImageNet-1K

MambaVision ImageNet-1K Pretrained Models

Name            Acc@1(%)   Acc@5(%)   Throughput(Img/Sec)   Resolution   #Params(M)   FLOPs(G)   Download
MambaVision-T   82.3       96.2       6298                  224x224      31.8         4.4        model
MambaVision-T2  82.7       96.3       5990                  224x224      35.1         5.1        model
MambaVision-S   83.3       96.5       4700                  224x224      50.1         7.5        model
MambaVision-B   84.2       96.9       3670                  224x224      97.7         15.0       model
MambaVision-L   85.0       97.1       2190                  224x224      227.9        34.9       model
MambaVision-L2  85.3       97.2       1021                  224x224      241.5        37.5       model

Installation

We provide a Dockerfile. In addition, assuming that a recent PyTorch package is installed, the dependencies can be installed by running:

pip install -r requirements.txt

Evaluation

The MambaVision models can be evaluated on the ImageNet-1K validation set using the following command:

python validate.py \
--model <model-name> \
--checkpoint <checkpoint-path> \
--data_dir <imagenet-path> \
--batch-size <batch-size-per-gpu>

Here --model is the MambaVision variant (e.g. mambavision_tiny_1k), --checkpoint is the path to the pretrained model weights, --data_dir is the path to the ImageNet-1K validation set, and --batch-size is the batch size per GPU. We also provide a sample script here.
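For example, assuming the tiny checkpoint from the table above has been downloaded (the paths below are placeholders):

python validate.py --model mambavision_tiny_1k --checkpoint /tmp/mambavision_tiny_1k.pth.tar --data_dir /path/to/imagenet --batch-size 128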

Citation

If you find MambaVision to be useful for your work, please consider citing our paper:

@article{hatamizadeh2024mambavision,
  title={MambaVision: A Hybrid Mamba-Transformer Vision Backbone},
  author={Hatamizadeh, Ali and Kautz, Jan},
  journal={arXiv preprint arXiv:2407.08083},
  year={2024}
}


Licenses

Copyright © 2024, NVIDIA Corporation. All rights reserved.

This work is made available under the NVIDIA Source Code License-NC. Click here to view a copy of this license.

The pre-trained models are shared under CC-BY-NC-SA-4.0. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.

For license information regarding the timm repository, please refer to its repository.

For license information regarding the ImageNet dataset, please see the ImageNet official website.

Acknowledgement

This repository is built on top of the timm repository. We thank Ross Wightman for creating and maintaining this high-quality library.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mambavision-1.0.9.tar.gz (46.9 kB)

Uploaded Source

Built Distribution

mambavision-1.0.9-py3-none-any.whl (56.9 kB)

Uploaded Python 3

File details

Details for the file mambavision-1.0.9.tar.gz.

File metadata

  • Download URL: mambavision-1.0.9.tar.gz
  • Upload date:
  • Size: 46.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.7.10

File hashes

Hashes for mambavision-1.0.9.tar.gz
Algorithm Hash digest
SHA256 b8029ac7922046c65c847e4b3b752d4d3f37dfa6acf5894ae58e1c213620803d
MD5 515459fd6d677203f2c65ea33cbebf17
BLAKE2b-256 f1a83c8d99866d85ae65e22c0d4568842e69a6a5c24ac8620c9fe426c5ae8d05

See more details on using hashes here.
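To verify a downloaded archive against the SHA256 digest above, a minimal Python sketch (the filename assumes the archive was saved to the current directory):

import hashlib

# Compare the archive's SHA256 against the digest published above
with open("mambavision-1.0.9.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
expected = "b8029ac7922046c65c847e4b3b752d4d3f37dfa6acf5894ae58e1c213620803d"
print("OK" if digest == expected else "MISMATCH")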

File details

Details for the file mambavision-1.0.9-py3-none-any.whl.

File metadata

  • Download URL: mambavision-1.0.9-py3-none-any.whl
  • Upload date:
  • Size: 56.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.7.10

File hashes

Hashes for mambavision-1.0.9-py3-none-any.whl
Algorithm Hash digest
SHA256 98f0a186b0e93613b122706191cc8fda7d69f7f961bf6c9d2cb91292d8c6bb0b
MD5 44c6ae584f86b6054323501d73df8419
BLAKE2b-256 155e72f64a39e868a38e5f52e0d6f7dec2236f4d525cb848160c0c849ac910b4

See more details on using hashes here.
