MambaVision: A Hybrid Mamba-Transformer Vision Backbone
Official PyTorch implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone.
For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing
MambaVision demonstrates strong performance, achieving a new SOTA Pareto front in terms of Top-1 accuracy and throughput.
We introduce a novel mixer block that creates a symmetric path without SSM to enhance the modeling of global context. MambaVision has a hierarchical architecture that employs both self-attention and mixer blocks.
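The sketch below illustrates, in plain PyTorch, one way such a mixer could be organized; it is a simplified illustration under stated assumptions, not the official implementation. The input is projected and split across two channel groups: one group runs through a convolution, activation, and an SSM, while the symmetric group skips the SSM; both outputs are concatenated and projected back. The `MixerBlockSketch` name, the `ssm` placeholder argument, and the layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MixerBlockSketch(nn.Module):
    """Simplified sketch of a hybrid mixer: an SSM path plus a
    symmetric path without SSM (illustrative, not the official code)."""

    def __init__(self, dim: int, ssm: nn.Module):
        super().__init__()
        half = dim // 2
        self.in_proj = nn.Linear(dim, dim)
        # SSM path: depthwise conv -> SiLU -> selective scan
        self.conv_ssm = nn.Conv1d(half, half, kernel_size=3, padding=1, groups=half)
        self.ssm = ssm  # placeholder for any selective-scan module
        # Symmetric path: same conv + SiLU, but no SSM
        self.conv_sym = nn.Conv1d(half, half, kernel_size=3, padding=1, groups=half)
        self.act = nn.SiLU()
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        x = self.in_proj(x)
        a, b = x.chunk(2, dim=-1)  # split channels across the two paths
        a = self.act(self.conv_ssm(a.transpose(1, 2))).transpose(1, 2)
        a = self.ssm(a)            # global context via the state-space model
        b = self.act(self.conv_sym(b.transpose(1, 2))).transpose(1, 2)
        return self.out_proj(torch.cat([a, b], dim=-1))

# Smoke test with an identity stand-in for the SSM:
# MixerBlockSketch(64, nn.Identity())(torch.rand(1, 196, 64)).shape -> (1, 196, 64)
```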
💥 News 💥
- [07.11.2024] The MambaVision pip package is released!
- [07.10.2024] We have released the code and model checkpoints for MambaVision!
Quick Start
Classification
We can import pre-trained MambaVision models with one line of code. First, install the pip package:

```
pip install mambavision
```
A pretrained MambaVision model with default hyper-parameters can be created as follows:

```python
>>> from mambavision import create_model

# Define the mamba_vision_T model at 224 x 224 resolution
>>> model = create_model('mamba_vision_T',
...                      pretrained=True,
...                      model_path="/tmp/mambavision_tiny_1k.pth.tar")
```

`model_path` sets the directory to which the pretrained checkpoint is downloaded.
We can also test the model by passing a dummy input image; the output is the logits:

```python
>>> import torch
>>> image = torch.rand(1, 3, 224, 224)
>>> output = model(image)  # torch.Size([1, 1000])
```
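To turn the logits into a prediction, apply a softmax over the class dimension and take the top-k entries (the indices follow the ImageNet-1K label order). This is standard PyTorch, not a MambaVision-specific API:

```python
>>> probs = output.softmax(dim=-1)       # (1, 1000) class probabilities
>>> top5_prob, top5_idx = probs.topk(5)  # five highest-scoring classes
```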
Results + Pretrained Models
ImageNet-1K
MambaVision ImageNet-1K Pretrained Models
| Name | Acc@1 (%) | Acc@5 (%) | Throughput (img/sec) | Resolution | #Params (M) | FLOPs (G) | Download |
|---|---|---|---|---|---|---|---|
| MambaVision-T | 82.3 | 96.2 | 6298 | 224x224 | 31.8 | 4.4 | model |
| MambaVision-T2 | 82.7 | 96.3 | 5990 | 224x224 | 35.1 | 5.1 | model |
| MambaVision-S | 83.3 | 96.5 | 4700 | 224x224 | 50.1 | 7.5 | model |
| MambaVision-B | 84.2 | 96.9 | 3670 | 224x224 | 97.7 | 15.0 | model |
| MambaVision-L | 85.0 | 97.1 | 2190 | 224x224 | 227.9 | 34.9 | model |
| MambaVision-L2 | 85.3 | 97.2 | 1021 | 224x224 | 241.5 | 37.5 | model |
Installation
We provide a Dockerfile. In addition, assuming that a recent PyTorch package is installed, the dependencies can be installed by running:

```
pip install -r requirements.txt
```
Evaluation
The MambaVision models can be evaluated on the ImageNet-1K validation set using the following command:

```
python validate.py \
  --model <model-name> \
  --checkpoint <checkpoint-path> \
  --data_dir <imagenet-path> \
  --batch-size <batch-size-per-gpu>
```
Here `--model` is the MambaVision variant (e.g. `mambavision_tiny_1k`), `--checkpoint` is the path to the pretrained model weights, `--data_dir` is the path to the ImageNet-1K validation set, and `--batch-size` is the batch size per GPU. We also provide a sample script here.
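For example, an evaluation run for the tiny variant might look like the following; the checkpoint and dataset paths are placeholders to adapt to your setup:

```
python validate.py \
  --model mambavision_tiny_1k \
  --checkpoint /tmp/mambavision_tiny_1k.pth.tar \
  --data_dir /path/to/ImageNet \
  --batch-size 128
```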
Star History
Licenses
Copyright © 2024, NVIDIA Corporation. All rights reserved.
This work is made available under the NVIDIA Source Code License-NC. Click here to view a copy of this license.
For license information regarding the timm repository, please refer to its repository.
For license information regarding the ImageNet dataset, please see the ImageNet official website.
Acknowledgement
This repository is built on top of the timm repository. We thank Ross Wightman for creating and maintaining this high-quality library.