VisionLLaMA - PyTorch
Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta. Paper: https://arxiv.org/abs/2403.00522
install
```bash
$ pip install vision-llama
```
usage
```python
import torch
from vision_llama import VisionLlamaBlock

# Create a random batch of one 3-channel 224x224 image
x = torch.randn(1, 3, 224, 224)

# Create an instance of the VisionLlamaBlock model with the specified parameters
model = VisionLlamaBlock(768, 12, 3, 12)

# Run the image through the block once, then inspect the output
out = model(x)
print(out.shape)
print(out)
```
License
MIT
Citation
```bibtex
@misc{chu2024visionllama,
    title={VisionLLaMA: A Unified LLaMA Interface for Vision Tasks},
    author={Xiangxiang Chu and Jianlin Su and Bo Zhang and Chunhua Shen},
    year={2024},
    eprint={2403.00522},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
todo
- Implement AS2DRoPE (auto-scaled 2D RoPE); see the sketch below
- Implement GSA (global sub-sampled attention)
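
For reference, here is a minimal sketch of what AS2DRoPE could look like, assuming the usual axial split of the head dimension between the x and y coordinates and the paper's auto-scaling of positions toward a fixed anchor resolution. The function and parameter names (`as2d_rope`, `anchor`) are illustrative, not part of this package's API.

```python
import torch


def rotate_half(x):
    # (x1, x2) -> (-x2, x1) on the last dimension, as in standard RoPE
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def axial_angles(pos, dim, base=10000.0):
    # 1D RoPE angles for (possibly fractional) positions `pos`
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    angles = pos[:, None] * inv_freq[None, :]     # (n, dim // 2)
    angles = torch.cat((angles, angles), dim=-1)  # (n, dim)
    return angles.cos(), angles.sin()


def as2d_rope(q, h, w, anchor=14):
    # q: (batch, heads, h * w, head_dim); head_dim must be divisible by 4
    # so it can be split between the two axes and rotated in pairs.
    *_, n, d = q.shape
    assert n == h * w and d % 4 == 0
    # Auto-scaling: rescale coordinates so any (h, w) grid spans the same
    # range as the (anchor, anchor) grid seen during training.
    ys = torch.arange(h).float() * (anchor / h)
    xs = torch.arange(w).float() * (anchor / w)
    grid_y = ys[:, None].expand(h, w).reshape(-1)  # (n,) y varies slowly
    grid_x = xs[None, :].expand(h, w).reshape(-1)  # (n,) x varies quickly
    half = d // 2
    cos_y, sin_y = axial_angles(grid_y, half)
    cos_x, sin_x = axial_angles(grid_x, half)
    qy, qx = q[..., :half], q[..., half:]
    qy = qy * cos_y + rotate_half(qy) * sin_y
    qx = qx * cos_x + rotate_half(qx) * sin_x
    return torch.cat((qy, qx), dim=-1)


# The same rotation would be applied to queries and keys before attention:
q = torch.randn(1, 12, 14 * 14, 64)  # tokens on a 14x14 grid
print(as2d_rope(q, 14, 14).shape)    # torch.Size([1, 12, 196, 64])
```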