Vision LLaMA - PyTorch
Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta. Paper: https://arxiv.org/abs/2403.00522
Install

```bash
$ pip install vision-llama
```
Usage
```python
import torch
from vision_llama.main import VisionLlama

# Example input: a batch of one 3-channel 224x224 image
x = torch.randn(1, 3, 224, 224)

# Create an instance of the VisionLlama model with the specified parameters
model = VisionLlama(
    dim=768, depth=12, channels=3, heads=12, num_classes=1000
)

# Run a forward pass and print the output
print(model(x))
```
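Given the num_classes argument, the forward pass presumably returns class logits of shape (batch, num_classes), which plug directly into a standard classification loss. Here is a minimal training-step sketch continuing from the snippet above; the labels tensor and optimizer choice are illustrative, not part of the library:

```python
import torch
import torch.nn.functional as F

# Hypothetical integer labels for the batch above (illustrative only)
labels = torch.randint(0, 1000, (1,))
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

logits = model(x)  # assumed shape: (1, 1000)
loss = F.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```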
License
MIT
Citation
```bibtex
@misc{chu2024visionllama,
    title={VisionLLaMA: A Unified LLaMA Interface for Vision Tasks},
    author={Xiangxiang Chu and Jianlin Su and Bo Zhang and Chunhua Shen},
    year={2024},
    eprint={2403.00522},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
Todo
- Implement the AS2DRoPE positional encoding (a plain 2D RoPE sketch is shown below)
- Implement the GSA attention mechanism
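As a starting point for the first item: AS2DRoPE extends rotary position embeddings to a 2D patch grid (plus auto-scaling for variable resolutions). Below is a minimal sketch of plain axial 2D RoPE only, without the auto-scaling step; the function axial_2d_rope and its signature are illustrative, not part of this package:

```python
import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Pair channel i with channel i + dim/2: (x1, x2) -> (-x2, x1)
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def axial_2d_rope(q: torch.Tensor, k: torch.Tensor, h: int, w: int, base: float = 10000.0):
    # q, k: (batch, heads, h*w, dim). Half the channel pairs rotate by the
    # row index, the other half by the column index.
    *_, n, dim = q.shape
    assert n == h * w and dim % 4 == 0
    device = q.device

    # One frequency per channel pair within each axis (dim // 4 pairs per axis)
    freqs = 1.0 / (base ** (torch.arange(0, dim // 2, 2, device=device).float() / (dim // 2)))

    ys, xs = torch.meshgrid(
        torch.arange(h, device=device).float(),
        torch.arange(w, device=device).float(),
        indexing="ij",
    )
    # Row-position angles for the first dim // 4 pairs, column-position
    # angles for the next dim // 4 pairs; duplicated to match rotate_half
    angles = torch.cat((ys.flatten()[:, None] * freqs, xs.flatten()[:, None] * freqs), dim=-1)
    angles = torch.cat((angles, angles), dim=-1)  # (n, dim)
    cos, sin = angles.cos(), angles.sin()

    q_rot = q * cos + rotate_half(q) * sin
    k_rot = k * cos + rotate_half(k) * sin
    return q_rot, k_rot

# Usage on a 14x14 patch grid with 64-dim attention heads
q = k = torch.randn(1, 12, 14 * 14, 64)
q, k = axial_2d_rope(q, k, h=14, w=14)
```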