Simba
A simple PyTorch + Zeta implementation of the paper "SiMBA: Simplified Mamba-based Architecture for Vision and Multivariate Time series".
Install
$ pip install simba-torch
Usage
import torch
from simba_torch.main import Simba

# A batch of one 3-channel 224x224 image
img = torch.randn(1, 3, 224, 224)

# Create the model
model = Simba(
    dim=4,            # Dimension of the transformer
    dropout=0.1,      # Dropout rate for regularization
    d_state=64,       # Dimension of the SSM state
    d_conv=64,        # Dimension of the convolutional layers
    num_classes=64,   # Number of output classes
    depth=8,          # Number of transformer layers
    patch_size=16,    # Size of the image patches
    image_size=224,   # Size of the input image
    channels=3,       # Number of input channels
    # use_pos_emb=True,  # Optionally enable positional embeddings
)

# Forward pass
out = model(img)
print(out.shape)
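As a rough guide to the `patch_size` / `image_size` / `channels` settings above, here is a small arithmetic sketch of how they determine the token sequence length, assuming a standard ViT-style patch embedding (an illustration only, not the internals of simba_torch):

```python
# Patching arithmetic for the hyperparameters used above.
# Assumes a conventional non-overlapping ViT-style patch embedding;
# simba_torch's exact internals may differ.
image_size = 224
patch_size = 16
channels = 3

patches_per_side = image_size // patch_size   # 14 patches along each axis
num_patches = patches_per_side ** 2           # 196 tokens per image
patch_dim = channels * patch_size ** 2        # 768 raw values per patch

print(patches_per_side, num_patches, patch_dim)
```

Note that `image_size` must be divisible by `patch_size` for this tiling to cover the image exactly.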
License
MIT
Download files
Source Distribution: simba_torch-0.0.4.tar.gz (5.7 kB)
Built Distribution
Hashes for simba_torch-0.0.4-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | d44712c96ab520b3f98e0e889c722d8afb0c1571d27ca17d5976c1c736b6d654
MD5 | 7e5e4d6d396c7c0c3e3b5691d4871674
BLAKE2b-256 | c4168f68d5ab37429bb25682c9c38bfd06ba1062727e863e13fc55f95bd3f45f