
Pure-PyTorch lightweight Mamba with multi-dilated causal conv front-end


lite-mamba


A minimal, pure-PyTorch version of Mamba with a multi-dilated causal depthwise conv front-end. No CUDA/Triton build needed; works on CPU or GPU with standard PyTorch ops.

It also includes a separate TensorFlow implementation of the same architectural variants so you can use TF when PyTorch/CUDA kernels are not compatible with your environment.

Install

pip install torch einops
pip install lite-mamba

TensorFlow path (optional):

pip install "lite-mamba[tensorflow]"

Usage

from lite_mamba import Mamba, PTCNMamba, STCNMamba, DPWCMamba
import torch

x = torch.randn(2, 128, 512)  # (batch, seq, d_model)
m = Mamba(d_model=512, d_conv=3, conv_dilations=(1,2,4,8))
y = m(x)
print(y.shape)  # (2, 128, 512)

Conv front-end variants

  • PTCNMamba: identical to Mamba; it mixes parallel dilated depthwise conv branches via learned softmax gates (see the sketch below).
  • STCNMamba: runs the same depthwise conv layers in sequence (no gating); each branch output feeds the next to create a deterministic dilation stack.
  • DPWCMamba: pairs each depthwise branch with a pointwise (1×1) conv before the gating mix, adding extra channel mixing without stacking more layers.

All variants expose the same constructor signature (d_model, d_state, conv_dilations, etc.) and streaming helpers (allocate_inference_cache, step). Swap them simply by changing the imported class name:

from lite_mamba import STCNMamba

m = STCNMamba(d_model=512, d_state=16, conv_dilations=(1, 2, 4))

Use DPWCMamba for richer channel interactions in each branch, and STCNMamba when you want a straightforward sequential dilation pipeline (e.g., for debugging or reproducing the behavior of stacked TCN layers).
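
Conceptually, the PTCNMamba front-end runs several causal depthwise convolutions with different dilations in parallel and mixes them with per-channel softmax gates. The sketch below illustrates only that idea; the class name and internals are assumptions, not the package's actual modules.

import torch
import torch.nn as nn

class GatedDilatedDWConv(nn.Module):
    # Illustrative sketch, not lite-mamba internals: parallel causal depthwise
    # convs with different dilations, mixed via per-channel softmax gates.
    def __init__(self, channels, kernel_size=3, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size, groups=channels,
                      dilation=d, padding=(kernel_size - 1) * d)  # left-pad, trim later for causality
            for d in dilations
        )
        self.gate_logits = nn.Parameter(torch.zeros(len(dilations), channels))

    def forward(self, x):  # x: (batch, channels, seqlen)
        L = x.shape[-1]
        outs = torch.stack([conv(x)[..., :L] for conv in self.branches])  # (n_branches, B, C, L)
        gates = torch.softmax(self.gate_logits, dim=0)                    # normalize over branches
        return (gates[:, None, :, None] * outs).sum(dim=0)               # gated weighted sum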

Baseline helper

BaselineMamba mirrors the upstream state-spaces/mamba block: a single depthwise causal convolution followed by the SSM parameter projection, selective scan recurrence, and streaming helpers. baseline_mamba is a thin functional alias that instantiates the class with the same defaults so you can reproduce the reference layout without duplicating constructor arguments.

from lite_mamba import BaselineMamba

m = BaselineMamba(d_model=512, d_conv=3)
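
If you prefer the functional alias, a call like the one below should yield an equivalent module; the exact call pattern is an assumption based on the description above, so check your installed version.

from lite_mamba import baseline_mamba

m = baseline_mamba(d_model=512, d_conv=3)  # assumed to construct a BaselineMamba with the default layout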

TensorFlow variants (separate path)

TensorFlow classes mirror the same algorithmic layouts:

  • TFBaselineMamba
  • TFPTCNMamba
  • TFSTCNMamba
  • TFDPWCMamba

import tensorflow as tf
from lite_mamba import TFPTCNMamba

x = tf.random.normal((2, 128, 512))  # (batch, seq, d_model)
m = TFPTCNMamba(d_model=512, d_conv=3, conv_dilations=(1, 2, 4, 8))
y = m(x)
print(y.shape)  # (2, 128, 512)

API quick reference

Mamba(d_model, d_state=16, d_conv=4, conv_dilations=(1,), expand=2, dt_rank="auto", dt_min=0.001, dt_max=0.1, dt_init="random", dt_scale=1.0, dt_init_floor=1e-4, conv_bias=True, bias=False, use_fast_path=False, layer_idx=None, device=None, dtype=None)

  • d_model (int, required): input/output embedding size.
  • d_state (int, default 16): SSM state dimension per channel. Larger gives longer memory; increases compute.
  • d_conv (int, default 4): depthwise conv kernel size for each branch.
  • conv_dilations (tuple[int], default (1,)): dilation per branch. Multiple values create parallel dilated convs; each branch looks back (d_conv-1)*dilation positions.
  • expand (float, default 2): inner width multiplier; sets d_inner = expand * d_model.
  • dt_rank (int or "auto", default "auto"): rank of delta projection. "auto" sets ceil(d_model/16).
  • dt_min, dt_max (float, defaults 1e-3 / 1e-1): log-uniform range for delta initialization.
  • dt_init ("random" | "constant", default "random") and dt_scale, dt_init_floor: control delta init magnitude/stability.
  • conv_bias (bool, default True): include bias in depthwise convs.
  • bias (bool, default False): include bias in input/output linear projections.
  • use_fast_path (bool): ignored in this pure-PyTorch build; kept for API compatibility.
  • layer_idx (int | None): identifier for streaming cache registration; required when using allocate_inference_cache + inference_params.
  • device, dtype: standard module factory kwargs.

Inference / streaming helpers

  • allocate_inference_cache(batch_size, max_seqlen, dtype=None): preallocates conv and SSM state buffers for step-wise decoding.
  • step(hidden_states, conv_state, ssm_state): single-token forward (expects hidden_states with shape (B, 1, d_model)).
  • forward(..., inference_params): if inference_params has cached states (with key_value_memory_dict and seqlen_offset), uses them for streaming.
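
A minimal single-layer decoding sketch using these helpers. The return values of allocate_inference_cache and step are assumed here to follow the upstream state-spaces/mamba layout (an output plus updated conv/SSM states), so treat this as a template rather than the exact API.

import torch
from lite_mamba import Mamba

m = Mamba(d_model=512, d_conv=3, conv_dilations=(1, 2, 4), layer_idx=0)
conv_state, ssm_state = m.allocate_inference_cache(batch_size=2, max_seqlen=256)  # assumed return values

x = torch.randn(2, 1, 512)  # one token at a time: (B, 1, d_model)
for _ in range(16):
    y, conv_state, ssm_state = m.step(x, conv_state, ssm_state)  # assumed return values
    x = y  # toy loop: feed this layer's output back as the next input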

Highlights

  • Multi-branch causal dilated convs (weighted sum via learned gates).
  • Pure Python: no custom C++/CUDA or Triton kernels.
  • Streaming support via per-branch conv states and SSM state caching.

Practical setups

  • Local modeling / small context: d_conv=3, conv_dilations=(1,2,4), d_state=8–16, expand=2.
  • Longer context: widen conv_dilations (e.g., (1,2,4,8,16)) or increase d_state to 32 (see the example after this list); expect higher memory/compute.
  • Streaming/AR decoding: call allocate_inference_cache once per layer, pass inference_params during forward; use step inside your generation loop.
  • Stability first: keep dt_min >= 1e-4 and dt_init_floor small; leave defaults unless you observe drift or exploding activations.
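
For example, a longer-context configuration along the lines of the bullets above:

from lite_mamba import Mamba

m = Mamba(d_model=512, d_state=32, d_conv=3, conv_dilations=(1, 2, 4, 8, 16))  # wider dilations + larger state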

Notes

  • Set different conv_dilations to adjust receptive field; keep kernels small (e.g., 3–5) to avoid excessive padding.
  • use_fast_path flag is ignored here (kept for API compatibility).
  • The reference selective scan is implemented in PyTorch for portability; faster fused kernels are omitted intentionally (a minimal sketch of the recurrence follows).
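
The recurrence it evaluates looks roughly like the sketch below; this is a generic reference formulation of selective scan (argument names and shapes are assumptions, not lite-mamba's internal function).

import torch

def selective_scan_ref(x, delta, A, B, C, D=None):
    # Sequential reference scan. Assumed shapes (illustrative only):
    #   x, delta: (batch, d_inner, seqlen)
    #   A:        (d_inner, d_state)
    #   B, C:     (batch, d_state, seqlen)
    b, d, L = x.shape
    n = A.shape[1]
    h = torch.zeros(b, d, n, dtype=x.dtype, device=x.device)  # SSM state
    ys = []
    for t in range(L):
        dt = delta[:, :, t]                                   # (b, d)
        dA = torch.exp(dt.unsqueeze(-1) * A)                  # discretized transition, (b, d, n)
        dBx = dt.unsqueeze(-1) * B[:, :, t].unsqueeze(1) * x[:, :, t].unsqueeze(-1)
        h = dA * h + dBx                                      # state update
        ys.append((h * C[:, :, t].unsqueeze(1)).sum(-1))      # readout, (b, d)
    y = torch.stack(ys, dim=-1)                               # (b, d, L)
    if D is not None:
        y = y + D.unsqueeze(0).unsqueeze(-1) * x              # skip connection
    return y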

License

Apache-2.0
