
Qwerky vLLM Models

A vLLM plugin for serving Qwerky AI's MambaInLlama hybrid models without the --trust-remote-code flag.

Installation

pip install vllm qwerky-vllm-models

Usage

After installing, serve Qwerky models with vLLM:

vllm serve QwerkyAI/Qwerky-Llama3.2-Mamba-3B-Llama3.3-70B-base-distill --max-model-len 4096

The plugin automatically registers the model architecture with vLLM on import.
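
Once the server is up, it exposes vLLM's standard OpenAI-compatible API. A minimal query sketch using the openai Python client (the port 8000 default and placeholder API key are vLLM server defaults, not something this plugin configures):

from openai import OpenAI

# vLLM's OpenAI-compatible server listens on port 8000 by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="QwerkyAI/Qwerky-Llama3.2-Mamba-3B-Llama3.3-70B-base-distill",
    prompt="The capital of France is",
    max_tokens=32,
)
print(completion.choices[0].text)

Since this checkpoint is a base distill, the example uses the completions endpoint rather than chat completions.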

Supported Models

  • QwerkyAI/Qwerky-Llama3.2-Mamba-3B-Llama3.3-70B-base-distill

How It Works

This package uses vLLM's plugin system (the vllm.general_plugins entry point) to register the MambaInLlama model architecture; a registration sketch follows the list below. This means:

  • No fork of vLLM required
  • No --trust-remote-code flag needed
  • Works with standard vLLM installation
  • Uses vLLM's native Triton-accelerated Mamba kernels
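
For orientation, here is a hedged sketch of what such a plugin looks like. The entry-point name, module path, and class name below are illustrative guesses, not necessarily this package's actual layout; only ModelRegistry.register_model and get_supported_archs are vLLM's own API:

# pyproject.toml declares the hook under the vllm.general_plugins
# entry-point group, roughly:
#
#   [project.entry-points."vllm.general_plugins"]
#   qwerky_models = "qwerky_vllm_models:register"   # hypothetical name

from vllm import ModelRegistry

def register() -> None:
    # Registering with a lazy "module:ClassName" string defers importing
    # the model code until vLLM actually builds this architecture.
    if "MambaInLlamaForCausalLM" not in ModelRegistry.get_supported_archs():
        ModelRegistry.register_model(
            "MambaInLlamaForCausalLM",
            "qwerky_vllm_models.modeling:MambaInLlamaForCausalLM",  # hypothetical path
        )

vLLM imports and calls every function registered under vllm.general_plugins at startup, which is why no fork and no --trust-remote-code flag are needed.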

Requirements

  • Python >= 3.10
  • vLLM >= 0.14.0
  • PyTorch >= 2.0.0

Changelog

0.2.9

  • Fixed weight loading: split the fused mha.in_proj into separate q/k/v projections (key-remapping sketch below)
  • Renamed mha.out_proj to o_proj for checkpoint compatibility
  • The model should now load all ~395 parameters instead of just 163
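
Illustratively, the fix amounts to remapping checkpoint keys at load time, splitting the fused projection along the output dimension. The key names and split sizes here are assumptions for the sketch:

import torch

def split_fused_in_proj(state_dict, prefix, q_size, k_size, v_size):
    # Hypothetical remap: replace "<prefix>.mha.in_proj.weight" with
    # separate q/k/v weights split along the output (row) dimension.
    fused = state_dict.pop(f"{prefix}.mha.in_proj.weight")
    q, k, v = torch.split(fused, [q_size, k_size, v_size], dim=0)
    state_dict[f"{prefix}.q_proj.weight"] = q
    state_dict[f"{prefix}.k_proj.weight"] = k
    state_dict[f"{prefix}.v_proj.weight"] = v
    # The companion rename from the same release: mha.out_proj -> o_proj.
    if f"{prefix}.mha.out_proj.weight" in state_dict:
        state_dict[f"{prefix}.o_proj.weight"] = state_dict.pop(
            f"{prefix}.mha.out_proj.weight")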

0.2.8

  • Fixed a dtype mismatch in the SSM scan: F.softplus/torch.exp compute in float32, and the result is now cast back to the original dtype (sketched below)
  • The mismatch caused an "expected BFloat16 but found Float" error in einsum
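
The pattern behind this fix, in sketch form (tensor values are illustrative): run the numerically sensitive ops in float32, then cast back so every einsum operand shares the model dtype.

import torch
import torch.nn.functional as F

x = torch.randn(4, 8, dtype=torch.bfloat16)

# softplus and exp are computed in float32 for numerical stability...
y = torch.exp(-F.softplus(x.float()))

# ...then cast back to the original dtype so a later einsum does not
# mix BFloat16 and Float operands.
y = y.to(x.dtype)
assert y.dtype == torch.bfloat16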

0.2.7

  • Fixed a tensor broadcasting bug in _ssm_scan: A.unsqueeze(0).unsqueeze(-1) -> A.unsqueeze(0).unsqueeze(2)
  • The bug caused a shape mismatch (8192 vs 16) during SSM discretization; see the shape sketch below
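
In shape terms (the dimensions match the error message but are otherwise illustrative), the two expressions place d_state on different axes:

import torch

d_inner, d_state = 8192, 16
A = torch.randn(d_inner, d_state)

bad  = A.unsqueeze(0).unsqueeze(-1)  # (1, d_inner, d_state, 1)
good = A.unsqueeze(0).unsqueeze(2)   # (1, d_inner, 1, d_state)

# Discretization broadcasts A against tensors that keep d_state in the
# last axis; only the corrected layout lines 16 up with 16 instead of
# forcing d_inner (8192) against d_state (16).
B = torch.randn(1, 1, 1, d_state)    # illustrative d_state-last operand
dB = good * B                        # (1, d_inner, 1, d_state) -- as intended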

0.2.6

  • Added the embed_input_ids method required by vLLM's VllmModelForTextGeneration interface
  • Its absence was the root cause of the "This model does not support --runner generate" error
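
In sketch form, the missing hook is a thin wrapper over the embedding table; the embed_tokens attribute name and the sizes below are illustrative:

import torch
import torch.nn as nn

class MambaInLlamaForCausalLM(nn.Module):  # heavily simplified stand-in
    def __init__(self, vocab_size=128256, hidden_size=4096):
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab_size, hidden_size)

    def embed_input_ids(self, input_ids: torch.Tensor) -> torch.Tensor:
        # vLLM's text-generation interface probes for this method;
        # without it the model fails the generate-runner check.
        return self.embed_tokens(input_ids)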

0.2.5

  • Fixed vLLM runner detection: added a MambaInLlamaMambaForCausalLM alias for HF config compatibility
  • Added proper protocol inheritance (HasInnerState, IsHybrid) from vllm.model_executor.models.interfaces
  • Fixed class variable type hints (ClassVar[Literal[True]]) for vLLM model inspection
  • Simplified model registration code
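
Put together, the class declaration after this release looks roughly like the following; the protocol names and ClassVar annotations come from the entries above, while the class body is elided:

from typing import ClassVar, Literal

import torch.nn as nn

from vllm.model_executor.models.interfaces import HasInnerState, IsHybrid

class MambaInLlamaForCausalLM(nn.Module, HasInnerState, IsHybrid):
    # vLLM inspects these class variables when classifying models, so
    # they are annotated ClassVar[Literal[True]] rather than plain bool.
    has_inner_state: ClassVar[Literal[True]] = True
    is_hybrid: ClassVar[Literal[True]] = True

# Alias matching the architecture name that appears in the HF config.
MambaInLlamaMambaForCausalLM = MambaInLlamaForCausalLM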

0.2.4

  • Complete architecture rewrite with explicit state cache management
  • Separate prefill and decode paths for Mamba layers
  • Grouped-head Mamba support (num_xb_head, num_C_head, repeat_group)
  • Pure PyTorch SSM implementation (preparing for vLLM Triton op integration)
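
For orientation, a minimal pure-PyTorch selective scan over one sequence, in the spirit of (though far simpler than) the implementation this entry describes; all shapes and names are illustrative:

import torch

def ssm_scan(u, delta, A, B, C):
    # u, delta: (L, d); A: (d, n); B, C: (L, n)  ->  y: (L, d)
    L, d = u.shape
    h = torch.zeros(d, A.shape[1], dtype=u.dtype)                  # SSM state
    ys = []
    for t in range(L):
        dA = torch.exp(delta[t, :, None] * A)                      # (d, n)
        dBu = delta[t, :, None] * B[t][None, :] * u[t, :, None]    # (d, n)
        h = dA * h + dBu                                           # state update
        ys.append((h * C[t][None, :]).sum(-1))                     # readout (d,)
    return torch.stack(ys)

y = ssm_scan(torch.randn(5, 8), torch.rand(5, 8),
             -torch.rand(8, 4), torch.randn(5, 4), torch.randn(5, 4))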

0.2.3

  • Fixed the d_xb default-value computation in the configuration
  • Removed unsupported device/dtype kwargs from RMSNorm calls

0.2.2

  • Fixed vLLM 0.14+ compatibility issues with Mamba ops API

0.2.1

  • Updated README, removed SFT model reference

0.2.0

  • Initial public release with vLLM plugin system integration

License

Apache 2.0
