vLLM plugin for Qwerky AI MambaInLlama hybrid models
Qwerky vLLM Models
A vLLM plugin for serving Qwerky AI's MambaInLlama hybrid models without the `--trust-remote-code` flag.
Installation
pip install vllm qwerky-vllm-models
Usage
After installing, serve Qwerky models with vLLM:
vllm serve QwerkyAI/Qwerky-Llama3.2-Mamba-3B-Llama3.3-70B-base-distill --max-model-len 4096
The plugin automatically registers the model architecture with vLLM on import.
Supported Models
QwerkyAI/Qwerky-Llama3.2-Mamba-3B-Llama3.3-70B-base-distill
How It Works
This package uses vLLM's plugin system (the `vllm.general_plugins` entry point) to register the MambaInLlama model architecture. This means:
- No fork of vLLM required
- No `--trust-remote-code` flag needed
- Works with a standard vLLM installation
- Uses vLLM's native Triton-accelerated Mamba kernels
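A `vllm.general_plugins` entry point resolves to a callable that vLLM invokes at startup. A minimal registration hook might look like the sketch below; the architecture string and module path are assumptions for illustration, not taken from this package's source:

```python
# Hypothetical sketch of a vllm.general_plugins registration hook.
# The entry point declared in pyproject.toml would point at this function,
# and vLLM calls it during startup.
def register():
    from vllm import ModelRegistry  # imported lazily so plugin discovery stays cheap

    # Map the architecture string from the model's config.json to the
    # implementing class. Both names here are assumed, not confirmed.
    if "MambaInLlamaForCausalLM" not in ModelRegistry.get_supported_archs():
        ModelRegistry.register_model(
            "MambaInLlamaForCausalLM",
            "qwerky_vllm_models.modeling:MambaInLlamaForCausalLM",
        )
```

Because registration happens through the entry point, no changes to vLLM itself are required and the model resolves like any in-tree architecture.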
Requirements
- Python >= 3.10
- vLLM >= 0.14.0
- PyTorch >= 2.0.0
Changelog
0.2.18
- Added extensive debug logging to diagnose attention layer shape issue
- Logs: input shape, batch_size, seq_len, Q/K/V shapes, rotary output, KV cache shapes
0.2.17
- Added debug logging in `MHADecoderLayer` to trace tensor shapes
0.2.16
- Fixed attention layer to handle vLLM's flattened 2D tensor format
- vLLM passes `[total_tokens, hidden]`, but attention needs `[batch, seq, hidden]`
- Added automatic batch dimension handling in `MHADecoderLayer`
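The shape handling described in this entry can be sketched as follows (NumPy stands in for torch here, and the sizes are illustrative):

```python
import numpy as np

# vLLM hands decoder layers a flattened [total_tokens, hidden] tensor.
hidden_states = np.zeros((6, 64))  # e.g. 6 tokens total across sequences, hidden=64

# Attention code written for [batch, seq, hidden] needs a temporary batch axis.
squeezed = hidden_states.ndim == 2
if squeezed:
    hidden_states = hidden_states[np.newaxis, ...]  # -> [1, total_tokens, hidden]

assert hidden_states.shape == (1, 6, 64)

# ... attention would run here ...

if squeezed:
    hidden_states = hidden_states[0]  # restore [total_tokens, hidden]
```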
0.2.15
- Fixed attention layer KV cache shape mismatch
- Removed incorrect tensor transpositions in KV cache assignment
0.2.14
- Fixed `mamba_config.json` loading: removed the `local_files_only=True` restriction
- Now properly downloads `mamba_config.json` from the HuggingFace Hub if not cached
- Added more detailed logging for config loading
0.2.13
- CRITICAL FIX: Load `mamba_config.json` for `attn_layers`, `d_inner`, `d_xb`
- MambaInLlama models store Mamba-specific config in a separate `mamba_config.json` file; the main `config.json` has `model_type: "llama"` without Mamba params
- Fixed: the model was treating ALL layers as Mamba (`attn_layers=[]`) because this config wasn't loaded
- Added better logging for weight-loading diagnostics
- Attention layers at indices `[3, 8, 13, 18, 23, 27]` are now properly recognized
0.2.12
- CRITICAL FIX: Corrected the `d_xb` default to match qwerky-distill PR #81: `d_xb = num_key_value_heads * head_dim` (GQA-style, e.g. 8×128 = 1024 for 8B)
- Fixed in_proj split: `[z(d_inner), x(d_inner), B(d_xb), C(d_xb), dt(dt_rank)]`
- Added repeat_kv expansion for C (same as B) in the Mamba1 architecture
- Fixed head count: `num_heads = d_inner // d_state` after B/C expansion
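The dimension bookkeeping behind this fix can be checked with plain arithmetic. The values below describe a hypothetical 8B-scale config and the `dt_rank` convention is an assumption, not read from the released checkpoint:

```python
# Illustrative config values for a hypothetical 8B-scale model.
hidden_size = 4096
num_key_value_heads = 8
head_dim = 128
d_state = 16

d_inner = hidden_size                  # per 0.2.11: d_inner defaults to hidden_size
d_xb = num_key_value_heads * head_dim  # GQA-style: 8 * 128 = 1024
num_heads = d_inner // d_state         # head count after B/C expansion: 256

# in_proj output is split as [z, x, B, C, dt]; dt_rank here follows the common
# Mamba convention hidden_size // 16, which is an assumption.
dt_rank = hidden_size // 16
split_sizes = [d_inner, d_inner, d_xb, d_xb, dt_rank]
assert sum(split_sizes) == 2 * d_inner + 2 * d_xb + dt_rank
```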
0.2.11
- CRITICAL FIX: Changed the `d_inner` default from `intermediate_size` to `hidden_size`
- MambaInLlama Mamba layers use `d_inner = hidden_size`, not `intermediate_size`
- Fixed the `d_xb` default: `hidden_size // 16` (was `hidden_size // 4`)
- This fixes the shape mismatch for all Mamba layer weights (`A_log`, `D`, `conv1d`, `dt_proj`, `in_proj`, `out_proj`)
0.2.10
- Added debug logging to weight loading to diagnose parameter mapping issues
- Logs first 20 model params, first 20 checkpoint weights, and all skipped weights
0.2.9
- Fixed weight loading: split the fused `mha.in_proj` into separate q/k/v projections
- Renamed `mha.out_proj` to `o_proj` for checkpoint compatibility
- Should now load all ~395 parameters instead of just 163
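Splitting a fused projection into separate q/k/v weights, as in this fix, can be sketched like so. The `[q; k; v]` row ordering and all sizes here are assumptions for illustration, not taken from the checkpoint:

```python
import numpy as np

hidden = 64
num_heads, num_kv_heads, head_dim = 4, 2, 16  # GQA: fewer k/v heads than q heads

q_dim = num_heads * head_dim      # 64
kv_dim = num_kv_heads * head_dim  # 32

# Hypothetical fused weight with rows ordered [q; k; v].
fused = np.zeros((q_dim + 2 * kv_dim, hidden))

q_w = fused[:q_dim]
k_w = fused[q_dim:q_dim + kv_dim]
v_w = fused[q_dim + kv_dim:]

assert q_w.shape == (64, 64)
assert k_w.shape == (32, 64)
assert v_w.shape == (32, 64)
```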
0.2.8
- Fixed dtype mismatch in the SSM scan: `F.softplus`/`torch.exp` compute in float32; the result is now cast back to the original dtype
- The mismatch caused an "expected BFloat16 but found Float" error in einsum
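The dtype fix follows a common mixed-precision pattern: run numerically sensitive ops in float32, then cast back so downstream ops see a consistent dtype. A minimal sketch with NumPy standing in for torch:

```python
import numpy as np

dt = np.linspace(-2.0, 2.0, 8).astype(np.float16)

# Softplus in float32 for numerical range, mirroring F.softplus/torch.exp.
delta = np.log1p(np.exp(dt.astype(np.float32)))

# Without this cast, mixing the float32 result into a half-precision einsum
# raises a dtype mismatch; casting restores the original dtype.
delta = delta.astype(dt.dtype)

assert delta.dtype == np.float16
```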
0.2.7
- Fixed tensor broadcasting bug in `_ssm_scan`: `A.unsqueeze(0).unsqueeze(-1)` -> `A.unsqueeze(0).unsqueeze(2)`
- The bug caused a shape mismatch (8192 vs 16) during SSM discretization
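The broadcasting fix comes down to which axis receives the new singleton dimension. Sketched with NumPy and illustrative sizes:

```python
import numpy as np

d_inner, d_state = 8192, 16
A = np.zeros((d_inner, d_state))

# Buggy: A.unsqueeze(0).unsqueeze(-1) puts the singleton on the wrong axis,
# leaving d_state in a position where it broadcasts against d_inner.
wrong = A[np.newaxis, :, :, np.newaxis]  # (1, 8192, 16, 1)

# Fixed: A.unsqueeze(0).unsqueeze(2) keeps d_state on the last axis.
right = A[np.newaxis, :, np.newaxis, :]  # (1, 8192, 1, 16)

assert wrong.shape == (1, 8192, 16, 1)
assert right.shape == (1, 8192, 1, 16)
```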
0.2.6
- Added the `embed_input_ids` method required by vLLM's `VllmModelForTextGeneration` interface
- This missing method was the root cause of the "This model does not support `--runner generate`" error
0.2.5
- Fixed vLLM runner detection: added a `MambaInLlamaMambaForCausalLM` alias for HF config compatibility
- Added proper protocol inheritance (`HasInnerState`, `IsHybrid`) from `vllm.model_executor.models.interfaces`
- Fixed class-variable type hints (`ClassVar[Literal[True]]`) for vLLM model inspection
- Simplified model registration code
0.2.4
- Complete architecture rewrite with explicit state cache management
- Separate prefill and decode paths for Mamba layers
- Grouped-head Mamba support (`num_xb_head`, `num_C_head`, `repeat_group`)
- Pure PyTorch SSM implementation (preparing for vLLM Triton op integration)
0.2.3
- Fixed the `d_xb` default value computation in the configuration
- Removed unsupported `device`/`dtype` kwargs from RMSNorm calls
0.2.2
- Fixed vLLM 0.14+ compatibility issues with Mamba ops API
0.2.1
- Updated README, removed SFT model reference
0.2.0
- Initial public release with vLLM plugin system integration
License
Apache 2.0
File details
Details for the file qwerky_vllm_models-0.2.18.tar.gz.
File metadata
- Download URL: qwerky_vllm_models-0.2.18.tar.gz
- Upload date:
- Size: 19.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `79b9a91db2b889e67db03a3d3159f42ca2d8da287c447eb91fc9ef3bdcb0d649` |
| MD5 | `0ba5c84e3066f4f2093a15f715eff23c` |
| BLAKE2b-256 | `1d262a40948886291cee4984f7bc127028d53831dda3fbb36976d85a9532a96f` |
File details
Details for the file qwerky_vllm_models-0.2.18-py3-none-any.whl.
File metadata
- Download URL: qwerky_vllm_models-0.2.18-py3-none-any.whl
- Upload date:
- Size: 19.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `3338e4cb26c4b864caac8fa94b40cb1025ba524cf549e097b044858b75148e4e` |
| MD5 | `b6374663646249865848cf195d65e911` |
| BLAKE2b-256 | `22d8269c9e72a4acf7b82f5e263c3055e352d902017b58ea0834b172e75075ed` |