
Qwerky vLLM Models

vLLM plugin for serving Qwerky AI's MambaInLlama hybrid models without the --trust-remote-code flag.

Zero extra dependencies: it uses vLLM's native Mamba ops, so no mamba_ssm or causal_conv1d compilation is required.

Installation

pip install vllm qwerky-vllm-models

That's it! No compilation, no CUDA version conflicts.

Usage

After installing, serve Qwerky models with vLLM directly:

# No --trust-remote-code needed!
vllm serve QwerkyAI/Qwerky-Llama3.1-Mamba-8B-Llama3.3-70B-base-distill-sft --max-model-len 4096

The plugin automatically registers the model architectures with vLLM on import.
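Once the server is up, you can query it through vLLM's OpenAI-compatible HTTP API. The sketch below is a minimal standard-library client; the /v1/chat/completions path and port 8000 are vLLM's defaults, and build_payload / chat are hypothetical helper names for this example, not part of this package:

```python
import json
import urllib.request

MODEL = "QwerkyAI/Qwerky-Llama3.1-Mamba-8B-Llama3.3-70B-base-distill-sft"

def build_payload(prompt, model=MODEL, max_tokens=128):
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt, url="http://localhost:8000/v1/chat/completions"):
    """Send the prompt to the running vllm serve instance and
    return the assistant's reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same endpoint also works with the official openai Python client by pointing its base_url at the server.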

Supported Models

  • QwerkyAI/Qwerky-Llama3.1-Mamba-8B-Llama3.3-70B-base-distill-sft (8B, instruction-tuned)
  • QwerkyAI/Qwerky-Llama3.2-Mamba-3B-Llama3.3-70B-base-distill (3B, base)

How It Works

This package uses vLLM's plugin system (the vllm.general_plugins entry point) to register the MambaInLlama model architecture automatically whenever vLLM starts with the package installed. This means:

  1. No fork of vLLM needed
  2. No --trust-remote-code flag required
  3. Works with standard vLLM installation
  4. Uses vLLM's native Triton-accelerated Mamba kernels

Requirements

  • Python >= 3.10
  • vLLM >= 0.14.0
  • PyTorch >= 2.0.0

License

Apache 2.0


