TurboQuant KV cache compression for vLLM — fused Triton kernels, 3.76x compression, 3.7x faster decode on RTX 4090


turboquant-vllm

TurboQuant KV cache compression as a drop-in vLLM plugin. 3.76x compression, near-identical output quality, one CLI flag to enable.

First open-source TurboQuant implementation — paper to working vLLM plugin in 72 hours.

Install

pip install "turboquant-vllm[vllm]"

Or with uv:

uv add turboquant-vllm --extra vllm

Quick Start (vLLM)

The TQ4 attention backend registers automatically via vLLM's plugin system:

vllm serve allenai/Molmo2-4B --attention-backend CUSTOM

No code changes required. The plugin compresses KV cache pages to 68 bytes/token/head (vs 256 bytes FP16).
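
The byte math behind those figures, assuming head_dim=128 (as in the HuggingFace example below) and one fp32 norm per vector, per the formats table further down:

head_dim = 128
fp16_bytes = head_dim * 2      # 2 bytes per fp16 coordinate -> 256 bytes
tq4_bytes = head_dim // 2 + 4  # two 4-bit codes per byte + 4-byte fp32 norm -> 68 bytes
print(fp16_bytes / tq4_bytes)  # ~3.76, the advertised compression ratio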

Quick Start (HuggingFace)

from transformers import DynamicCache
from turboquant_vllm import CompressedDynamicCache

cache = DynamicCache()
compressed = CompressedDynamicCache(cache, head_dim=128, bits=4)

# Pass cache (not the wrapper) to model.generate()
# Compression happens transparently on every cache.update()
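
A minimal end-to-end sketch of the pattern above. The model choice and head_dim are illustrative placeholders, and passing a Cache via past_key_values is the standard transformers mechanism rather than anything specific to this package:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache
from turboquant_vllm import CompressedDynamicCache

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)

cache = DynamicCache()
compressed = CompressedDynamicCache(cache, head_dim=64, bits=4)  # head_dim must match the model

prompt = tok("The capital of France is", return_tensors="pt").to(model.device)
# Pass the inner cache, per the note above; the wrapper compresses on update().
out = model.generate(**prompt, past_key_values=cache, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))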

Benchmark Results

Molmo2-4B (bfloat16, 36 layers) on RTX 4090 — 11K visual tokens from 2fps video + 256 generation tokens:

| Mode | KV Cache | Compression | Output Quality | Overhead (vs FP16) |
| --- | --- | --- | --- | --- |
| FP16 baseline | 1,639 MiB | 1.0x | -- | -- |
| TQ3 (3-bit) | 845 MiB | 1.94x | ~95% cosine similarity | 2.35x |
| TQ4 (full dequant) | 435 MiB | 3.76x | ~97% cosine similarity | 3.36x |
| TQ4 (incremental) | 435 MiB | 3.76x | ~97% cosine, 100+ matching tokens | 1.78x |

How It Works

Implements Google's TurboQuant algorithm (Zandieh et al., ICLR 2025; see the Citation section below). A NumPy sketch of the encode/decode path follows the list:

  1. Random orthogonal rotation maps each KV vector onto coordinates that follow a known Beta distribution
  2. Lloyd-Max scalar quantization finds optimal centroids for that distribution at 3-4 bits per coordinate
  3. Nibble packing stores two 4-bit indices per byte for 3.76x compression
  4. Incremental dequantization only decompresses new tokens each decode step, keeping overhead at 1.78x
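
For intuition, here is a compact sketch of steps 1-3 on a single head_dim=128 vector. The centroid grid, the seed, and all names are illustrative stand-ins; the package itself fits true Lloyd-Max centroids to the post-rotation distribution and runs fused Triton kernels instead:

import numpy as np

d = 128
rng = np.random.default_rng(0)

# Step 1: random orthogonal rotation (QR of a Gaussian matrix).
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

# Stand-in codebook: 16 levels for 4-bit codes, spanning ~4 standard
# deviations (unit-vector coordinates have std ~ 1/sqrt(d)). Real Lloyd-Max
# centroids minimize MSE under the known post-rotation distribution.
span = 4.0 / np.sqrt(d)
centroids = np.linspace(-span, span, 16)

def encode(v):
    norm = np.linalg.norm(v)          # kept as an fp32 side value
    r = (Q @ v) / norm                # rotate, normalize to the unit sphere
    # Step 2: scalar-quantize each coordinate to its nearest centroid.
    idx = np.abs(r[:, None] - centroids[None, :]).argmin(axis=1).astype(np.uint8)
    # Step 3: nibble packing -- two 4-bit indices per byte.
    packed = (idx[0::2] << 4) | idx[1::2]   # 64 bytes for d=128
    return packed, np.float32(norm)

def decode(packed, norm):
    idx = np.empty(d, dtype=np.uint8)
    idx[0::2] = packed >> 4
    idx[1::2] = packed & 0x0F
    return norm * (Q.T @ centroids[idx])    # undo rotation, restore scale

v = rng.standard_normal(d).astype(np.float32)
packed, norm = encode(v)
v_hat = decode(packed, norm)
cos = float(v @ v_hat / (np.linalg.norm(v) * np.linalg.norm(v_hat)))
print(cos)  # close to 1.0

The 64 packed bytes plus the 4-byte norm are the 68 bytes/token/head quoted above. Step 4 then amounts to keeping already-decoded vectors around and running decode() only over tokens appended since the previous decode step.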

What Gets Compressed

| Data | Compressed | Format |
| --- | --- | --- |
| Key cache vectors | Yes | uint8 nibble-packed indices + fp32 norms |
| Value cache vectors | Yes | uint8 nibble-packed indices + fp32 norms |
| Rotation matrices | No | Generated once per layer from a fixed seed (sketched below) |
| Lloyd-Max codebook | No | Computed once, shared across all layers |
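
Seeded generation is why the rotation matrices cost no cache space: compressor and decompressor regenerate the identical matrix on demand. A sketch of the idea (the seeding scheme here is illustrative, not the package's actual one):

import torch

def rotation_for_layer(layer_idx: int, head_dim: int = 128) -> torch.Tensor:
    # Same seed on both sides of the cache -> identical matrix, zero storage.
    g = torch.Generator().manual_seed(layer_idx)
    a = torch.randn(head_dim, head_dim, generator=g)
    q, r = torch.linalg.qr(a)
    # Fix QR's sign ambiguity so the result is fully deterministic.
    return q * torch.sign(torch.diagonal(r))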

Roadmap

  • Core TurboQuant algorithm (Lloyd-Max, MSE quantizer, compressors)
  • CompressedDynamicCache with incremental dequantization
  • vLLM TQ4 attention backend plugin
  • Fused Triton kernels (17.8x Q@K^T speedup, Flash Attention fusion)
  • Container image with turboquant-vllm baked in
  • Full Flash Attention fusion with fp32 online softmax
  • SageAttention-style INT8 path

Documentation

  • Architecture -- Module map, dependency DAG, data flow diagrams
  • Roadmap -- Detailed implementation status and experiment results
  • Development Guide -- Setup, build, test, lint commands

Citation

@inproceedings{zandieh2025turboquant,
  title={TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate},
  author={Zandieh, Amir and Han, Insu and Daliri, Majid and Karbasi, Amin},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2025}
}

License

Apache 2.0
