
FADE

Requires Python 3.10+.

Frequency-Adaptive Decay Encoding — drop-in KV cache compression for HuggingFace transformers. Shrinks the KV cache 3–5× with near-baseline quality.

from fade import FadeConfig, create_tiered_cache

cache = create_tiered_cache(model, config=FadeConfig.safe())
out = model.generate(input_ids, past_key_values=cache, max_new_tokens=256)

Works with model.generate() — greedy, sampling, beam search. No manual decode loop needed.

How it works

Tokens live in tiers based on age and attention importance:

| Tier | What's stored | When |
| --- | --- | --- |
| FP16 | Full precision | First N_SINK tokens + last RECENT_WINDOW tokens |
| INT4 | Bit-packed 4-bit | Middle-aged tokens (the bulk of the cache) |
| INT2 | Grouped 2-bit | Optional deeper compression (lossy) |
| PQ | Product-quantized codes | ~2 bits/element via trained codebook (Phase 3) |
| Evicted | Nothing | Dropped when INT4_BUDGET is finite |
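The INT4 tier boils down to symmetric per-row quantization plus nibble packing. A minimal standalone sketch of that idea (an illustration only, not FADE's actual quant.py, which also handles grouped INT2):

```python
import torch

def quantize_int4_symmetric(x: torch.Tensor):
    # Per-row symmetric scale: map [-max, max] onto the INT4 range [-7, 7].
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    q = (x / scale).round().clamp(-8, 7).to(torch.int8)
    # Bit-pack two 4-bit values per byte (offset by 8 to make them unsigned).
    u = (q + 8).to(torch.uint8)
    packed = u[..., 0::2] | (u[..., 1::2] << 4)
    return packed, scale

def dequantize_int4_symmetric(packed: torch.Tensor, scale: torch.Tensor):
    lo = (packed & 0x0F).to(torch.int8) - 8     # low nibble = even channels
    hi = (packed >> 4).to(torch.int8) - 8       # high nibble = odd channels
    q = torch.stack((lo, hi), dim=-1).flatten(-2)
    return q.to(scale.dtype) * scale

k = torch.randn(4, 64)                           # toy K tensor: 4 tokens, head_dim 64
packed, scale = quantize_int4_symmetric(k)       # half the bytes of int8, 1/4 of fp16
k_hat = dequantize_int4_symmetric(packed, scale)
```

Per-element error is bounded by half a quantization step, which is why the mid-tier stays near-lossless for attention.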

When tokens are evicted, surviving K tensors are un-RoPE'd at old positions and re-RoPE'd with contiguous StreamingLLM positions.
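The re-RoPE step works because RoPE rotations compose: undoing position p and re-applying position p' is a single rotation by p' − p. A sketch of that identity using the interleaved-pair RoPE convention (an illustration, not FADE's rope.py, which dispatches across multiple schemes):

```python
import torch

def rope_rotate(x, positions, base=10000.0):
    # Apply vanilla interleaved RoPE at the given positions; x: [seq, head_dim].
    d = x.shape[-1]
    inv_freq = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)
    ang = positions[:, None].float() * inv_freq[None, :]
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

k = torch.randn(6, 64)
old_pos = torch.tensor([0, 1, 2, 7, 8, 9])   # positions 3-6 were evicted
new_pos = torch.arange(6)                    # contiguous StreamingLLM positions

# Un-RoPE at old positions, then re-RoPE at new ones ...
k_two_step = rope_rotate(rope_rotate(k, -old_pos), new_pos)
# ... equals one rotation by the position delta.
k_delta = rope_rotate(k, new_pos - old_pos)
```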

Install

From PyPI:

pip install fade-kv
pip install fade-kv[server]  # adds fade-server CLI

From source (development):

python -m venv .venv
.\.venv\Scripts\Activate.ps1  # Windows PowerShell; on Linux/macOS: source .venv/bin/activate
pip install torch  # match your CUDA version: https://pytorch.org/get-started/locally/
pip install -e ".[dev]"

Optional extras: pip install fade-kv[cuda] (accelerate), fade-kv[eval] (datasets), fade-kv[codebook] (scikit-learn for PQ).

Quick start

Presets

from fade import FadeConfig, create_tiered_cache

# Safe: ~3-4x compression, 100% greedy match. No eviction.
cache = create_tiered_cache(model, config=FadeConfig.safe())

# Balanced: ~5x compression with H2O eviction.
cache = create_tiered_cache(model, config=FadeConfig.balanced())

# Aggressive: ~7-8x compression. Validate on your workload first.
cache = create_tiered_cache(model, config=FadeConfig.aggressive())

Custom config

cache = create_tiered_cache(model, config=FadeConfig(
    phase="2",
    n_sink=4,
    recent_window=64,
    int4_budget=400,
    eviction_policy="h2o",       # "h2o", "ema", "position", or "learned"
    middle_k_bits=4,             # K stays INT4 (outlier-sensitive)
    middle_v_bits=2,             # V at INT2 (~30% more compression)
))

Rotated 2-bit backend (~6× compression)

from fade import FadeConfig, create_tiered_cache
from fade.backends import get_backend

cache = create_tiered_cache(model, config=FadeConfig.safe(),
    quant_backend=get_backend("rotated", head_dim=64, bits=2))

Random orthogonal rotation spreads per-channel outliers before quantization, making 2-bit viable. Uses native PyTorch — no external dependencies.
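A toy demonstration of why the rotation helps, assuming a midrise 2-bit quantizer and synthetic keys with one outlier channel (this is illustrative only, not the library's backend):

```python
import torch

torch.manual_seed(0)
head_dim = 64

def quant_2bit(x):
    # Midrise 2-bit quantizer: 4 levels {-1.5, -0.5, 0.5, 1.5} * per-row scale.
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 1.5
    q = (torch.round(x / scale - 0.5) + 0.5).clamp(-1.5, 1.5)
    return q * scale

# Keys with one outlier channel -- the classic failure mode for low-bit KV quant:
# the outlier inflates the scale and crushes every other channel into noise.
x = torch.randn(256, head_dim)
x[:, 0] *= 20.0

# Random orthogonal rotation via QR of a Gaussian matrix (native PyTorch).
rot, _ = torch.linalg.qr(torch.randn(head_dim, head_dim))

mse_plain = ((x - quant_2bit(x)) ** 2).mean()
mse_rot = ((x - quant_2bit(x @ rot) @ rot.T) ** 2).mean()   # rotate, quantize, rotate back
```

Because the rotation is orthogonal, attention scores are preserved up to quantization error, while the outlier's energy is spread across all 64 channels.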

Manual decode with tier reassignment

from fade import FadeConfig, create_tiered_cache
from fade.patch import forward_with_tracking, load_model
from fade.policy import reassign_tiers
from fade.tracker import AttentionTracker

model, tokenizer = load_model("Qwen/Qwen2.5-3B-Instruct", attn_impl="auto", need_attentions=True)
cache = create_tiered_cache(model, config=FadeConfig.balanced())
tracker = AttentionTracker(num_layers=model.config.num_hidden_layers)

out = forward_with_tracking(model, input_ids, cache, tracker=tracker)
next_token = out.logits[:, -1:].argmax(dim=-1)  # greedy pick
for step in range(max_tokens):
    out = forward_with_tracking(model, next_token, cache, tracker=tracker)
    next_token = out.logits[:, -1:].argmax(dim=-1)
    if (step + 1) % 64 == 0:  # refresh tier assignments every 64 decode steps
        reassign_tiers(cache, tracker, model.config.num_hidden_layers)
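The tracker's job is to keep a running importance score per cached position. An illustrative exponential-moving-average scorer in the spirit of AttentionTracker (a hypothetical sketch, not the library class): newly appended keys start at zero and old scores decay each step.

```python
import torch

class EmaTracker:
    """Per-position EMA of attention mass (a sketch, not fade.tracker.AttentionTracker)."""
    def __init__(self, decay=0.9):
        self.decay = decay
        self.score = None

    def update(self, attn_step):
        # attn_step: [seq_len] attention paid by the newest query to each cached key.
        if self.score is None:
            self.score = attn_step.clone()
            return self.score
        grown = attn_step.shape[0] - self.score.shape[0]   # keys appended this step
        self.score = torch.cat([self.score, torch.zeros(grown)])
        self.score = self.decay * self.score + (1 - self.decay) * attn_step
        return self.score

tracker = EmaTracker()
for t in range(4, 8):               # the cache grows by one key per decode step
    scores = tracker.update(torch.softmax(torch.randn(t), dim=-1))
```

Periodic reassignment then demotes low-scoring positions to cheaper tiers.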

Eviction policies

| Policy | Quality | Speed | Needs attention? |
| --- | --- | --- | --- |
| h2o | Best | Normal | Yes (prefill only) |
| ema | Good | Normal | Yes (decode only) |
| position | Fair | Fast | No |
| learned | Good* | Fast | No |

*Learned policy requires a trained checkpoint: python scripts/train_eviction_mlp.py
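The h2o policy follows the heavy-hitter idea: keep the positions that accumulated the most attention mass during prefill, plus the recent window. A sketch of that selection rule (illustrative, not FADE's policy.py):

```python
import torch

def h2o_keep_mask(attn_weights, budget, recent_window):
    # attn_weights: [num_queries, seq_len] softmax attention from prefill.
    # Heavy-hitter score = total attention mass each key position received.
    seq_len = attn_weights.shape[-1]
    score = attn_weights.sum(dim=0)
    keep = torch.zeros(seq_len, dtype=torch.bool)
    keep[-recent_window:] = True                      # always keep the recent window
    score = score.masked_fill(keep, float("-inf"))    # don't double-count recents
    top = score.topk(min(budget, seq_len - recent_window)).indices
    keep[top] = True                                  # plus the top-`budget` heavy hitters
    return keep

w = torch.softmax(torch.randn(8, 128), dim=-1)        # toy prefill attention
mask = h2o_keep_mask(w, budget=16, recent_window=32)  # keeps 48 of 128 positions
```

Everything outside the mask is a candidate for eviction once the INT4 budget is exceeded.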

Supported models

FADE auto-detects the RoPE scheme from the model config:

  • Qwen2 / Qwen3 — vanilla RoPE, GQA
  • Llama / Llama-3.1 — vanilla + frequency-dependent scaling
  • Mistral — vanilla RoPE, sliding-window
  • Phi-3 — vanilla RoPE
  • Gemma-2 — vanilla RoPE
  • Gemma 4 — proportional RoPE with partial_rotary_factor + per-layer-type dispatch
  • Falcon — ALiBi (non-RoPE; re-RoPE is a no-op)
  • Qwen 3.5 / 3.6 — hybrid DeltaNet + softmax attention. FADE auto-detects layer_types and skips DeltaNet layers (only full-attention layers are tiered).

RoPE scaling types: linear, llama3, ntk, dynamic, yarn, proportional. Non-RoPE models (ALiBi, Bloom, MPT) work via the NoRope sentinel.

Batching

Two modes:

  • Shared-tier (default): all rows share positions and tier decisions. For lockstep decoding.
  • Per-sequence (apply_tier_assignment_per_sequence): each row gets independent [B, S] tiers. For continuous-batching where sequences diverge.
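The difference is easiest to see in tensor shapes. A toy position-only assignment with hypothetical tier ids (FADE's real tier enum and assignment API may differ):

```python
import torch
import torch.nn.functional as F

# Hypothetical tier ids for illustration only.
TIER_FP16, TIER_INT4, TIER_EVICT = 0, 1, 2

def position_tiers(seq_len, n_sink, recent_window):
    # Position-only rule for one row: sink + recent tokens stay FP16, rest INT4.
    tiers = torch.full((seq_len,), TIER_INT4)
    tiers[:n_sink] = TIER_FP16
    tiers[seq_len - recent_window:] = TIER_FP16
    return tiers

# Shared-tier: one [S] assignment broadcast to every batch row (lockstep decoding).
shared = position_tiers(16, n_sink=2, recent_window=4).unsqueeze(0).expand(3, -1)

# Per-sequence: rows of different effective length get independent [B, S] tiers,
# with padding positions treated as evicted.
lengths = [16, 10, 12]
per_seq = torch.stack([
    F.pad(position_tiers(n, 2, 4), (0, 16 - n), value=TIER_EVICT)
    for n in lengths
])
```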

Performance

  • Fused INT4 FlashAttention kernel — single Triton kernel reads packed INT4 K/V, computes attention with online softmax, writes fp16 output. Never materializes fp16 K/V. 4.9× faster than the old dequant+SDPA path, within 1.4× of pure fp16 FlashAttention on RTX 3060.
  • Pre-allocated FP16 buffer — doubling buffer eliminates torch.cat on every decode step.
  • torch.compile — cache.enable_compile() wraps _materialize between graph-break boundaries.
  • Dequant-cache age eviction — setting cache.max_dequant_age = N periodically refreshes cached dequant buffers.
  • Benchmarks — python benchmarks/tps.py (decode throughput), python benchmarks/divergence.py (quality).
# Use the fused kernel directly:
from fade.kernels.fused_int4_attn import fused_int4_sdpa
out = fused_int4_sdpa(q, k_packed, k_scale, v_packed, v_scale)
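The doubling-buffer trick replaces a torch.cat on every decode step (an O(n) copy per token) with amortized O(1) appends into pre-allocated storage. A minimal sketch of the idea, not the library's actual buffer class:

```python
import torch

class DoublingKVBuffer:
    """Append-only FP16 buffer that doubles capacity instead of torch.cat per step."""
    def __init__(self, head_dim, init_capacity=64, dtype=torch.float16):
        self.buf = torch.empty(init_capacity, head_dim, dtype=dtype)
        self.len = 0

    def append(self, x):                       # x: [n, head_dim]
        need = self.len + x.shape[0]
        if need > self.buf.shape[0]:           # grow geometrically: O(1) amortized
            cap = self.buf.shape[0]
            while cap < need:
                cap *= 2
            grown = torch.empty(cap, self.buf.shape[1], dtype=self.buf.dtype)
            grown[: self.len] = self.buf[: self.len]
            self.buf = grown
        self.buf[self.len : need] = x
        self.len = need

    def view(self):
        return self.buf[: self.len]            # zero-copy slice of live entries

kv = DoublingKVBuffer(head_dim=8, init_capacity=4)
for _ in range(100):                           # one appended key per decode step
    kv.append(torch.randn(1, 8, dtype=torch.float16))
```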

Inference server

OpenAI-compatible API with automatic tier management:

fade-server --model Qwen/Qwen2.5-0.5B-Instruct --preset balanced --port 8000
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d "{\"messages\":[{\"role\":\"user\",\"content\":\"Hello!\"}],\"max_tokens\":100}"

Endpoints: /v1/chat/completions (greedy + sampling), /v1/models, /health.

Checkpointing

sd = cache.cache_state_dict()
torch.save(sd, "cache.pt")
cache.load_cache_state_dict(torch.load("cache.pt"))

Observability

from fade.telemetry import JsonlExporter, attach_telemetry
attach_telemetry(cache, JsonlExporter("events.jsonl"))

Debug dump: cache.dump_debug("snapshot.json")

PQ codebook

from fade.codebook import PQCodebook
cb = PQCodebook.train(calibration_vectors, sub_dim=32, num_centroids=256)
cache.set_codebooks(cb)  # enables TIER_PQ in tier assignment

Train codebooks from a real model: python scripts/train_codebook.py
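Product quantization splits each vector into sub-vectors and snaps each to its nearest per-subspace centroid, storing only the centroid index. A self-contained toy version with a tiny k-means (for intuition only; the real pipeline is scripts/train_codebook.py, optionally backed by scikit-learn):

```python
import torch

torch.manual_seed(0)

def kmeans(x, k, iters=10):
    # Tiny Lloyd's k-means for illustration; real codebooks use sklearn/faiss.
    cent = x[torch.randperm(x.shape[0])[:k]].clone()
    for _ in range(iters):
        assign = torch.cdist(x, cent).argmin(dim=1)
        for j in range(k):
            pts = x[assign == j]
            if len(pts):
                cent[j] = pts.mean(dim=0)
    return cent

def pq_train(vectors, sub_dim, num_centroids):
    # One independent codebook per sub_dim-sized chunk of the vector.
    chunks = vectors.reshape(vectors.shape[0], -1, sub_dim)
    return [kmeans(chunks[:, i], num_centroids) for i in range(chunks.shape[1])]

def pq_encode(vectors, codebooks, sub_dim):
    chunks = vectors.reshape(vectors.shape[0], -1, sub_dim)
    return torch.stack(
        [torch.cdist(chunks[:, i], cb).argmin(dim=1) for i, cb in enumerate(codebooks)],
        dim=1,
    )

def pq_decode(codes, codebooks):
    return torch.cat([cb[codes[:, i]] for i, cb in enumerate(codebooks)], dim=1)

vecs = torch.randn(512, 64)                          # toy calibration vectors
books = pq_train(vecs, sub_dim=8, num_centroids=16)
codes = pq_encode(vecs, books, sub_dim=8)            # one small int per sub-vector
recon = pq_decode(codes, books)
```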

Results

| Config | Model | KV cache | Compression |
| --- | --- | --- | --- |
| INT4 symmetric | Qwen2.5-0.5B, 2K tok | 6.78 MiB | 3.5× |
| Rotated 4-bit | Qwen2.5-0.5B, 2K tok | 6.78 MiB | 3.5× (better quality) |
| Rotated 2-bit | Qwen2.5-0.5B, 2K tok | 3.88 MiB | 6.2× |
| Phase 2 H2O | Qwen2.5-3B, 595 tok | 6.3 MiB | (with eviction) |

Project layout

fade/
  cache.py           # TieredKVCache with 5 tiers (FP16/INT4/INT2/PQ/evict)
  config.py          # FadeConfig with presets
  quant.py           # INT4/INT2 quantization + bit-packing
  rope.py            # 7 RoPE schemes incl. Gemma 4 proportional
  policy.py          # Tier assignment: h2o, ema, position
  learned_policy.py  # Learned eviction MLP
  tracker.py         # AttentionTracker (per-layer EMA)
  patch.py           # load_model, create_tiered_cache, forward_with_tracking
  codebook.py        # PQ codebook train/encode/decode
  telemetry.py       # Structured telemetry + exporters
  kernels/           # Fused INT4 FlashAttention kernel + unpack kernel + fallback
  serving/           # vLLM / SGLang adapter stubs
  eval/              # Perplexity, needle, quality suite
examples/            # quickstart.py
experiments/         # run_baseline.py, run_tiered.py
benchmarks/          # tps.py, divergence.py
scripts/             # train_eviction_mlp.py, train_codebook.py
tests/               # 136 tests, all CPU, no downloads

Gotchas

  1. Attention impl: eager attention is needed only for H2O prefill; load_model(attn_impl="auto") selects the right implementation automatically.
  2. Transformers version: verified on 4.45 and 5.3. Weekly canary CI runs against transformers@main.
  3. Memory: use cache.compressed_storage_bytes(), not nvidia-smi.
  4. RoPE precision: all math in float32, cast through model dtype to match rounding.
  5. Hybrid models: Qwen 3.5/3.6 DeltaNet layers are auto-skipped — only full-attention layers are tiered.
  6. Triton kernels: fused attention via fused_int4_sdpa(), unpack-only via int4_sdpa(force_triton=True). Run check_fused_parity() to validate on your hardware.

License

Apache-2.0. See LICENSE.
