
Causal Salience-Aware Quantization - Mixed precision LLM weights with self-speculative decoding

Project description

CSAQ: Causal Salience-Aware Quantization

Causal Salience-Aware Quantization (CSAQ) is a high-performance LLM weight quantization engine designed to hit precisely defined fractional bit budgets (e.g., exactly 4.0 bits per weight on average) by mixing precision formats. Instead of relying on indirect salience proxies as methods like AWQ or GPTQ do, CSAQ uses first-order Taylor approximations to estimate each weight's actual causal salience, combined with co-activation interaction graphs.
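The first-order Taylor criterion can be sketched in a few lines (a minimal illustration, not CSAQ's actual implementation; `taylor_salience` is a hypothetical helper): zeroing a weight w_i changes the loss by approximately |g_i * w_i|, so the gradient-weight product serves as a per-weight salience score.

```python
def taylor_salience(weights, grads):
    """First-order Taylor salience: |g_i * w_i| approximates the loss
    change caused by zeroing weight w_i, so larger scores mark weights
    that deserve higher-precision formats."""
    return [abs(g * w) for w, g in zip(weights, grads)]

# A small weight with a steep gradient can outrank a large weight with
# a flat gradient:
scores = taylor_salience([0.5, -2.0, 0.01], [1.0, 0.1, 3.0])
# scores ~ [0.5, 0.2, 0.03]
```

Note that this differs from pure magnitude ranking: the -2.0 weight is the largest in absolute value but only second in salience, because its gradient is nearly flat.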

Features

  • Multi-Bit Mixed Precision: Replaces static, one-size-fits-all quantization settings. Automatically assigns bit-widths (1, 2, 4, 8, or 16) according to measured salience, concentrating precision on critical model pathways to minimize degradation.
  • Top-K Jaccard Co-Activation Graphs: Discovers sets of weights that frequently fire together and groups them into "Atomic Cliques".
  • Shared-Scale Architecture: Assigns low-precision bits to follower weights by reusing the quantization scaling factor (S) and zero-point (Z) of the clique's high-salience leader, aggressively compressing parameters without losing scale context.
  • Constant Memory Footprint: Tracks Jaccard activation statistics with an online bit-vector union/intersection accumulator, keeping memory usage constant and avoiding out-of-memory (OOM) errors during calibration.
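The constant-memory co-activation tracking can be sketched with running popcounts (a hypothetical illustration; the class name and API are not CSAQ's): binarized activation masks are streamed in one calibration step at a time, and only two counters are retained, so memory stays flat regardless of calibration-set size.

```python
class JaccardAccumulator:
    """Online Jaccard similarity between two activation streams.

    Each calibration step contributes one binarized activation mask per
    stream (bit i set => unit i fired). Only two running counters are
    kept, so memory stays constant no matter how long calibration runs."""

    def __init__(self):
        self.intersection = 0  # running popcount of A & B
        self.union = 0         # running popcount of A | B

    def update(self, mask_a: int, mask_b: int) -> None:
        # Python ints act as arbitrary-width bit-vectors
        self.intersection += bin(mask_a & mask_b).count("1")
        self.union += bin(mask_a | mask_b).count("1")

    def similarity(self) -> float:
        return self.intersection / self.union if self.union else 0.0

acc = JaccardAccumulator()
acc.update(0b1101, 0b1001)  # intersection 0b1001 (2 bits), union 0b1101 (3 bits)
acc.update(0b0110, 0b0110)  # identical masks: +2 to both counters
sim = acc.similarity()      # (2 + 2) / (3 + 2) = 0.8
```

A pair whose running similarity exceeds a threshold (cf. `clique_threshold` in the Quick Start below) would be candidates for the same clique.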

Installation

Install using pip:

pip install csaq-quant

Quick Start

Python API

You can apply CSAQ programmatically with the quantize function, configuring constraints through CSAQConfig:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from csaq import quantize, CSAQConfig, build_calibration_data

# 1. Load your standard HF LLM
model_id = "Qwen/Qwen1.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu")

# 2. Extract representative calibration data
calib_data = build_calibration_data(tokenizer, n=32, seq_len=128)

# 3. Configure fractional Bit-Budget and allowed bits (e.g., target exactly 4 bits on average)
config = CSAQConfig(
    target_bits=4.0, 
    bit_options=[1, 2, 4, 8, 16],
    clique_threshold=0.85
)

# 4. Run the quantization pipeline
quantized_model, info = quantize(
    model=model, 
    calib_data=calib_data, 
    config=config, 
    verbose=True
)
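To make the shared-scale idea from the feature list concrete, here is a simplified sketch under assumed uniform affine quantization (all names are illustrative, not CSAQ's API, and storing follower codes as the top bits of the leader's grid is one possible design, not necessarily CSAQ's): the leader's scale S and zero-point Z are computed once, and followers are encoded at a lower bit-width against that same grid.

```python
def affine_params(values, bits=8):
    """Affine quantization parameters (scale S, zero-point Z) for one tensor."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / ((1 << bits) - 1) or 1.0
    zero_point = round(-lo / scale)
    return scale, zero_point

def follower_encode(v, scale, zp, leader_bits=8, follower_bits=2):
    """Quantize a follower weight onto the leader's grid, keeping only the
    top `follower_bits` bits of the code - the follower stores no S/Z of
    its own, it reuses the leader's."""
    qmax = (1 << leader_bits) - 1
    q = max(0, min(qmax, round(v / scale) + zp))
    return q >> (leader_bits - follower_bits)

def follower_decode(code, scale, zp, leader_bits=8, follower_bits=2):
    """Dequantize a low-bit follower code using the leader's (S, Z)."""
    shift = leader_bits - follower_bits
    q = (code << shift) + (1 << (shift - 1))  # mid-point of the coarse bin
    return scale * (q - zp)

# The leader's range defines the shared grid; followers ride on it at 2 bits.
scale, zp = affine_params([-0.8, 0.1, 0.9, 0.4], bits=8)
code = follower_encode(0.3, scale, zp)      # a 2-bit code in [0, 3]
approx = follower_decode(code, scale, zp)   # coarse reconstruction of 0.3
```

The follower pays only 2 bits of storage yet its dequantized value stays in the leader's dynamic range, which is the "without losing scale context" property the feature list refers to.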

License

MIT License
