
AutoRound

Advanced Weight-Only Quantization Algorithm for LLMs

AutoRound is an advanced weight-only quantization algorithm for low-bit LLM inference. It is tailored for a wide range of models and consistently delivers noticeable improvements, often significantly outperforming SignRound, at the cost of more tuning time for quantization.

Prerequisites

  • Python 3.9 or higher

Installation

Build from Source

pip install -r requirements.txt
python setup.py install

Usage

On CPU/GPU

from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(
            model_name, low_cpu_mem_usage=True, torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
bits, group_size, scheme = 4, 128, "asym"
autoround = AutoRound(model, tokenizer, bits=bits, group_size=group_size, scheme=scheme)
autoround.quantize()

# Intel CPU inference: for now, only llama, mistral and gpt-j are supported.
# After exporting, follow ITREX (https://github.com/intel/intel-extension-for-transformers/tree/main/intel_extension_for_transformers/llm/runtime/neural_speed) to load the model and run inference.
# Currently, please install neural-speed (https://github.com/intel/neural-speed) from source.
output_dir = "./tmp_autoround"
autoround.export(output_dir)

from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig

woq_config = WeightOnlyQuantConfig(group_size=group_size, scheme=scheme, use_autoround=True)  # currently only 4 bits are supported
prompt = "Once upon a time, a little girl"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=woq_config, trust_remote_code=True, device="cpu")
outputs = model.generate(inputs, max_new_tokens=30)
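
The generated token ids can be decoded back to text with the tokenizer (a minimal follow-on sketch):

print(tokenizer.decode(outputs[0], skip_special_tokens=True))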

Tuning on Intel Gaudi2

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(
            model_name, low_cpu_mem_usage=True, torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
bits, group_size, scheme = 4, 128, "asym"

# Note: load the model first, then import AutoRound.
from auto_round import AutoRound
autoround = AutoRound(model, tokenizer, bits=bits, group_size=group_size, scheme=scheme,
                      device="hpu", scale_dtype="bf16", amp=False)
autoround.quantize()
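
After tuning on Gaudi2, the quantized model can presumably be exported in the same way as in the CPU/GPU example above:

output_dir = "./tmp_autoround"
autoround.export(output_dir)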

Detailed Hyperparameters
  • model: The PyTorch model to be quantized.

  • tokenizer: An optional tokenizer for processing input data. If none is provided, a dataloader must be supplied.

  • bits (int): Number of bits for quantization (default is 4).

  • group_size (int): Size of the quantization group (default is 128).

  • scheme (str): The quantization scheme (sym/asym) to be used (default is "asym").

  • use_quant_input (bool): Whether to use the output of the previous quantized block as the input for the current block (default is True).

  • enable_minmax_tuning (bool): Whether to enable weight min-max tuning (default is True).

  • iters (int): Number of tuning iterations (default is 200).

  • lr (float): The learning rate for the rounding values (default is None; it will be set to 1.0/iters automatically).

  • minmax_lr (float): The learning rate for min-max tuning (default is None; it will be set to lr automatically).

  • n_samples (int): Number of samples for tuning (default is 512).

  • seqlen (int): Data length of the sequence for tuning (default is 2048).

  • bs (int): Batch size for training (default is 8).

  • amp (bool): Whether to use automatic mixed precision (default is True).

  • n_blocks (int): Number of blocks packed together and tuned jointly (default is 1).

  • gradient_accumulate_steps (int): Number of gradient accumulation steps (default is 1).

  • low_gpu_mem_usage (bool): Whether to save GPU memory at the cost of slightly longer tuning time (default is True).

  • dataset_name (str): The default dataset name for tuning (default is "NeelNanda/pile-10k").

  • dataset_split (str): The split of the dataset to be used for tuning (default is "train").

  • dataloader: The dataloader for tuning data.

  • weight_config (dict): Configuration for weight quantization (default is an empty dictionary), mainly for mixed bits or mixed precision.

  • device: The device to be used for tuning (default is "cuda:0").
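
For illustration, a sketch that passes several of the hyperparameters above explicitly, assuming model and tokenizer are loaded as in the usage examples above; the values simply restate the documented defaults and are not tuned recommendations:

from auto_round import AutoRound

autoround = AutoRound(model, tokenizer, bits=4, group_size=128, scheme="asym",
                      enable_minmax_tuning=True, iters=200, n_samples=512,
                      seqlen=2048, bs=8, amp=True, gradient_accumulate_steps=1,
                      low_gpu_mem_usage=True, device="cuda:0")
autoround.quantize()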

Validated Models

For wikitext2/ptb-new/c4-new perplexity (ppl), we follow the GPTQ evaluation code and set the sequence length to 2048. For lm-eval wikitext ppl, we adopt lm-eval. The quantization configuration is W4G128.

| Model | Method | Acc AVG. | MMLU | Lamb. | Hella. | Wino. | Piqa | Truth. | Open. | Boolq | RTE | ARC-e | ARC-c | wikitext2 ppl | ptb_new ppl | c4_new ppl | lm_eval wikitext ppl |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Intel/neural-chat-7b-v3 | FP16 | 67.92 | 61.13 | 73.03 | 66.39 | 76.40 | 81.01 | 47.37 | 38.8 | 86.97 | 75.81 | 82.66 | 57.51 | 6.00 | 48.96 | 9.65 | - |
| | Ours | 66.90 | 60.56 | 72.19 | 65.28 | 75.37 | 81.18 | 46.76 | 36.0 | 86.91 | 73.29 | 81.73 | 56.66 | 6.21 | 59.78 | 10.01 | - |
| | Ours (iters=1K, use_quant_input=False, minmax_lr=0.002) | 67.70 | 60.57 | 73.74 | 65.62 | 77.43 | 80.85 | 47.61 | 36.8 | 86.94 | 75.09 | 82.66 | 57.34 | 6.17 | 59.12 | 9.83 | - |
| mistralai/Mixtral-8x7B-v0.1 | BF16 | 67.16 | 69.83 | 78.44 | 64.89 | 76.40 | 82.43 | 34.15 | 35.40 | 84.98 | 71.12 | 84.22 | 56.91 | 3.84 | 19.22 | 7.41 | - |
| | Ours | 65.98 | 68.90 | 78.11 | 64.31 | 74.27 | 82.10 | 30.97 | 34.20 | 84.57 | 67.87 | 83.96 | 56.57 | 4.08 | 354 | 7.56 | - |
| | Ours (iters=1K, use_quant_input=False) | 66.78 | 68.68 | 78.61 | 64.40 | 76.56 | 81.99 | 32.56 | 34.80 | 85.96 | 70.76 | 83.96 | 56.31 | 3.99 | 17.65 | 7.52 | - |
| microsoft/phi-2 | FP16 | 61.80 | 56.40 | 62.78 | 55.83 | 75.77 | 78.67 | 31.21 | 40.40 | 83.36 | 62.45 | 80.05 | 52.90 | 9.71 | 18.16 | 14.12 | 11.05 |
| | Ours | 61.67 | 54.57 | 61.32 | 55.04 | 76.48 | 78.89 | 29.74 | 40.60 | 83.24 | 66.43 | 79.76 | 52.30 | 9.98 | 18.67 | 14.39 | 11.37 |
| | Ours (iters=1K, use_quant_input=False) | 61.47 | 55.41 | 61.77 | 54.92 | 76.40 | 78.29 | 31.09 | 40.0 | 83.24 | 63.54 | 79.29 | 52.22 | 9.97 | 18.63 | 14.37 | 11.35 |
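
The wikitext2/ptb_new/c4_new columns above follow the GPTQ-style perplexity evaluation with a sequence length of 2048. A minimal sketch of that computation for wikitext2; the helper name and the use of the datasets library are assumptions for illustration, not the project's evaluation script:

import torch
from datasets import load_dataset

def eval_wikitext2_ppl(model, tokenizer, seqlen=2048, device="cuda:0"):
    # Concatenate the wikitext2 test split and tokenize it as one long stream.
    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
    ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids
    model = model.to(device).eval()
    nlls = []
    with torch.no_grad():
        for i in range(ids.shape[1] // seqlen):
            chunk = ids[:, i * seqlen:(i + 1) * seqlen].to(device)
            # The causal LM loss is the mean negative log-likelihood over the chunk.
            nlls.append(model(chunk, labels=chunk).loss * seqlen)
    return torch.exp(torch.stack(nlls).sum() / (len(nlls) * seqlen)).item()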

We provide a comprehensive comparison with other methods in our accuracy data section. Notably, our approach outperforms GPTQ in 30 of 32 comparisons and AWQ in 27 of 32 across llamav1/llamav2/mistral-7b at W4G-1, W4G128, W3G128, and W2G128, with comparable tuning costs.

Tips

1. Consider increasing the number of tuning steps to achieve better results, albeit at the cost of more tuning time. Additionally, setting 'use_quant_input' to False or adjusting 'minmax_lr' to 2.0/iters has been observed to occasionally yield improved results, as sketched below.
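
For instance, a sketch of these settings with 1K tuning iterations, mirroring the 1K-iteration rows in the table above; model and tokenizer are assumed to be loaded as in the earlier examples:

autoround = AutoRound(model, tokenizer, bits=4, group_size=128, scheme="asym",
                      iters=1000, use_quant_input=False, minmax_lr=2.0 / 1000)
autoround.quantize()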

2. Leverage AutoGPTQ to run the model on a GPU:

from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "facebook/opt-125m"
model = AutoModelForCausalLM.from_pretrained(
            model_name, low_cpu_mem_usage=True, torch_dtype="auto", trust_remote_code=True
        )
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

autoround = AutoRound(model, tokenizer, bits=4, group_size=128, scheme="asym")
autoround.quantize()

## Export to the AutoGPTQ format
# Please install auto-gptq (https://github.com/AutoGPTQ/) first
output_dir = "/path/to/quantized_model"
autoround.export(output_dir, target="auto_gptq", use_triton=True)
# Then follow auto-gptq to load the model and run inference.
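
A minimal loading sketch, assuming auto-gptq is installed and reusing tokenizer and output_dir from the block above; the exact API may differ across auto-gptq versions:

from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(output_dir, device="cuda:0", use_triton=True)
inputs = tokenizer("Once upon a time, a little girl", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))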

Examples

Quantization has been enabled for various large language models. Please refer to the example readme for details.

Reference

If you find SignRound useful for your research, please cite our paper:

@article{cheng2023optimize,
  title={Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}

