AutoGPTQ
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
English | 中文
The path to v1.0.0
Hi, fellow community members, long time no see! I'm sorry that I haven't been able to update this project more frequently due to personal reasons during this period. The past few weeks have been huge in terms of my career plans. Not long ago, I officially bid farewell to the startup team that I joined for two years after graduation. I'm very grateful to the leaders and colleagues of the team for their trust and guidance, which enabled me to grow rapidly in two years; at the same time, I'm also really grateful to the team for allowing me to use the internal A100 GPU server cluster free of charge since the start of the AutoGPTQ project to complete various experiments and performance evaluations. (Of course, it can no longer be used in the future, so it would mean a lot to me if there is new hardware sponsorship!) In the past two years, I served as an AI engineer in this team, responsible for the architecture design and development of the LLM-based dialogue system. We successfully launched a product called gemsouls, but unfortunately it has ceased operations. Now, the team is about to launch a new product called modelize, an LLM-native AI agent platform where users can use multiple AI agents to build a highly automated team, allowing them to interact with each other in a workflow and collaborate to complete complex projects efficiently.
Getting back to the topic, I'm very excited to see that in the past few months, research on optimizing the inference performance of LLMs has made tremendous progress. Now we can run LLM inference efficiently not only on high-end GPUs, but also on CPUs and even edge devices. A series of technological advancements make me eager to make more contributions to the open source community. Therefore, I will first spend about four weeks gradually updating AutoGPTQ to the official v1.0.0 release. During this period, 2~3 minor versions will also be released so that users can try performance optimizations and new features in a timely manner. In my vision, by the time v1.0.0 is officially released, AutoGPTQ will be able to serve as an extensible and flexible quantization backend that supports all GPTQ-like methods and automatically quantizes LLMs written in PyTorch. I detailed the development plan in this issue; feel free to drop in there for discussion and give your suggestions!
News or Update
- 2023-08-23 - (News) - 🤗 Transformers, optimum and peft have integrated `auto-gptq`, so running and training GPTQ models is now more accessible to everyone! See this blog and its resources for more details!
- 2023-08-21 - (News) - The Qwen team officially released a 4-bit quantized version of Qwen-7B based on `auto-gptq`, and provided detailed benchmark results.
- 2023-08-06 - (Update) - Support exllama's q4 CUDA kernel, giving at least a 1.3x speed-up for int4 quantized models at inference time.
- 2023-08-04 - (Update) - Support RoCm so that AMD GPU users can use auto-gptq with CUDA extensions.
- 2023-07-26 - (Update) - An elegant PPL benchmark script to get results that can be fairly compared with other libraries such as `llama.cpp`.
- 2023-06-05 - (Update) - Integrate with 🤗 peft to train adapters on GPTQ quantized models; LoRA, AdaLoRA, AdaptionPrompt, etc. are supported.
- 2023-05-30 - (Update) - Support downloading/uploading quantized models from/to the 🤗 Hub.
For more history, please see here
Performance Comparison
Inference Speed
The results were generated using this script with an input batch size of 1, beam-search decoding, and the model forced to generate 512 tokens; the speed metric is tokens/s (the larger, the better).
The quantized model is loaded using the setup that yields the fastest inference speed.
model | GPU | num_beams | fp16 | gptq-int4 |
---|---|---|---|---|
llama-7b | 1xA100-40G | 1 | 18.87 | 25.53 |
llama-7b | 1xA100-40G | 4 | 68.79 | 91.30 |
moss-moon 16b | 1xA100-40G | 1 | 12.48 | 15.25 |
moss-moon 16b | 1xA100-40G | 4 | OOM | 42.67 |
moss-moon 16b | 2xA100-40G | 1 | 6.83 | 6.78 |
moss-moon 16b | 2xA100-40G | 4 | 13.10 | 10.80 |
gpt-j 6b | 1xRTX3060-12G | 1 | OOM | 29.55 |
gpt-j 6b | 1xRTX3060-12G | 4 | OOM | 47.36 |
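To make "the setup that yields the fastest inference speed" concrete, below is a rough sketch of how the quantized model might be loaded; the repo id is a placeholder and the exact flags (and their defaults) vary across auto-gptq releases, so treat it as an assumption rather than the benchmark's exact configuration:

```python
from auto_gptq import AutoGPTQForCausalLM

# placeholder repo id; with use_triton=False the CUDA (and, in recent releases, exllama)
# kernels are used, which tend to be the fastest option for int4 decoding
model = AutoGPTQForCausalLM.from_quantized(
    "your-username/llama-7b-4bit-128g",
    device="cuda:0",
    use_safetensors=True,
    use_triton=False,
)
```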
Perplexity
For a perplexity comparison, please refer to here and here
Installation
Quick Installation
You can install the latest stable release of AutoGPTQ from pip with pre-built wheels compatible with PyTorch 2.1:
- For CUDA 12.1: `pip install auto-gptq`
- For CUDA 11.8: `pip install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/`
- For RoCm 5.6.1: `pip install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/rocm561/`
- For RoCm 5.7.1: `pip install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/rocm571/`
AutoGPTQ can be installed with the Triton dependency via `pip install auto-gptq[triton]` in order to use the Triton backend (currently Linux only; 3-bit quantization is not supported).
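With the extra installed, the Triton backend is selected when loading a quantized model. A minimal sketch, assuming a Linux machine with a CUDA GPU and a placeholder model path:

```python
from auto_gptq import AutoGPTQForCausalLM

# "path/to/quantized-model" is a placeholder local directory or Hub repo id
model = AutoGPTQForCausalLM.from_quantized(
    "path/to/quantized-model",
    device="cuda:0",
    use_triton=True,  # requires the [triton] extra; Linux only, no 3-bit quantization
)
```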
Install from source
Clone the source code:
`git clone https://github.com/PanQiWei/AutoGPTQ.git && cd AutoGPTQ`
A few packages are required in order to build from source: `pip install numpy gekko pandas`.
Then, install from source:
`pip install -v .`
You can set `BUILD_CUDA_EXT=0` to disable building the PyTorch CUDA extension, but this is strongly discouraged as AutoGPTQ then falls back on a slow Python implementation.
On RoCm systems
To install from source for AMD GPUs supporting RoCm, please specify the `ROCM_VERSION` environment variable. Example:
`ROCM_VERSION=5.6 pip install -v -e .`
The compilation can be sped up by specifying the `PYTORCH_ROCM_ARCH` variable (reference) in order to build for a single target device, for example `gfx90a` for MI200 series devices (e.g. `PYTORCH_ROCM_ARCH=gfx90a ROCM_VERSION=5.6 pip install -v -e .`).
For RoCm systems, the packages `rocsparse-dev`, `hipsparse-dev`, `rocthrust-dev`, `rocblas-dev` and `hipblas-dev` are required to build.
The following combinations are tested:
RoCm version | PyTorch version |
---|---|
5.4.2 | 2.0.1 |
5.6 | 2.1.0 |
5.7 | nightly (2.2.0.dev2023) |
Quick Tour
Quantization and Inference
Warning: this is just a showcase of the basic APIs in AutoGPTQ, which uses only one sample to quantize a very small model; the quality of a model quantized with so few samples may not be good.
Below is an example of the simplest way to use `auto_gptq` to quantize a model and run inference after quantization:
from transformers import AutoTokenizer, TextGenerationPipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import logging
logging.basicConfig(
format="%(asctime)s %(levelname)s [%(name)s] %(message)s", level=logging.INFO, datefmt="%Y-%m-%d %H:%M:%S"
)
pretrained_model_dir = "facebook/opt-125m"
quantized_model_dir = "opt-125m-4bit"
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)
examples = [
tokenizer(
"auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."
)
]
quantize_config = BaseQuantizeConfig(
bits=4, # quantize model to 4-bit
group_size=128, # it is recommended to set the value to 128
    desc_act=False,  # setting to False can significantly speed up inference, but perplexity may be slightly worse
)
# load un-quantized model, by default, the model will always be loaded into CPU memory
model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
# quantize model; examples should be a list of dicts whose keys can only be "input_ids" and "attention_mask"
model.quantize(examples)
# save quantized model
model.save_quantized(quantized_model_dir)
# save quantized model using safetensors
model.save_quantized(quantized_model_dir, use_safetensors=True)
# push quantized model to Hugging Face Hub.
# to use use_auth_token=True, first login via huggingface-cli login.
# or pass an explicit token with: use_auth_token="hf_xxxxxxx"
# (uncomment the following three lines to enable this feature)
# repo_id = f"YourUserName/{quantized_model_dir}"
# commit_message = f"AutoGPTQ model for {pretrained_model_dir}: {quantize_config.bits}bits, gr{quantize_config.group_size}, desc_act={quantize_config.desc_act}"
# model.push_to_hub(repo_id, commit_message=commit_message, use_auth_token=True)
# alternatively you can save and push at the same time
# (uncomment the following three lines to enable this feature)
# repo_id = f"YourUserName/{quantized_model_dir}"
# commit_message = f"AutoGPTQ model for {pretrained_model_dir}: {quantize_config.bits}bits, gr{quantize_config.group_size}, desc_act={quantize_config.desc_act}"
# model.push_to_hub(repo_id, save_dir=quantized_model_dir, use_safetensors=True, commit_message=commit_message, use_auth_token=True)
# load quantized model to the first GPU
model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device="cuda:0")
# download quantized model from Hugging Face Hub and load to the first GPU
# model = AutoGPTQForCausalLM.from_quantized(repo_id, device="cuda:0", use_safetensors=True, use_triton=False)
# inference with model.generate
print(tokenizer.decode(model.generate(**tokenizer("auto_gptq is", return_tensors="pt").to(model.device))[0]))
# or you can also use pipeline
pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer)
print(pipeline("auto-gptq is")[0]["generated_text"])
For more advanced features of model quantization, please refer to this script
Customize Model
Below is an example of extending `auto_gptq` to support the `OPT` model; as you will see, it's very easy:
from auto_gptq.modeling import BaseGPTQForCausalLM
class OPTGPTQForCausalLM(BaseGPTQForCausalLM):
# chained attribute name of transformer layer block
layers_block_name = "model.decoder.layers"
    # chained attribute names of other nn modules that are at the same level as the transformer layer block
outside_layer_modules = [
"model.decoder.embed_tokens", "model.decoder.embed_positions", "model.decoder.project_out",
"model.decoder.project_in", "model.decoder.final_layer_norm"
]
    # chained attribute names of linear layers in the transformer layer module;
    # normally there are four sub-lists, where the modules in each sub-list can be seen as one operation,
    # and the order should match the order in which they are actually executed; in this case (and usually in most cases),
    # they are: attention q/k/v projection, attention output projection, MLP input projection, MLP output projection
inside_layer_modules = [
["self_attn.k_proj", "self_attn.v_proj", "self_attn.q_proj"],
["self_attn.out_proj"],
["fc1"],
["fc2"]
]
After this, you can use `OPTGPTQForCausalLM.from_pretrained` and other methods as shown in Basic.
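For instance, the custom class defined above can then be used just like the built-in ones; in the sketch below the repo id, output directory and config values are placeholders chosen for illustration:

```python
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(bits=4, group_size=128)
model = OPTGPTQForCausalLM.from_pretrained("facebook/opt-350m", quantize_config)
model.quantize(examples)  # `examples` prepared as in the quick tour above
model.save_quantized("opt-350m-4bit")
```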
Evaluation on Downstream Tasks
You can use tasks defined in `auto_gptq.eval_tasks` to evaluate a model's performance on a specific downstream task before and after quantization.
The predefined tasks support all causal language models implemented in 🤗 transformers and in this project.
Below is an example of evaluating `EleutherAI/gpt-j-6b` on a sequence-classification task using the `cardiffnlp/tweet_sentiment_multilingual` dataset:
from functools import partial
import datasets
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from auto_gptq.eval_tasks import SequenceClassificationTask
MODEL = "EleutherAI/gpt-j-6b"
DATASET = "cardiffnlp/tweet_sentiment_multilingual"
TEMPLATE = "Question:What's the sentiment of the given text? Choices are {labels}.\nText: {text}\nAnswer:"
ID2LABEL = {
0: "negative",
1: "neutral",
2: "positive"
}
LABELS = list(ID2LABEL.values())
def ds_refactor_fn(samples):
text_data = samples["text"]
label_data = samples["label"]
new_samples = {"prompt": [], "label": []}
for text, label in zip(text_data, label_data):
prompt = TEMPLATE.format(labels=LABELS, text=text)
new_samples["prompt"].append(prompt)
new_samples["label"].append(ID2LABEL[label])
return new_samples
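# to evaluate an fp16 baseline instead of the GPTQ wrapper, uncomment the next line
# and comment out the AutoGPTQForCausalLM.from_pretrained(...) call below it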
# model = AutoModelForCausalLM.from_pretrained(MODEL).eval().half().to("cuda:0")
model = AutoGPTQForCausalLM.from_pretrained(MODEL, BaseQuantizeConfig())
tokenizer = AutoTokenizer.from_pretrained(MODEL)
task = SequenceClassificationTask(
model=model,
tokenizer=tokenizer,
classes=LABELS,
data_name_or_path=DATASET,
prompt_col_name="prompt",
label_col_name="label",
**{
"num_samples": 1000, # how many samples will be sampled to evaluation
"sample_max_len": 1024, # max tokens for each sample
"block_max_len": 2048, # max tokens for each data block
        # function to load the dataset; it must accept only data_name_or_path as input
        # and return a datasets.Dataset
"load_fn": partial(datasets.load_dataset, name="english"),
# function to preprocess dataset, which is used for datasets.Dataset.map,
# must return Dict[str, list] with only two keys: [prompt_col_name, label_col_name]
"preprocess_fn": ds_refactor_fn,
        # truncate the label when a sample's length exceeds sample_max_len
"truncate_prompt": False
}
)
# note that max_new_tokens will be automatically specified internally based on given classes
print(task.run())
# self-consistency
print(
task.run(
generation_config=GenerationConfig(
num_beams=3,
num_return_sequences=3,
do_sample=True
)
)
)
Learn More
tutorials provide step-by-step guidance on integrating `auto_gptq` with your own project, along with some best-practice principles.
examples provide plenty of example scripts for using `auto_gptq` in different ways.
Supported Models
You can compare `model.config.model_type` with the table below to check whether the model you use is supported by `auto_gptq`.
For example, the model_type of `WizardLM`, `vicuna` and `gpt4all` is `llama`, hence they are all supported by `auto_gptq`.
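A quick way to check this from Python (the repo id below is only an example):

```python
from transformers import AutoConfig

# vicuna checkpoints report model_type == "llama", so they fall under the llama row of the table
config = AutoConfig.from_pretrained("lmsys/vicuna-7b-v1.5")
print(config.model_type)  # -> "llama"
```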
model type | quantization | inference | peft-lora | peft-ada-lora | peft-adaption_prompt |
---|---|---|---|---|---|
bloom | ✅ | ✅ | ✅ | ✅ | |
gpt2 | ✅ | ✅ | ✅ | ✅ | |
gpt_neox | ✅ | ✅ | ✅ | ✅ | ✅ requires this peft branch |
gptj | ✅ | ✅ | ✅ | ✅ | ✅ requires this peft branch |
llama | ✅ | ✅ | ✅ | ✅ | ✅ |
moss | ✅ | ✅ | ✅ | ✅ | ✅ requires this peft branch |
opt | ✅ | ✅ | ✅ | ✅ | |
gpt_bigcode | ✅ | ✅ | ✅ | ✅ | |
codegen | ✅ | ✅ | ✅ | ✅ | |
falcon(RefinedWebModel/RefinedWeb) | ✅ | ✅ | ✅ | ✅ | |
Supported Evaluation Tasks
Currently, `auto_gptq` supports: `LanguageModelingTask`, `SequenceClassificationTask` and `TextSummarizationTask`; more tasks will come soon!
Running tests
Tests can be run with:
`pytest tests/ -s`
Acknowledgement
- Special thanks to Elias Frantar, Saleh Ashkboos, Torsten Hoefler and Dan Alistarh for proposing the GPTQ algorithm and open-sourcing the code.
- Special thanks to qwopqwop200; the quantization-related code in this project is mainly referenced from GPTQ-for-LLaMa.