FMS Acceleration for PeFT
Project description
FMS Acceleration for Accelerated PeFT Techniques
Currently it supports only LoRA-related techniques, but more are in the pipeline to be added:
Plugins
| Plugin | Description | Depends | Loading | Augmentation | Callbacks |
|---|---|---|---|---|---|
| autogptq | Loads 4bit GPTQ-LoRA with quantized GPTQ as base | AutoGPTQ | ✅ | ✅ | ✅ |
| bnb | Loads 4bit QLoRA with quantized bitsandbytes Linear4bit | Huggingface bitsandbytes | ✅ | ✅ | ✅ |
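The Loading, Augmentation, and Callbacks columns above correspond to the three hook points each plugin can implement. As a rough illustration of that shape (class and method names here are hypothetical, not the actual fms-acceleration API):

```python
# Hypothetical sketch of a plugin exposing the three hook points from the
# table above (Loading, Augmentation, Callbacks). Names are illustrative only.

class SketchPlugin:
    def load_model(self, model_name: str) -> dict:
        # "Loading": construct the (quantized) base model.
        return {"name": model_name, "quantized": True}

    def augment(self, model: dict, train_args):
        # "Augmentation": patch the model/training setup before training starts.
        model["augmented"] = True
        return model, train_args

    def callbacks(self) -> list:
        # "Callbacks": extra trainer callbacks, if any.
        return []


plugin = SketchPlugin()
model = plugin.load_model("some/4bit-model")
model, _ = plugin.augment(model, train_args=None)
```

A real plugin would return an actual quantized model and patched training arguments; the dictionary here only stands in for them.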
Key Points
- Fix upcasting (resulting in slowdown) issue for the `bnb` plugin, originally discovered by the inventors of Unsloth. NOTE: we recommend using mixed precision when using 4bit quant for better performance, as per our benchmarks.
- `bnb` is properly configured to work with FSDP following this guide.
- `triton_v2` kernels are not yet properly integrated into huggingface optimum.
- `triton_v2` kernels are the only 4bit kernels that work for training.
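For reference, selecting the `bnb` plugin uses a configuration block analogous to the `auto_gptq` one shown below for GPTQ-LoRA; the exact keys in this fragment are an assumption based on the plugin's sample configurations, so check those for the authoritative names:

```yaml
# Assumed configuration shape for the bnb plugin (verify against the
# plugin's sample configs before use).
peft:
  quantization:
    bitsandbytes:
      quant_type: nf4
```

Per the note above, pair this with mixed-precision training for better performance.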
GPTQ-LoRA's AutoGPTQ - Current Implementation vs Legacy Implementation

GPTQ-LoRA depends on an AutoGPTQ backend to run. There are two backend options:

- Current Implementation
  - This is an extracted local subset from ModelCloud's refactored fork.
  - It removes redundant code to simplify the build and installation of the plugin.
- Legacy Implementation
  - This requires building the package from the official AutoGPTQ repository.
  - To replicate this implementation, follow the installation below:
    - The legacy implementation of GPTQ-LoRA uses an external AutoGPTQ package, so you must ensure the specific commit is installed:

      `pip install git+https://github.com/AutoGPTQ/AutoGPTQ.git@ea829c7bbe83561c2b1de26795b6592992373ef7`

    - To construct the plugin, set `use_external_lib: True` in the configuration object that is passed to the plugin (otherwise it defaults to the local AutoGPTQ package):

      ```yaml
      peft:
        quantization:
          auto_gptq:
            kernel: triton_v2
            from_quantized: True
            use_external_lib: True
      ```
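Conversely, to use the current (local) implementation, simply omit `use_external_lib` from the same configuration block; this fragment assumes only the defaults described above:

```yaml
peft:
  quantization:
    auto_gptq:
      kernel: triton_v2
      from_quantized: True
      # use_external_lib omitted -> defaults to the local AutoGPTQ subset
```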
Known Issues
- GPTQ-LoRA is sometimes observed to have `nan` grad norms at the beginning of training, but training proceeds well otherwise.
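A lightweight way to surface this issue early is to watch the grad norms reported during the first steps; the helper below is a generic sketch (not part of this package):

```python
import math

def nan_grad_norm_steps(grad_norms):
    """Return the (0-based) step indices whose grad norm is NaN.

    `grad_norms` is any iterable of floats, e.g. the grad-norm values
    a trainer logs during the first few steps.
    """
    return [i for i, g in enumerate(grad_norms) if math.isnan(g)]

# Example: NaN norms at the start of training, then healthy values.
logged = [float("nan"), float("nan"), 1.7, 1.2, 0.9]
bad_steps = nan_grad_norm_steps(logged)
```

If only the first few steps are flagged, the known issue above suggests training can usually be left to proceed.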
Project details
Download files
Source Distributions
Built Distribution
File details
Details for the file fms_acceleration_peft-0.3.5-py3-none-any.whl.
File metadata
- Download URL: fms_acceleration_peft-0.3.5-py3-none-any.whl
- Upload date:
- Size: 75.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.0.1 CPython/3.12.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e4360a2ae2b562ae8f3fc17a7ff5d7f7976841ebdfb4d0ea6ed1804ab4b48d58 |
| MD5 | 3ce873750c9c32cb0c8538b69d39f6f0 |
| BLAKE2b-256 | 2785b76fbaf512736126eccd26839274f69e858c5273d7db19aa7573d6a37f64 |
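The digests above can be used to check a downloaded wheel; for example, the SHA256 digest can be verified with the standard library alone:

```python
import hashlib
import io

def sha256_of(stream, chunk_size=1 << 20):
    """Stream a binary file object and return its hex SHA256 digest."""
    h = hashlib.sha256()
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        h.update(chunk)
    return h.hexdigest()

# Digest listed in the hashes table above.
expected = "e4360a2ae2b562ae8f3fc17a7ff5d7f7976841ebdfb4d0ea6ed1804ab4b48d58"

# After downloading the wheel:
# with open("fms_acceleration_peft-0.3.5-py3-none-any.whl", "rb") as f:
#     assert sha256_of(f) == expected

# Self-contained demonstration on in-memory bytes:
demo = sha256_of(io.BytesIO(b"demo"))
```

Streaming in chunks avoids loading the whole wheel into memory, which matters more for large artifacts than for this 75 kB file.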
Provenance
The following attestation bundles were made for fms_acceleration_peft-0.3.5-py3-none-any.whl:
Publisher: build-and-publish.yml on foundation-model-stack/fms-acceleration

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: fms_acceleration_peft-0.3.5-py3-none-any.whl
- Subject digest: e4360a2ae2b562ae8f3fc17a7ff5d7f7976841ebdfb4d0ea6ed1804ab4b48d58
- Sigstore transparency entry: 158976614
- Sigstore integration time:

Source:
- Permalink: foundation-model-stack/fms-acceleration@c68759fd5e5fe8017d9cdee76acd52950aad41ae
- Branch / Tag: refs/tags/v0.5.0
- Owner: https://github.com/foundation-model-stack
- Access: public

Publication:
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: build-and-publish.yml@c68759fd5e5fe8017d9cdee76acd52950aad41ae
- Trigger Event: release