FMS Acceleration for PeFT
FMS Acceleration for Accelerated PeFT Techniques
This currently supports only LoRA-related techniques; more are in the pipeline to be added.
Plugins
Plugin | Description | Depends | Loading | Augmentation | Callbacks |
---|---|---|---|---|---|
autogptq | Loads 4bit GPTQ-LoRA with quantized GPTQ as base | AutoGPTQ | ✅ | ✅ | |
bnb | Loads 4bit QLoRA with quantized bitsandbytes Linear4bit | Huggingface bitsandbytes | ✅ | ✅ | |
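As a minimal sketch, a plugin such as `bnb` would be enabled through the framework's YAML configuration. The key names below (`bitsandbytes`, `quant_type`) are assumptions modeled on the `auto_gptq` example shown later in this description, not values documented here.

```yaml
# Minimal sketch (assumed key names, modeled on the auto_gptq example below):
# requests that the base model be loaded as a 4bit bitsandbytes-quantized
# model so that QLoRA can be applied on top of it.
peft:
  quantization:
    bitsandbytes:
      quant_type: nf4   # assumed parameter; nf4 is a common 4bit quantization type
```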
Key Points
- Fixes the upcasting (resulting in slowdown) issue for the `bnb` plugin, originally discovered by the inventors of Unsloth. `bnb` is properly configured to work with FSDP following this guide.
- `triton_v2` kernels are not yet properly integrated into Huggingface Optimum.
- `triton_v2` kernels are the only 4bit kernels that work for training.
GPTQ-LoRA's AutoGPTQ: Current Implementation vs. Legacy Implementation
GPTQ-LoRA depends on an AutoGPTQ backend to run. There are two backend options:
- Current Implementation
  - This is an extracted local subset from ModelCloud's refactored fork.
  - It removes redundant code to simplify the build and installation of the plugin.
- Legacy Implementation
  - This requires building the package from the official AutoGPTQ repository.
  - To replicate this implementation, follow the installation steps below.
The legacy implementation of GPTQ-LoRA uses an external AutoGPTQ package, so you must ensure the specific commit is installed:

```
pip install git+https://github.com/AutoGPTQ/AutoGPTQ.git@ea829c7bbe83561c2b1de26795b6592992373ef7
```

To construct the plugin, set `use_external_lib: True` in the configuration object that is passed to the plugin (otherwise it defaults to the local AutoGPTQ package):

```yaml
peft:
  quantization:
    auto_gptq:
      kernel: triton_v2
      from_quantized: True
      use_external_lib: True
```
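For comparison, here is a sketch of the equivalent configuration for the current (local) implementation. Since the plugin defaults to the local AutoGPTQ package when `use_external_lib` is not set to `True`, the key can simply be omitted.

```yaml
# Sketch: same keys as the example above, with use_external_lib omitted
# so the plugin falls back to its default, the local (current) AutoGPTQ
# implementation.
peft:
  quantization:
    auto_gptq:
      kernel: triton_v2
      from_quantized: True
```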
Known Issues
- Models with sliding windows (e.g., Mistral, Mixtral) will have memory and throughput issues.
- GPTQ-LoRA is sometimes observed to have `nan` grad norms at the beginning of training, but training otherwise proceeds well.
- `low_cpu_mem_usage` is temporarily disabled for AutoGPTQ until a bug with `make_sure_no_tensor_in_meta_device` is resolved.