FMS Acceleration for PeFT
Project description
FMS Acceleration for Accelerated PeFT Techniques
Currently only LoRA-related techniques are supported; more are in the pipeline:
Plugins
Plugin | Description | Depends | Loading | Augmentation | Callbacks |
---|---|---|---|---|---|
autogptq | Loads 4bit GPTQ-LoRA with quantized GPTQ as base | AutoGPTQ | ✅ | ✅ | |
bnb | Loads 4bit QLoRA with quantized bitsandbytes Linear4 | Huggingface bitsandbytes | ✅ | ✅ | |
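For context, the bnb plugin accelerates the standard Hugging Face QLoRA flow: a base model quantized into bitsandbytes Linear4bit layers with LoRA adapters attached via PEFT. Below is a minimal sketch of that flow using only standard transformers/peft/bitsandbytes APIs; the model name and LoRA hyperparameters are illustrative placeholders, and this is not the plugin's own API.

```python
# Minimal sketch of the 4bit QLoRA setup that the bnb plugin accelerates:
# a bitsandbytes-quantized base model plus LoRA adapters from PEFT.
# Model name and LoRA hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4bit
    bnb_4bit_quant_type="nf4",              # NF4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",                    # placeholder base model
    quantization_config=bnb_config,
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # placeholder target modules
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```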
Key Points
- Fixes the upcasting issue (which caused a slowdown) for the bnb plugin, originally discovered by the inventors of Unsloth (see the dtype check after this list).
- bnb is properly configured to work with FSDP, following this guide.
- triton_v2 kernels are not yet properly integrated into huggingface optimum.
- triton_v2 kernels are the only 4bit kernels that work for training.
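A simple way to verify that the upcasting fix is in effect is to inspect the dtypes of the LoRA adapter parameters after the model is prepared. The sketch below uses plain PyTorch and assumes PEFT's usual lora_A/lora_B parameter naming; it is not part of the plugin API.

```python
# Minimal sketch: report dtypes of LoRA adapter parameters to spot unintended
# upcasting to float32 (the slowdown issue noted above).
# Assumes PEFT's usual "lora_A"/"lora_B" naming; not part of the plugin API.
import torch

def report_lora_dtypes(model: torch.nn.Module) -> None:
    for name, param in model.named_parameters():
        if "lora_" in name:
            print(f"{name}: dtype={param.dtype}, requires_grad={param.requires_grad}")

# Example use, after the PEFT model has been constructed:
#     report_lora_dtypes(model)
```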
GPTQ-LORA's AutoGPTQ - Current Implementation vs Legacy Implementation
GPTQ-LoRA depends on an AutoGPTQ backend to run. There are two backend options:
- Current Implementation
  - This is an extracted local subset from ModelCloud's refactored fork.
  - It removes redundant code to simplify the build and installation of the plugin.
- Legacy Implementation
  - This requires building the package from the official AutoGPTQ repository.
  - To replicate this implementation, follow the installation below:
    - The legacy implementation of GPTQ-LoRA uses an external AutoGPTQ package, so you must ensure the specific commit is installed:
      pip install git+https://github.com/AutoGPTQ/AutoGPTQ.git@ea829c7bbe83561c2b1de26795b6592992373ef7
    - To construct the plugin, set use_external_lib: True in the configuration object that is passed to the plugin (otherwise it defaults to the local AutoGPTQ package):
      peft:
        quantization:
          auto_gptq:
            kernel: triton_v2
            from_quantized: True
            use_external_lib: True
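For context, GPTQ-LoRA pairs LoRA adapters with a base model whose weights are already GPTQ-quantized. The sketch below shows the generic transformers + PEFT flow for that pairing; it relies on the standard transformers GPTQ integration (which needs optimum and an AutoGPTQ backend installed), the checkpoint name and LoRA hyperparameters are placeholders, and this is not the plugin's internal API.

```python
# Minimal sketch of GPTQ-LoRA: load a pre-quantized GPTQ checkpoint and attach
# LoRA adapters via PEFT. Checkpoint name and hyperparameters are placeholders;
# requires optimum and an AutoGPTQ backend to be installed.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/TinyLlama-1.1B-Chat-v1.0-GPTQ",  # placeholder GPTQ-quantized checkpoint
    device_map="auto",
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],       # placeholder target modules
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```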
Known Issues
- Models with sliding windows (e.g., Mistral, Mixtral) will have memory and throughput issues.
- GPTQ-LoRA is sometimes observed to have nan grad norms at the beginning of training, but training proceeds well otherwise (see the check after this list).
- low_cpu_mem_usage is temporarily disabled for AutoGPTQ until the bug with make_sure_no_tensor_in_meta_device is resolved.
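Since the nan grad norms noted above appear only at the beginning of training, it can help to log the gradient norm for the first few steps. The sketch below is plain PyTorch and is not part of the plugin or trainer API.

```python
# Minimal sketch: compute the global L2 gradient norm after loss.backward()
# so early nan grad norms can be detected and logged. Plain PyTorch only;
# not part of the plugin or trainer API.
import math
import torch

def global_grad_norm(model: torch.nn.Module) -> float:
    total = 0.0
    for p in model.parameters():
        if p.grad is not None:
            total += p.grad.detach().float().pow(2).sum().item()
    return math.sqrt(total)

# Example use inside a training loop, after loss.backward():
#     norm = global_grad_norm(model)
#     if math.isnan(norm):
#         print("nan grad norm observed; consider skipping this optimizer step")
```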
Source Distributions
No source distribution files are available for this release.
Built Distribution
Hashes for fms_acceleration_peft-0.3.0-py3-none-any.whl
Algorithm | Hash digest |
---|---|
SHA256 | 9d73395de29802b85d515235512018e7ecaea504e40a954b50a0c8246bc524a2 |
MD5 | 5b1c2eb0e70611c1e3276a05edd4f426 |
BLAKE2b-256 | 830d928cf5122598a51090826a0bbd0a0109116b60e3742496d6e2f664ac2cdb |