FMS Acceleration for PeFT
Project description
FMS Acceleration for Accelerated PeFT Techniques
Currently it only supports LoRA-related techniques, but more are in the pipeline to be added:
Plugins
Plugin | Description | Depends | Loading | Augmentation | Callbacks |
---|---|---|---|---|---|
autogptq | Loads 4bit GPTQ-LoRA with quantized GPTQ as base | AutoGPTQ | ✅ | ✅ | |
bnb | Loads 4bit QLoRA with quantized bitsandbytes Linear4bit | Huggingface bitsandbytes | ✅ | ✅ | |
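
To illustrate what the `bnb` plugin loads and augments, here is a minimal QLoRA sketch using plain Hugging Face `transformers`, `bitsandbytes`, and `peft` APIs. This is not the plugin's own interface, only the standard 4bit LoRA setup that the plugin accelerates; the model name is a placeholder.

```python
# Minimal QLoRA sketch with standard Hugging Face APIs (not the plugin's own API).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the base model to 4bit NF4 via bitsandbytes Linear4bit layers.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "your-org/your-base-model",  # placeholder model id
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)

# Attach trainable LoRA adapters on top of the frozen 4bit base model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```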
Key Points
- Fixes the upcasting issue (resulting in slowdown) for the `bnb` plugin, originally discovered by the inventors of Unsloth.
- `bnb` is properly configured to work with FSDP following this guide (see the sketch after this list).
- `triton_v2` kernels are not yet properly integrated into huggingface optimum.
- `triton_v2` kernels are the only 4bit kernels that work for training.
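
The FSDP guidance referenced above amounts to keeping the quantized weight storage and compute dtypes in the same reduced precision so nothing is silently upcast to float32. The snippet below is a hedged sketch of that kind of bitsandbytes configuration, assuming a `transformers` version that supports `bnb_4bit_quant_storage`; it is not the plugin's internal code.

```python
# Sketch of an FSDP-friendly bitsandbytes setup (assumption: follows the
# Hugging Face FSDP-QLoRA guidance; not the plugin's internal code).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    # Keep compute and quantized storage in bfloat16 so FSDP flat parameters
    # and LoRA activations are not upcast to float32 (the slowdown noted above).
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)
```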
Known Issues
- Models with sliding windows (e.g., Mistral, Mixtral) will have memory and throughput issues.
- GPTQ-LORA is sometimes observed to have `nan` grad norms at the beginning of training, but training proceeds well otherwise.
- `low_cpu_mem_usage` is temporarily disabled for AutoGPTQ until the bug with `make_sure_no_tensor_in_meta_device` is resolved.
- Requires nightly AutoGPTQ until a package `> 0.7.1` becomes available: `pip install git+https://github.com/AutoGPTQ/AutoGPTQ.git` (a version check sketch follows this list).
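
Because the nightly AutoGPTQ requirement is easy to miss, a quick check can fail fast before training starts. This is only an illustrative sketch; the `> 0.7.1` threshold comes from the note above, and it assumes the package is installed under the PyPI distribution name `auto-gptq`.

```python
# Illustrative check that the installed AutoGPTQ is newer than 0.7.1
# (i.e. a nightly / source build), per the note above.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

try:
    installed = Version(version("auto-gptq"))  # assumes PyPI name "auto-gptq"
except PackageNotFoundError:
    raise RuntimeError("auto-gptq is not installed")

if installed <= Version("0.7.1"):
    raise RuntimeError(
        "GPTQ-LoRA needs AutoGPTQ > 0.7.1; install the nightly with "
        "`pip install git+https://github.com/AutoGPTQ/AutoGPTQ.git`"
    )
```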
Download files
Source Distributions
No source distribution files available for this release.
Built Distribution
Hashes for fms_acceleration_peft-0.1.0-py3-none-any.whl
Algorithm | Hash digest |
---|---|
SHA256 | 7a16ff7b47a0bb4de15dfa74a1b2beaba7f3579132754c9194aebc708a15f6b3 |
MD5 | 88887a80206dd85ec2df2fe6f1785cf2 |
BLAKE2b-256 | 1dcefa294db6e00a72a13606748899c5db39a1995c0a025470899aa6f1b7153b |
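
If you download the wheel manually, the published digests above can be verified locally. Below is a small sketch using Python's standard `hashlib`; the filename and SHA256 value are the ones listed above.

```python
# Verify a downloaded wheel against the published SHA256 digest above.
import hashlib

EXPECTED_SHA256 = "7a16ff7b47a0bb4de15dfa74a1b2beaba7f3579132754c9194aebc708a15f6b3"

with open("fms_acceleration_peft-0.1.0-py3-none-any.whl", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED_SHA256, f"hash mismatch: {digest}"
```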