
FMS Acceleration for PeFT

Project description

FMS Acceleration for Accelerated PeFT Techniques

This package currently only supports LoRA-related techniques, but more are in the pipeline to be added:

Plugins

| Plugin | Description | Depends | Loading | Augmentation | Callbacks |
| --- | --- | --- | --- | --- | --- |
| autogptq | Loads 4bit GPTQ-LoRA with quantized GPTQ as base | AutoGPTQ | | | |
| bnb | Loads 4bit QLoRA with quantized bitsandbytes Linear4 | Huggingface bitsandbytes | | | |
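
To illustrate what these plugins provide, the sketch below shows a rough, standalone equivalent of the bnb path outside the framework: loading a 4-bit bitsandbytes-quantized base model and attaching LoRA adapters with Hugging Face peft. This is not the plugin's internal code; the model name and LoRA hyperparameters are placeholders.

    # Illustrative sketch only: roughly what the bnb plugin sets up for 4-bit QLoRA.
    # Model name and LoRA hyperparameters below are placeholders.
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-v0.1",  # placeholder base model
        quantization_config=bnb_config,
    )
    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)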

Key Points

  • Fixes an upcasting issue (which resulted in a slowdown) in the bnb plugin, originally discovered by the authors of Unsloth. NOTE: we recommend using mixed precision when training with 4-bit quantization for better performance, as per our benchmarks; see the sketch after this list.
  • bnb is properly configured to work with FSDP, following this guide.
  • triton_v2 kernels are not yet properly integrated into Huggingface optimum.
  • triton_v2 kernels are the only 4-bit kernels that work for training.
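
As a concrete example of the mixed-precision recommendation above, bf16 can be enabled through the standard Hugging Face TrainingArguments (a minimal sketch; output_dir and batch size are placeholders):

    # Minimal sketch: enable bf16 mixed precision for 4-bit quantized LoRA training.
    # output_dir and batch size are placeholders.
    from transformers import TrainingArguments

    training_args = TrainingArguments(
        output_dir="./results",          # placeholder
        per_device_train_batch_size=4,   # placeholder
        bf16=True,                       # mixed precision, recommended with 4-bit quantization
    )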

GPTQ-LoRA's AutoGPTQ - Current Implementation vs Legacy Implementation

GPTQ-LoRA depends on an AutoGPTQ backend to run. There are two backend options:

  1. Current Implementation
    • This is an extracted local subset from ModelCloud's refactored fork.
    • It removes redundant code to simplify the build and installation of the plugin.
  2. Legacy Implementation
    • This requires building the package from the official AutoGPTQ repository.

    • To replicate this implementation, follow the installation steps below:

      • The legacy implementation of GPTQ-LoRA uses an external AutoGPTQ package; you must ensure that the specific commit below is installed:
        pip install git+https://github.com/AutoGPTQ/AutoGPTQ.git@ea829c7bbe83561c2b1de26795b6592992373ef7
        
      • To construct the plugin, set use_external_lib: True in the configuration object that is passed to the plugin (otherwise it defaults to using the local AutoGPTQ package):
          peft:
            quantization:
              auto_gptq:
                kernel: triton_v2
                from_quantized: True
                use_external_lib: True
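
For reference, the kernel and from_quantized options loosely mirror how a pre-quantized checkpoint is loaded with the external AutoGPTQ package itself. The sketch below is illustrative only: the checkpoint name is a placeholder, and the assumption that use_triton=True selects the triton_v2 kernels at the pinned commit should be verified.

    # Illustrative sketch of the external AutoGPTQ dependency: loading an already
    # GPTQ-quantized checkpoint. The checkpoint name is a placeholder; whether
    # use_triton=True corresponds to the triton_v2 kernels depends on the pinned commit.
    from auto_gptq import AutoGPTQForCausalLM

    model = AutoGPTQForCausalLM.from_quantized(
        "TheBloke/Llama-2-7B-GPTQ",  # placeholder quantized checkpoint
        device="cuda:0",
        use_triton=True,
    )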
      

Known Issues

  • GPTQ-LoRA is sometimes observed to have NaN gradient norms at the beginning of training, but training otherwise proceeds well.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

fms_acceleration_peft-0.3.4-py3-none-any.whl (75.2 kB)

Uploaded: Python 3

File details

Details for the file fms_acceleration_peft-0.3.4-py3-none-any.whl.

File metadata

File hashes

Hashes for fms_acceleration_peft-0.3.4-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 3f086c22219451801ab0a6dda4f892b13d5db07fa4f07630bd4c9ebaf3d1d13e |
| MD5 | 76d0daf6d00b06ca70c3a6154fd84d1b |
| BLAKE2b-256 | d0410573eda43a748bbec8d5ddb9cf7910fc4fc5b20eeba2f2dacae94f941a9a |

See more details on using hashes here.
