
FMS Acceleration for PeFT

Project description

FMS Acceleration for Accelerated PeFT Techniques

Currently only LoRA-related techniques are supported, with more in the pipeline to be added:

Plugins

Plugin     Description                                             Depends
autogptq   Loads 4bit GPTQ-LoRA with quantized GPTQ as base        AutoGPTQ
bnb        Loads 4bit QLoRA with quantized bitsandbytes Linear4    Huggingface bitsandbytes

Key Points

  • Fixes the upcasting issue (which caused a slowdown) in the bnb plugin, originally discovered by the inventors of Unsloth. NOTE: we recommend using mixed precision when using 4bit quantization for better performance, as per our benchmarks (see the sketch after this list).
  • bnb is properly configured to work with FSDP, following this guide.
  • triton_v2 kernels are not yet properly integrated into huggingface optimum.
  • triton_v2 kernels are the only 4bit kernels that work for training.
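
To illustrate the mixed-precision recommendation above, here is a minimal sketch (not the plugin's internal code) of a 4bit bitsandbytes base model combined with a LoRA adapter and bf16 mixed precision; the model name and LoRA hyperparameters are placeholders.

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
    from peft import LoraConfig, get_peft_model

    # 4bit NF4 quantization with a bf16 compute dtype, so matmuls run in
    # mixed precision rather than being upcast to fp32 (the slowdown noted above).
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-v0.1",  # placeholder base model
        quantization_config=bnb_config,
        torch_dtype=torch.bfloat16,
    )

    # Attach a LoRA adapter (placeholder hyperparameters).
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

    # Enable bf16 mixed precision in the trainer, per the recommendation above.
    training_args = TrainingArguments(output_dir="output", bf16=True)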

GPTQ-LORA's AutoGPTQ - Current Implementation vs Legacy Implementation

GPTQ-LORA depends on an AutoGPTQ backend to run. There are two backend options:

  1. Current Implementation
    • This is an extracted local subset from ModelCloud's refactored fork.
    • It removes redundant code to simplify the build and installation of the plugin.
  2. Legacy Implementation
    • This requires building the package from the official AutoGPTQ repository

    • To replicate this implementation, follow the installation below

      • The legacy implementation of GPTQ-LORA uses an external AutoGPTQ package; you must ensure the specific commit is installed:
        pip install git+https://github.com/AutoGPTQ/AutoGPTQ.git@ea829c7bbe83561c2b1de26795b6592992373ef7
        
      • To construct the plugin, set use_external_lib: True in the configuration object that is passed to the plugin (otherwise it defaults to the local AutoGPTQ package); a usage sketch of the external package follows the example config below:
          peft:
            quantization:
              auto_gptq:
                kernel: triton_v2
                from_quantized: True
                use_external_lib: True
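
For orientation, a rough sketch of what the legacy path amounts to when calling the external AutoGPTQ package directly (this is not the plugin's own loading code; the model path and LoRA hyperparameters are placeholders, and the PEFT helper names may vary between AutoGPTQ commits):

    from auto_gptq import AutoGPTQForCausalLM
    from auto_gptq.utils.peft_utils import GPTQLoraConfig, get_gptq_peft_model

    # Load an already-quantized GPTQ checkpoint (placeholder path) with the
    # triton kernels; the plugin's own config selects triton_v2 via `kernel`.
    model = AutoGPTQForCausalLM.from_quantized(
        "TheBloke/Llama-2-7B-GPTQ",  # placeholder quantized model
        device="cuda:0",
        use_triton=True,
    )

    # Attach a LoRA adapter in training mode using AutoGPTQ's PEFT utilities.
    peft_config = GPTQLoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
    model = get_gptq_peft_model(model, peft_config=peft_config, train_mode=True)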
      

Known Issues

  • GPTQ-LORA has sometimes been observed to have NaN grad norms at the beginning of training, but training proceeds well otherwise.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

fms_acceleration_peft-0.4.2-py3-none-any.whl (76.6 kB)

Uploaded: Python 3

File details

Details for the file fms_acceleration_peft-0.4.2-py3-none-any.whl.

File metadata

File hashes

Hashes for fms_acceleration_peft-0.4.2-py3-none-any.whl
Algorithm Hash digest
SHA256 2a9d6db8acd1b5c08252fe19d3e17876cda0d66135d0fb9ab9b805dc4aac5f99
MD5 6a9dfd17e5643726a7c3a3889d3dd2f8
BLAKE2b-256 670845a380195d5e40e1e50518a17d828a775aef4fdef8b13464bf22a0addf9b

See more details on using hashes here.
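
For example, the SHA256 digest above can be checked against a downloaded wheel using Python's standard library (the path assumes the wheel is in the current directory):

    import hashlib

    expected = "2a9d6db8acd1b5c08252fe19d3e17876cda0d66135d0fb9ab9b805dc4aac5f99"
    with open("fms_acceleration_peft-0.4.2-py3-none-any.whl", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    assert digest == expected, "hash mismatch, do not install this file"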

Provenance

The following attestation bundles were made for fms_acceleration_peft-0.4.2-py3-none-any.whl:

Publisher: build-and-publish.yml on foundation-model-stack/fms-acceleration

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
