
Intel extension for PEFT with PyTorch and DE-NAS

Project description

Deltatuner

Deltatuner is an extension for PEFT that improves LLM fine-tuning speed through multiple optimizations, including leveraging the compact model constructor DE-NAS to construct or modify compact delta layers in a hardware-aware, train-free way, and adding new delta-tuning algorithms.

Introduction

Key Components

  • Supported parameter-efficient fine-tuning algorithms (see the sketch after this list for the SSF idea)
    • LoRA algorithm: freezes the pre-trained model weights and injects trainable rank-decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks.
    • Scaling and Shifting (SSF) algorithm: scales and shifts the deep features of a pre-trained model with far fewer trainable parameters, matching the performance of full fine-tuning.
  • DE-NAS: automatically constructs compact and optimal delta layers in a train-free and hardware-aware mode (more details here)
    • Step 1: generate the search space for the delta layers
    • Step 2: the search algorithm populates the delta layers for the LM
    • Step 3: a train-free score evaluates the LM with the adaptive delta layers
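
To make the SSF idea above concrete, here is a minimal PyTorch sketch of a scale-and-shift module. It illustrates the general technique only, not Deltatuner's actual implementation; the feature dimension and the wrapped linear layer are arbitrary placeholders.

import torch
from torch import nn

class ScaleShift(nn.Module):
    """Toy SSF-style adapter: rescale and shift frozen features with two small trainable vectors."""
    def __init__(self, dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))   # per-channel scale, initialized to identity
        self.shift = nn.Parameter(torch.zeros(dim))  # per-channel shift, initialized to zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale + self.shift

# wrap a frozen pre-trained layer; only scale/shift receive gradients during fine-tuning
frozen = nn.Linear(768, 768)
for p in frozen.parameters():
    p.requires_grad = False
adapter = ScaleShift(768)
out = adapter(frozen(torch.randn(2, 768)))

Since only the two vectors are trainable, the tuned parameters per wrapped layer drop from roughly dim*dim to 2*dim.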

Features

  • Easy to use: ships as a pip-installable package; only a few lines of code need to be added to the original fine-tuning script
  • Auto-tuning: automatically selects the best algorithm and delta structure for the fine-tuned model

Values

  • Save computation power: reduce the compute and time required to fine-tune a model by shrinking the trainable parameter size and memory footprint.
  • Improve accuracy: ensure the same accuracy or better, with no regression.

Get Started

Installation

  • Install the Python package:
pip install e2eAIOK-deltatuner
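
To verify the installation, import the package under the name used in the examples below; the version lookup uses the standard importlib.metadata module (the only assumption is a Python 3.8+ environment).

# quick post-install check
from importlib.metadata import version
import deltatuner  # import name used throughout this page
print(version("e2eAIOK-deltatuner"))  # distribution name from the pip command above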

Fast Fine-tuning on Base models

Below is an example of optimizing the MPT model by adding the few lines marked with "+" to enable the Deltatuner optimizations. Deltatuner uses DE-NAS to turn an LLM with LoRA layers into an LLM with compact LoRA layers, improving the fine-tuning process in terms of peak memory reduction and time speedup.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# load the model from Hugging Face
model_id = "mosaicml/mpt-7b"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# add the LoRA components with peft
config = LoraConfig()
lora_model = get_peft_model(model, config) 

# deltatuner optimizes the model with the best LoRA layer configuration
+ import deltatuner
+ deltatuner_config = {"algo": "lora", "denas": True, "best_model_structure": f"/path/to/best_structure.jsonl"}
+ deltatuner_model = deltatuner.optimize(lora_model, **deltatuner_config)
...
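
The snippet below is a sketch, not taken from the Deltatuner documentation, of how the optimized model could then be fine-tuned with the standard Hugging Face Trainer; the dataset, padding choice, and hyperparameters are placeholders chosen to keep the example self-contained.

# continue from the example above: fine-tune deltatuner_model with the HF Trainer
from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments

tokenizer.pad_token = tokenizer.eos_token  # the MPT tokenizer ships without a pad token

dataset = load_dataset("tatsu-lab/alpaca", split="train[:1%]")  # placeholder dataset

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=deltatuner_model,
    args=TrainingArguments(
        output_dir="mpt-deltatuner-output",   # placeholder output path
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()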

API reference

In the example above, deltatuner.optimize is a Python function that applies the optimization algorithms supported by Deltatuner to the model.

def optimize(model, tokenizer, deltatuning_args=None, **kwargs) -> DeltaTunerModel:
    '''
    Parameters:
        - model – a Hugging Face PreTrainedModel or LoraModel; specifies the model to be optimized
        - tokenizer – the corresponding tokenizer for text preprocessing; optional and defaults to None, but if deltatuning_args.best_model_structure is not specified, the tokenizer cannot be omitted
        - deltatuning_args (optional) – the deltatuner configuration
          - deltatuning_args.denas – whether to use DE-NAS in the optimization (default: True)
          - deltatuning_args.algo – specifies the type of parameter-efficient tuning algorithm (default: "auto")
            - "auto" – if the input model is MPT, the algorithm is ssf; otherwise, the algorithm is lora
            - "lora" – use the LoRA algorithm
            - "ssf" – use the SSF algorithm
          - deltatuning_args.best_model_structure – specifies a pre-searched best delta structure so the model can be initialized directly, without searching with the DE-NAS algorithm
        - kwargs – used to initialize deltatuning_args through key=value pairs, such as algo="lora", denas=True
    Return
        DeltaTunerModel – a wrapper of the model, composed of the original properties/functions together with the advanced properties/functions provided by deltatuner
    '''
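
For reference, here are two ways of calling deltatuner.optimize that follow the signature above, reusing the model, tokenizer, and lora_model objects from the earlier MPT example; the algorithm choices and the structure path are illustrative only.

import deltatuner

# let DE-NAS search the delta structure; the tokenizer is required here because
# no pre-searched best_model_structure is provided
optimized_model = deltatuner.optimize(model, tokenizer, algo="auto", denas=True)

# skip the search by passing a pre-searched structure (the path is a placeholder)
optimized_model = deltatuner.optimize(
    lora_model,
    algo="lora",
    denas=True,
    best_model_structure="/path/to/best_structure.jsonl",
)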

Detailed examples

Please refer to the example page for more use cases of fine-tuning other LLMs with the help of Deltatuner.

Model supported matrix

We have uploaded the searched best delta structures to the conf directory, so users can fine-tune directly with our searched structures by passing DeltaTunerArguments.best_model_structure to the deltatuner.optimize function, as sketched below.
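
A possible way to wire this up is sketched below; the DeltaTunerArguments import location and the conf file name are assumptions and should be checked against the repository.

import deltatuner
from deltatuner import DeltaTunerArguments  # assumed import location

deltatuning_args = DeltaTunerArguments(
    algo="lora",
    denas=True,
    best_model_structure="conf/best_structure/mpt-7b-lora.jsonl",  # placeholder file name
)
deltatuner_model = deltatuner.optimize(lora_model, deltatuning_args=deltatuning_args)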

Causal Language Modeling

Model LoRA SSF
GPT-2
GPT-J
Bloom
OPT
GPT-Neo
Falcon
LLaMA
MPT

