
A tool for fine-tuning large language models (LLMs) more efficiently using the LoRA and QLoRA methods.

Project description

m-LoRA: Efficient LLM Model Fine-Tune via Multi-LoRA Optimization

m-LoRA (a.k.a. Multi-LoRA Fine-Tune) is an open-source framework for efficiently fine-tuning Large Language Models (LLMs) with multiple LoRA/QLoRA adapters. Key features of m-LoRA include:

  • Efficient LoRA/QLoRA: Optimizes the fine-tuning process, significantly reducing GPU memory usage by sharing a single frozen base model.

  • Multiple LoRA Adapters: Support for concurrent fine-tuning of multiple LoRA/QLoRA adapters.

Contents

Updates

  • Support for multiple LLaMA-2 fine-tuning jobs
  • Support for multiple ChatGLM fine-tuning jobs
  • Support for multiple LLaMA fine-tuning jobs
  • On the way: Baichuan

Models

Model      Model size
ChatGLM    6B
ChatGLM2   6B/12B
ChatGLM3   6B
LLaMA      7B/13B/33B/65B
LLaMA-2    7B/13B/70B
Baichuan   7B/13B
Baichuan2  7B/13B

Example: use our system to fine-tune LLaMA-2 with fewer resources: https://www.kaggle.com/code/rraydata/multi-lora-example/notebook

Overview

m-LoRA is a high-throughput LLM fine-tuning framework based on LoRA and QLoRA, compatible with HuggingFace-Transformers LLaMA Models and ChatGLM Models.

This picture shows the basic principle of LoRA and Multi-LoRA.
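
As a minimal illustrative sketch of this principle in plain PyTorch (our own example, not the m-LoRA implementation): a single frozen base weight is shared, and each LoRA adapter trains only its own small low-rank matrices A and B whose scaled product is added to the base output. The class and parameter names below (MultiLoRALinear, rank, alpha) are ours.

# Illustrative sketch of the LoRA / Multi-LoRA idea in plain PyTorch
# (not the m-LoRA code): one frozen base weight, several small adapters.
import torch
import torch.nn as nn

class MultiLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, num_adapters=2, rank=8, alpha=16.0):
        super().__init__()
        # Shared, frozen pre-trained weight (loaded from the base model in practice).
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        self.scaling = alpha / rank
        # Each adapter trains only its own low-rank pair (A, B).
        self.lora_A = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, in_features) * 0.01) for _ in range(num_adapters)]
        )
        self.lora_B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, rank)) for _ in range(num_adapters)]
        )

    def forward(self, x, adapter_id):
        # y = x W^T + scaling * (x A^T) B^T ; only A and B receive gradients.
        base_out = self.base(x)
        lora_out = (x @ self.lora_A[adapter_id].t()) @ self.lora_B[adapter_id].t()
        return base_out + self.scaling * lora_out

layer = MultiLoRALinear(4096, 4096, num_adapters=2)
x = torch.randn(1, 16, 4096)
y0 = layer(x, adapter_id=0)  # fine-tuning job 0
y1 = layer(x, adapter_id=1)  # fine-tuning job 1 shares the same frozen base

Because the large base weight is stored only once and stays frozen, adding further adapters costs comparatively little extra GPU memory.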

The system overview of m-LoRA is as follows.

m-LoRA requires PyTorch and NVIDIA CUDA compatible GPUs.

Main Contribution

  • Introduces the Multi-LoRA method, which enables multiple fine-tuning tasks to share the pre-trained weights of a large language model;
  • Proposes a task scheduling algorithm that improves the overall throughput of the training process and reduces total training latency;
  • Builds on the above to implement m-LoRA, a high-throughput large language model fine-tuning framework based on LoRA and QLoRA;
  • Evaluates m-LoRA against existing systems, confirming that it uses computing resources more effectively, improving training throughput and reducing training latency.

Experiment Results

Environment: NVIDIA RTX A6000 with Intel Xeon Silver 4314 on Ubuntu 22.04.3

Baseline: We utilized the widely adopted Alpaca-LoRA as a foundation. On a single GPU, we independently ran multiple Alpaca-LoRA processes in parallel (marked as Baseline@Alpaca-Parallel) and sequentially (marked as Baseline@Alpaca-Seq), forming two baseline methods for the experiments. We tested this on an A100, and the rest of the results are based on the same GPU configuration.

Training Latency and Throughput

Method                    Latency  Throughput
Baseline@Alpaca-Seq       10.51h   608.41 token/s
Baseline@Alpaca-Parallel  9.85h    649.30 token/s
m-LoRA                    9.46h    674.58 token/s

We conducted four identical fine-tuning jobs with the same dataset and the same hyper-parameters, incorporating the two baselines and m-LoRA. During the experiments, we collected the completion time of each task in the baseline methods and took the time of the slowest task as the Training Latency. As shown in the table, m-LoRA exhibits lower Training Latency than both baseline methods: m-LoRA is 9.99% faster than Baseline@Alpaca-Seq and 3.92% faster than Baseline@Alpaca-Parallel.
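
For reference, these percentages are the relative reduction in training latency. A quick recomputation from the table values, sketched below, reproduces them approximately; small differences (e.g. ~3.96% vs. the reported 3.92%) arise because the table rounds latencies to two decimals.

# Relative latency reduction: (t_baseline - t_mlora) / t_baseline
def latency_speedup(t_baseline_h, t_mlora_h):
    return (t_baseline_h - t_mlora_h) / t_baseline_h

print(f"vs Baseline@Alpaca-Seq:      {latency_speedup(10.51, 9.46):.2%}")  # ~9.99%
print(f"vs Baseline@Alpaca-Parallel: {latency_speedup(9.85, 9.46):.2%}")   # ~3.96%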

Video Memory Usage

We conducted several fine-tuning jobs with the same dataset and batch_size ∈ {2, 4, 6, 8}, incorporating Baseline@Alpaca-Parallel and m-LoRA.

Baseline@Alpaca-Parallel triggered an out-of-memory (OOM) error after 3 parallel tasks at batch size = 8, while m-LoRA can handle twice that many tasks.

Batching Strategies

Method               Training Latency  Peak Memory Usage  Average GPU Utilization  Training Throughput
Baseline@Alpaca-Seq  27.73h            10.68GB            79.39%                   653.35 token/s
m-LoRA@M1            36.82h            23.82GB            96.52%                   672.54 token/s
m-LoRA@M2            39.14h            23.86GB            96.41%                   671.28 token/s
m-LoRA@M3            22.97h            23.85GB            95.22%                   674.41 token/s

We conducted four fine-tuning jobs with different datasets but the same hyper-parameters, incorporating Baseline@Alpaca-Seq and m-LoRA.

During the experiments, we collected the following metrics:

  • Training Latency = Job completion time
  • Throughput = Number of tokens processed in the forward pass / training latency
  • Memory Usage = Peak video memory usage
  • GPU Utilization = Average GPU utilization

All metrics are computed per job. M1, M2, and M3 denote the three batching strategies of m-LoRA: Optimal-Fit, Trivial, and Fast-Fit. BASELINE denotes Baseline@Alpaca-Seq.

The Optimal-Fit strategy performs the best across all four metrics, while the other two strategies also outperform the baseline on every metric except training latency.
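
The exact Optimal-Fit and Fast-Fit algorithms are not spelled out here. Purely as an illustration of what a fit-based batching strategy can look like (our own hypothetical sketch, not m-LoRA's implementation), the code below compares a trivial fixed-slice batcher with a greedy packer that fills a shared token budget from all adapters' queues; the function names and the token-budget heuristic are assumptions.

# Hypothetical illustration of trivial vs. fit-based batching across multiple
# LoRA fine-tuning jobs that share one base model (not m-LoRA's algorithm).
from typing import Dict, List, Tuple

def trivial_batch(queues: Dict[str, List[int]], per_job: int) -> List[Tuple[str, int]]:
    # Take a fixed number of sequences from every job, regardless of length.
    return [(job, seq_len) for job, lengths in queues.items() for seq_len in lengths[:per_job]]

def fit_batch(queues: Dict[str, List[int]], token_budget: int) -> List[Tuple[str, int]]:
    # Greedily pack sequences (longest first) until the token budget is used up,
    # which reduces padding waste and keeps GPU utilization high.
    candidates = sorted(
        ((job, seq_len) for job, lengths in queues.items() for seq_len in lengths),
        key=lambda item: item[1],
        reverse=True,
    )
    batch, used = [], 0
    for job, seq_len in candidates:
        if used + seq_len <= token_budget:
            batch.append((job, seq_len))
            used += seq_len
    return batch

# Pending sequence lengths for three concurrent adapter jobs.
queues = {"adapter_0": [512, 384, 256], "adapter_1": [768, 128], "adapter_2": [640, 64]}
print(trivial_batch(queues, per_job=2))
print(fit_batch(queues, token_budget=2048))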

Use Cases

  • Domain-Specific Fine-Tuning: adapting a single model with various sets of adapter parameters specifically for one domain.
  • Cross-Domain Fine-Tuning: leveraging the shared base model to fine-tune multiple models, each intended for a different domain.

Quickstart

First, clone this repository and install the dependencies:

# Clone Repository
git clone https://github.com/TUDB-Labs/multi-lora-fine-tune
cd multi-lora-fine-tune
# Install requirements
pip install -r requirements.txt

The mlora.py script is a starting point for fine-tuning on various datasets. A basic command for fine-tuning a base model on the Alpaca Cleaned dataset:

python mlora.py \
  --base_model yahma/llama-7b-hf \
  --config ./config/alpaca.json \
  --load_8bit

You can check the template fine-tuning configurations in the template folder.

For further detailed usage information, please use the --help option:

python mlora.py --help

Demo on Colab

You can run fine-tuning on Colab by following this example: Google Colab Example. Make sure to switch the runtime environment to GPU before running it.

Installation

You can also install m-LoRA into your environment:

# Optional but recommended
conda create -n mlora_env python=3.8
conda activate mlora_env
# Install requirements
pip install -r requirements.txt

After installation, you can use m-LoRA directly in your code:

import mlora

Contributing

We welcome contributions to improve this repository! Please review the contribution guidelines before submitting pull requests or issues.

  • Fork the repository.
  • Create a new branch for your feature or fix.
  • Submit a pull request with a detailed explanation of your changes.

Citation

Please cite this repository if you use its code.

@misc{m-LoRA,
  author = {Zhengmao, Ye\textsuperscript{*} and Dengchun, Li\textsuperscript{*} and Jingqi, Tian and Tingfeng, Lan and Yanbo, Liang and Yexi, Jiang and Jie, Zuo and Hui, Lu and Lei, Duan and Mingjie, Tang},
  title = {m-LoRA: Efficient LLM Model Fine-tune and Inference via Multi-Lora Optimization},
  year = {2023},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/TUDB-Labs/multi-lora-fine-tune}},
  note={\textsuperscript{*}: these authors contributed equally to this work.}
}

Copyright

Copyright © 2023 All Rights Reserved.

This project is licensed under the Apache 2.0 License.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

