
A toolkit for efficiently fine-tuning LLMs

Project description




English | 简体中文

👋 join us on Twitter, Discord and WeChat

🎉 News

  • [2023/10] Support the MSAgent-Bench dataset; the fine-tuned LLMs can be used with Lagent!
  • [2023/10] Optimize the data processing to accommodate the system context. More information can be found in the Docs!
  • [2023/09] Support InternLM-20B models!
  • [2023/09] Support Baichuan2 models!
  • [2023/08] XTuner is released, with multiple fine-tuned adapters on HuggingFace.

📖 Introduction

XTuner is a toolkit for efficiently fine-tuning LLMs, developed by the MMRazor and MMDeploy teams.

  • Efficiency: Support LLM fine-tuning on consumer-grade GPUs. The minimum GPU memory required for 7B LLM fine-tuning is only 8GB, meaning users can fine-tune custom LLMs on nearly any GPU (even free resources, e.g., Colab).
  • Versatile: Support various LLMs (InternLM, Llama2, ChatGLM2, Qwen, Baichuan2, ...), datasets (MOSS_003_SFT, Alpaca, WizardLM, oasst1, Open-Platypus, Code Alpaca, Colorist, ...) and algorithms (QLoRA, LoRA), allowing users to choose the most suitable solution for their requirements.
  • Compatibility: Compatible with DeepSpeed 🚀 and HuggingFace 🤗 training pipeline, enabling effortless integration and utilization.

🌟 Demos

  • Ready-to-use models and datasets from XTuner API (Open In Colab)

  • QLoRA Fine-tune (Open In Colab)

  • Plugin-based Chat (Open In Colab)

    Examples of Plugin-based Chat 🔥🔥🔥

🔥 Supports

Models · SFT Datasets · Data Pipelines · Algorithms

🛠️ Quick Start

Installation

  • It is recommended to build a Python 3.10 virtual environment using conda

    conda create --name xtuner-env python=3.10 -y
    conda activate xtuner-env
    
  • Install XTuner via pip

    pip install xtuner
    

    or with DeepSpeed integration

    pip install 'xtuner[deepspeed]'
    
  • Install XTuner from source

    git clone https://github.com/InternLM/xtuner.git
    cd xtuner
    pip install -e '.[all]'
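
    Regardless of the installation method, a quick sanity check can confirm that everything is in place. This is a minimal sketch: it only verifies that the xtuner package imports and that PyTorch can see a GPU; the version lookup is guarded because the attribute may vary between releases.

    # Post-install sanity check: confirm XTuner imports and a GPU is visible.
    import torch
    import xtuner

    print("XTuner version:", getattr(xtuner, "__version__", "unknown"))
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        # QLoRA fine-tuning of a 7B model needs roughly 8GB of GPU memory (see Introduction).
        print("GPU:", torch.cuda.get_device_name(0))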
    

Fine-tune (Open In Colab)

XTuner supports efficient fine-tuning (e.g., QLoRA) of LLMs. Dataset preparation guides can be found in dataset_prepare.md.

  • Step 0, prepare the config. XTuner provides many ready-to-use configs, and we can view all of them by

    xtuner list-cfg
    

    Or, if the provided configs cannot meet your requirements, copy a provided config to a specified directory and make modifications by

    xtuner copy-cfg ${CONFIG_NAME} ${SAVE_PATH}
    
  • Step 1, start fine-tuning.

    xtuner train ${CONFIG_NAME_OR_PATH}
    

    For example, we can start QLoRA fine-tuning of InternLM-7B on the oasst1 dataset by

    # On a single GPU
    xtuner train internlm_7b_qlora_oasst1_e3
    # On multiple GPUs
    (DIST) NPROC_PER_NODE=${GPU_NUM} xtuner train internlm_7b_qlora_oasst1_e3
    (SLURM) srun ${SRUN_ARGS} xtuner train internlm_7b_qlora_oasst1_e3 --launcher slurm
    

    For more examples, please see finetune.md.

  • Step 2, convert the saved PTH model (if using DeepSpeed, it will be a directory) to a HuggingFace model by the command below (a minimal loading sketch follows it).

    xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH} ${SAVE_PATH}
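
    For QLoRA / LoRA configs, the converted result is a standard HuggingFace PEFT adapter, so it can be smoke-tested with the transformers and peft libraries before moving on to chat or deployment. A minimal sketch, assuming an InternLM-7B base model; the adapter path is a placeholder for the ${SAVE_PATH} above, and no XTuner prompt template is applied here.

    # Minimal sketch: load the base LLM plus the adapter produced by
    # `xtuner convert pth_to_hf`, then generate once as a smoke test.
    # The adapter path is a placeholder for ${SAVE_PATH}.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = "internlm/internlm-7b"   # base LLM named in the config
    adapter = "./hf_adapter"        # placeholder for ${SAVE_PATH}

    tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True, device_map="auto")
    model = PeftModel.from_pretrained(model, adapter)   # attach the fine-tuned adapter
    model.eval()

    device = next(model.parameters()).device
    inputs = tokenizer("Give three tips for staying healthy.", return_tensors="pt").to(device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))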
    

Chat (Open In Colab)

XTuner provides tools to chat with pretrained / fine-tuned LLMs.

xtuner chat ${NAME_OR_PATH_TO_LLM} --adapter ${NAME_OR_PATH_TO_ADAPTER} [optional arguments]

For example, we can start the chat with

InternLM-7B with the adapter trained on Alpaca-enzh:

xtuner chat internlm/internlm-7b --adapter xtuner/internlm-7b-qlora-alpaca-enzh --prompt-template internlm_chat --system-template alpaca

Llama2-7b with the adapter trained on MOSS-003-SFT:

xtuner chat meta-llama/Llama-2-7b-hf --adapter xtuner/Llama-2-7b-qlora-moss-003-sft --bot-name Llama2 --prompt-template moss_sft --system-template moss_sft --with-plugins calculate solve search --command-stop-word "<eoc>" --answer-stop-word "<eom>" --no-streamer

For more examples, please see chat.md.

Deployment

  • Step 0, merge the HuggingFace adapter into the pretrained LLM by the command below (a sketch for sanity-checking the merged model follows at the end of this section).

    xtuner convert merge \
        ${NAME_OR_PATH_TO_LLM} \
        ${NAME_OR_PATH_TO_ADAPTER} \
        ${SAVE_PATH} \
        --max-shard-size 2GB
    
  • Step 1, deploy the fine-tuned LLM with any other framework, such as LMDeploy 🚀.

    pip install lmdeploy
    python -m lmdeploy.pytorch.chat ${NAME_OR_PATH_TO_LLM} \
        --max_new_tokens 256 \
        --temperature 0.8 \
        --top_p 0.95 \
        --seed 0
    

    🔥 Seeking efficient inference with less GPU memory? Try 4-bit quantization from LMDeploy! For more details, see here.

    🎯 We are working closely with LMDeploy to implement the deployment of plugin-based chat!
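
    Because the merged output of Step 0 is a standalone HuggingFace model, it can also be sanity-checked with plain transformers before (or instead of) serving it with LMDeploy. A minimal sketch; the path is a placeholder for the ${SAVE_PATH} used in the merge command, and the sampling parameters simply mirror the LMDeploy example above.

    # Minimal sketch: load the merged model produced by `xtuner convert merge`
    # and generate once as a pre-deployment check. The path is a placeholder.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    merged = "./merged_model"   # placeholder for ${SAVE_PATH}

    tokenizer = AutoTokenizer.from_pretrained(merged, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(merged, trust_remote_code=True, device_map="auto")
    model.eval()

    inputs = tokenizer("Hello! Who are you?", return_tensors="pt").to(next(model.parameters()).device)
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8, top_p=0.95)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))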

Evaluation

  • We recommend using OpenCompass, a comprehensive and systematic LLM evaluation library, which currently supports 50+ datasets with about 300,000 questions.

🤝 Contributing

We appreciate all contributions to XTuner. Please refer to CONTRIBUTING.md for the contributing guidelines.

🎖️ Acknowledgement

License

This project is released under the Apache License 2.0. Please also adhere to the licenses of the models and datasets being used.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

xtuner-0.1.5.tar.gz (136.3 kB)

Uploaded Source

Built Distribution

xtuner-0.1.5-py3-none-any.whl (744.5 kB)

Uploaded Python 3

File details

Details for the file xtuner-0.1.5.tar.gz.

File metadata

  • Download URL: xtuner-0.1.5.tar.gz
  • Upload date:
  • Size: 136.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.18

File hashes

Hashes for xtuner-0.1.5.tar.gz
  • SHA256: 20edf67fdbef145b87a1c10924de6bcb753fa91784e6ce4daabf39bc28025601
  • MD5: 942102cabb6f27c624eb83f1e97b3c47
  • BLAKE2b-256: cf68fec5efa585833f6e5999e0b016726da88d08377ba6b473b195261c8d265e

See more details on using hashes here.

File details

Details for the file xtuner-0.1.5-py3-none-any.whl.

File metadata

  • Download URL: xtuner-0.1.5-py3-none-any.whl
  • Upload date:
  • Size: 744.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.18

File hashes

Hashes for xtuner-0.1.5-py3-none-any.whl
  • SHA256: 6e2cfb63d2586e8bd3ca7f4159bc999bb9d98156dbf65575b6917e66593945d1
  • MD5: 83c53d227f0a8326079d81a5f758ba0b
  • BLAKE2b-256: 3657f8c7ccdc068ccb835d28139e91f71bd3ac230e7e74c8a5d5f82f6b3de0c1

See more details on using hashes here.
