
An efficient, flexible and full-featured toolkit for fine-tuning large models







🎉 News

  • [2024/01] Support InternLM2 models! The latest VLM models, LLaVA-InternLM2-7B / 20B, are released with impressive performance!
  • [2024/01] Support DeepSeek-MoE models! 20GB GPU memory is enough for QLoRA fine-tuning, and 4x80GB for full-parameter fine-tuning. Click here for details!
  • [2023/12] 🔥 Support multi-modal VLM pretraining and fine-tuning with LLaVA-v1.5 architecture! Click here for details!
  • [2023/12] 🔥 Support Mixtral 8x7B models! Click here for details!
  • [2023/11] Support ChatGLM3-6B model!
  • [2023/10] Support the MSAgent-Bench dataset, and the fine-tuned LLMs can be used with Lagent!
  • [2023/10] Optimize the data processing to accommodate system context. More information can be found on Docs!
  • [2023/09] Support InternLM-20B models!
  • [2023/09] Support Baichuan2 models!
  • [2023/08] XTuner is released, with multiple fine-tuned adapters on HuggingFace.

📖 Introduction

XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large models.

Efficient

  • Support LLM and VLM pre-training / fine-tuning on almost all GPUs. XTuner is capable of fine-tuning a 7B LLM on a single 8GB GPU, as well as multi-node fine-tuning of models exceeding 70B.
  • Automatically dispatch high-performance operators such as FlashAttention and Triton kernels to increase training throughput.
  • Compatible with DeepSpeed 🚀, easily utilizing a variety of ZeRO optimization techniques.

Flexible

  • Support various LLMs (InternLM, Mixtral-8x7B, Llama2, ChatGLM, Qwen, Baichuan, ...).
  • Support VLM (LLaVA). The performance of LLaVA-InternLM2-20B is outstanding.
  • Well-designed data pipeline, accommodating datasets in any format, including but not limited to open-source and custom formats (see the sketch after this list).
  • Support various training algorithms (QLoRA, LoRA, full-parameter fine-tuning), allowing users to choose the most suitable solution for their requirements.
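
As an illustration of how a custom format can be adapted, the sketch below converts one raw record into a single-turn conversation dict. The raw field names (instruction, response) and the target schema are assumptions for demonstration only; check XTuner's dataset documentation for the exact schema your config expects.

    # Illustrative sketch of a map function for a custom single-turn dataset.
    # The raw field names and the target schema are assumptions for demonstration.
    def custom_map_fn(example):
        return {
            'conversation': [{
                'input': example['instruction'],   # hypothetical raw field
                'output': example['response'],     # hypothetical raw field
            }]
        }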

Full-featured

  • Support continuous pre-training, instruction fine-tuning, and agent fine-tuning.
  • Support chatting with large models using pre-defined templates.
  • The output models can seamlessly integrate with deployment and serving toolkits (LMDeploy), and large-scale evaluation toolkits (OpenCompass, VLMEvalKit).

🌟 Demos

  • Ready-to-use models and datasets from the XTuner API (Colab notebook)

  • QLoRA fine-tuning (Colab notebook)

  • Plugin-based chat (Colab notebook)

    Examples of plugin-based chat 🔥🔥🔥

🔥 Supports

Supported models, SFT datasets, data pipelines, and algorithms (see the full tables in the GitHub repository).

🛠️ Quick Start

Installation

  • It is recommended to build a Python 3.10 virtual environment using conda

    conda create --name xtuner-env python=3.10 -y
    conda activate xtuner-env
    
  • Install XTuner via pip

    pip install -U xtuner
    

    or with DeepSpeed integration

    pip install -U 'xtuner[deepspeed]'
    
  • Install XTuner from source

    git clone https://github.com/InternLM/xtuner.git
    cd xtuner
    pip install -e '.[all]'
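
    After installation, a quick sanity check can confirm that the package is importable (a minimal Python snippet using only the standard library):

    # Verify that xtuner is importable and print the installed version.
    from importlib.metadata import version
    import xtuner  # noqa: F401 -- raises ImportError if the install is broken
    print(version('xtuner'))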
    

Fine-tune

XTuner supports efficient fine-tuning (e.g., QLoRA) of LLMs. Dataset preparation guides can be found in dataset_prepare.md.

  • Step 0, prepare the config. XTuner provides many ready-to-use configs, and we can view all of them with

    xtuner list-cfg
    

    Or, if the provided configs do not meet your requirements, copy one to a directory of your choice and modify it by

    xtuner copy-cfg ${CONFIG_NAME} ${SAVE_PATH}
    vi ${SAVE_PATH}/${CONFIG_NAME}_copy.py
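
    The copied config is a plain Python file. A minimal sketch of the kind of edits typically made near the top of it is shown below; the variable names and values are illustrative and may differ between configs.

    # Illustrative settings near the top of a copied config (names may differ
    # between configs; adjust the ones your config actually defines).
    pretrained_model_name_or_path = 'internlm/internlm2-chat-7b'
    data_path = 'timdettmers/openassistant-guanaco'  # HF dataset ID or local path
    max_length = 2048
    batch_size = 1      # per-device batch size
    max_epochs = 3
    lr = 2e-4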
    
  • Step 1, start fine-tuning.

    xtuner train ${CONFIG_NAME_OR_PATH}
    

    For example, we can start QLoRA fine-tuning of InternLM2-Chat-7B on the oasst1 dataset with

    # On a single GPU
    xtuner train internlm2_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2
    # On multiple GPUs
    (DIST) NPROC_PER_NODE=${GPU_NUM} xtuner train internlm2_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2
    (SLURM) srun ${SRUN_ARGS} xtuner train internlm2_chat_7b_qlora_oasst1_e3 --launcher slurm --deepspeed deepspeed_zero2
    
    • --deepspeed means using DeepSpeed 🚀 to optimize the training. XTuner comes with several integrated strategies including ZeRO-1, ZeRO-2, and ZeRO-3. If you wish to disable this feature, simply remove this argument.

    • For more examples, please see finetune.md.

  • Step 2, convert the saved PTH model (if using DeepSpeed, it will be a directory) to a HuggingFace model, by

    xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH} ${SAVE_PATH}
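
    For LoRA/QLoRA configs, the converted output is a HuggingFace adapter directory. A minimal sketch of loading it for inference with transformers and peft is shown below; the base model name and adapter path are placeholders.

    # Minimal sketch: attach the converted adapter to its base model.
    # The base model name and adapter path below are placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        'internlm/internlm2-chat-7b', trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(
        'internlm/internlm2-chat-7b', trust_remote_code=True)
    model = PeftModel.from_pretrained(base, './path/to/converted_adapter')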
    

Chat

XTuner provides tools to chat with pretrained / fine-tuned LLMs.

xtuner chat ${NAME_OR_PATH_TO_LLM} --adapter ${NAME_OR_PATH_TO_ADAPTER} [optional arguments]

For example, we can start a chat with:

InternLM2-Chat-7B, using the adapter trained on the oasst1 dataset:

xtuner chat internlm/internlm2-chat-7b --adapter xtuner/internlm2-chat-7b-qlora-oasst1 --prompt-template internlm2_chat

LLaVA-InternLM2-7B:

xtuner chat internlm/internlm2-chat-7b --visual-encoder openai/clip-vit-large-patch14-336 --llava xtuner/llava-internlm2-7b --prompt-template internlm2_chat --image $IMAGE_PATH

For more examples, please see chat.md.

Deployment

  • Step 0, merge the HuggingFace adapter into the pretrained LLM, by

    xtuner convert merge \
        ${NAME_OR_PATH_TO_LLM} \
        ${NAME_OR_PATH_TO_ADAPTER} \
        ${SAVE_PATH} \
        --max-shard-size 2GB
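
    After merging, ${SAVE_PATH} contains a standard HuggingFace checkpoint, so it loads directly with transformers and no adapter handling is needed (a minimal sketch; the path is a placeholder):

    # Minimal sketch: the merged output is a plain HuggingFace checkpoint.
    # The path below is a placeholder.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained(
        './path/to/merged_model', trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(
        './path/to/merged_model', trust_remote_code=True)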
    
  • Step 1, deploy the fine-tuned LLM with any framework, such as LMDeploy 🚀.

    pip install lmdeploy
    python -m lmdeploy.pytorch.chat ${NAME_OR_PATH_TO_LLM} \
        --max_new_tokens 256 \
        --temperature 0.8 \
        --top_p 0.95 \
        --seed 0
    

    🔥 Seeking efficient inference with less GPU memory? Try 4-bit quantization from LMDeploy! For more details, see here.
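
    Recent LMDeploy releases also expose a Python pipeline API. The sketch below is illustrative and assumes a reasonably new LMDeploy version; the model path is a placeholder, and the exact interface may differ across releases.

    # Illustrative sketch of LMDeploy's Python pipeline API (recent releases).
    # The model path is a placeholder.
    from lmdeploy import pipeline

    pipe = pipeline('./path/to/merged_model')
    responses = pipe(['Hello! Please introduce yourself.'])
    print(responses[0].text)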

Evaluation

  • We recommend using OpenCompass, a comprehensive and systematic LLM evaluation library, which currently supports 50+ datasets with about 300,000 questions.

🤝 Contributing

We appreciate all contributions to XTuner. Please refer to CONTRIBUTING.md for the contributing guidelines.

🎖️ Acknowledgement

🖊️ Citation

@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished = {\url{https://github.com/InternLM/xtuner}},
    year={2023}
}

License

This project is released under the Apache License 2.0. Please also adhere to the Licenses of models and datasets being used.
