Swift: Scalable lightWeight Infrastructure for Fine-Tuning

Project description

SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning)




Introduction

SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning) is an extensible framework designed to facilitate lightweight model fine-tuning and inference. It integrates implementations of various efficient fine-tuning methods, embracing approaches that are parameter-efficient, memory-efficient, and time-efficient. SWIFT integrates seamlessly into the ModelScope ecosystem and offers the capability to fine-tune various models, with a primary emphasis on LLMs and vision models. Additionally, SWIFT is fully compatible with PEFT, enabling users to leverage the familiar PEFT interface to fine-tune ModelScope models.

Currently supported approaches (and counting):

  1. LoRA: LoRA: Low-Rank Adaptation of Large Language Models
  2. Adapter: Parameter-Efficient Transfer Learning for NLP
  3. Prompt Tuning: Visual Prompt Tuning
  4. Side: Side-Tuning: A Baseline for Network Adaptation via Additive Side Networks
  5. ResTuning-Bypass
  6. All tuners offered by PEFT

Key features:

  1. By integrating the ModelScope library, models can be readily obtained via a model-id.
  2. Tuners provided by SWIFT can be combined on one model to explore multiple tuners for the best result.
  3. Support calling activate_adapter, deactivate_adapter, or set_active_adapters to activate/deactivate tuners. Users can run inference with one model and multiple tuners independently in different threads, as sketched below.
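
A minimal sketch of switching tuners at inference time. The method names come from the feature list above; the toy model, adapter names, and exact call signatures are assumptions for illustration:

import torch.nn as nn

from swift import Swift, LoRAConfig

# hypothetical toy model whose submodule names are matched by target_modules
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.query = nn.Linear(16, 16)
        self.key = nn.Linear(16, 16)

    def forward(self, x):
        return self.key(self.query(x))

model = Swift.prepare_model(Toy(), {
    'lora-a': LoRAConfig(target_modules=['query']),
    'lora-b': LoRAConfig(target_modules=['key']),
})

# assumed signatures: each call takes the adapter name used in prepare_model
model.activate_adapter('lora-a')       # switch one tuner on
model.deactivate_adapter('lora-b')     # switch another off
model.set_active_adapters(['lora-a'])  # or set the whole active set at once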

Users can check the documentation of SWIFT for detailed tutorials.

LLM SFT Example

code link

  1. supported SFT methods: LoRA, QLoRA, full (full-parameter fine-tuning)
  2. supported models:
    1. qwen series: qwen-7b, qwen-7b-chat
    2. qwen-vl series: qwen-vl, qwen-vl-chat
    3. baichuan series: baichuan-7b, baichuan-13b, baichuan-13b-chat, baichuan2-7b, baichuan2-7b-chat, baichuan2-13b, baichuan2-13b-chat
    4. chatglm2 series: chatglm2-6b, chatglm2-6b-32k
    5. llama series: llama2-7b, llama2-7b-chat, llama2-13b, llama2-13b-chat, llama2-70b, llama2-70b-chat
    6. openbuddy-llama series: openbuddy-llama2-13b, openbuddy-llama-65b, openbuddy-llama2-70b
    7. internlm series: internlm-7b, internlm-7b-chat, internlm-7b-chat-8k, internlm-20b, internlm-20b-chat
    8. other: polylm-13b, seqgpt-560m
  3. supported features: quantization, DDP, model parallelism (device map), gradient checkpointing, gradient accumulation, pushing to the ModelScope hub, custom datasets, multimodal and agent SFT, multi-round chat, ...
  4. supported datasets:
    1. NLP: alpaca-en(gpt4), alpaca-zh(gpt4), finance-en, multi-alpaca-all, code-en, instinwild-en, instinwild-zh, cot-en, cot-zh, firefly-all-zh, poetry-zh, instruct-en, gpt4all-en, cmnli-zh, jd-zh, dureader-robust-zh, medical-en, medical-zh, medical-mini-zh, sharegpt-en, sharegpt-zh, code-python-zh, advertise-gen
    2. agent: damo-agent-zh, damo-agent-mini-zh
    3. multi-modal: coco-en
    4. other: cls-fudan-news-zh, ner-jave-zh
  5. supported templates: chatml(qwen), baichuan, chatglm2, llama, openbuddy-llama, default, default-generation

Installation

SWIFT runs in a Python environment. Please make sure your Python version is 3.8 or higher.

  • Install SWIFT with pip:
pip install ms-swift -U
  • Install SWIFT from source (for running the sft/infer examples):
git clone https://github.com/modelscope/swift.git
cd swift
pip install -e .

SWIFT requires torch>=1.13.
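
A quick sanity check of the requirements above, as a minimal sketch (packaging is assumed to be installed; it ships with most pip environments):

import sys

import torch
from packaging import version

# verify the stated requirements: Python 3.8+ and torch >= 1.13
assert sys.version_info >= (3, 8), f"Python 3.8+ required, found {sys.version}"
assert version.parse(torch.__version__) >= version.parse("1.13"), torch.__version__
print("environment OK:", torch.__version__)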

  • Use SWIFT in our Docker image:
docker pull registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.8.0-py38-torch2.0.1-tf2.13.0-1.9.1

Getting Started

SWIFT supports multiple tuners, as well as tuners provided by PEFT. To use these tuners, simply call:

from swift import Swift, LoRAConfig
config = LoRAConfig(...)
model = Swift.prepare_model(model, config, extra_state_keys=['...'])

The code snippet above randomly initializes the tuner. The input model is an instance of torch.nn.Module, and config is an instance of a SwiftConfig or PeftConfig subclass. extra_state_keys lists extra module weights (such as a linear head) to be trained and stored in the output dir, as in the sketch below.
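
For instance, a minimal sketch with a hypothetical toy model, where a linear head is kept trainable alongside the tuner via extra_state_keys (the state-dict key names here are an assumption for illustration):

import torch.nn as nn

from swift import Swift, LoRAConfig

# hypothetical toy model: one attention-style projection plus a linear head
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.query = nn.Linear(16, 16)
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        return self.head(self.query(x))

model = Swift.prepare_model(
    Toy(),
    LoRAConfig(target_modules=['query']),
    extra_state_keys=['head.weight', 'head.bias'],  # train and store the head too
)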

You may combine multiple tuners by:

from swift import Swift, LoRAConfig, PromptConfig
model = Swift.prepare_model(model, {'lora': LoRAConfig(...), 'prompt': PromptConfig(...)})

Call save_pretrained and push_to_hub after finetuning:

from swift import push_to_hub
model.save_pretrained('some-output-folder')
push_to_hub('my-group/some-repo-id-modelscope', 'some-output-folder', token='some-ms-token')

Assume my-group/some-repo-id-modelscope is the model-id in the hub, and some-ms-token is the token for uploading.

Use the model-id to perform inference later:

from swift import Swift
model = Swift.from_pretrained(model, 'my-group/some-repo-id-modelscope')

Here is a runnable example:

import os
import tempfile

# Please install modelscope by `pip install modelscope`
from modelscope import Model

from swift import LoRAConfig, SwiftModel, Swift, push_to_hub

tmp_dir = tempfile.TemporaryDirectory().name
if not os.path.exists(tmp_dir):
    os.makedirs(tmp_dir)


model = Model.from_pretrained('modelscope/Llama-2-7b-ms', device_map='auto')
lora_config = LoRAConfig(target_modules=['q_proj', 'k_proj', 'v_proj'])
model: SwiftModel = Swift.prepare_model(model, lora_config)
# Do some finetuning here
model.save_pretrained(tmp_dir)

push_to_hub('my-group/swift_llama2', output_dir=tmp_dir)
model = Model.from_pretrained('modelscope/Llama-2-7b-ms', device_map='auto')
model = SwiftModel.from_pretrained(model, 'my-group/swift_llama2', device_map='auto')

This is an example that uses transformers for model creation and SWIFT for efficient tuning.

from swift import Swift, LoRAConfig, AdapterConfig, PromptConfig
from transformers import AutoModelForImageClassification

# init vit model
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")

# init lora tuner config
lora_config = LoRAConfig(
    r=10,  # the rank of the LoRA module
    target_modules=['query', 'key', 'value'],  # the modules to be replaced, matched by the end of the module name
    merge_weights=False  # whether to merge weights
)

# init adapter tuner config
adapter_config = AdapterConfig(
    dim=768,  # the dimension of the hidden states
    hidden_pos=0,  # the position of the hidden state passed into the adapter
    target_modules=r'.*attention.output.dense$',  # the modules to be replaced, matched by regex
    adapter_length=10  # the length of the adapter
)

# init prompt tuner config
prompt_config = PromptConfig(
    dim=768,  # the dimension of the hidden states
    target_modules=r'.*layer\.\d+$',  # the modules to be replaced, matched by regex
    embedding_pos=0,    # the position of the embedding tensor
    prompt_length=10,   # the length of the prompt tokens
    attach_front=False  # whether the prompt is attached in front of the embedding
)

# create model with swift. In practice, you can use any of these tuners or a combination of them.
model = Swift.prepare_model(model, {"lora_tuner": lora_config, "adapter_tuner": adapter_config, "prompt_tuner": prompt_config})

# get the trainable parameters of model
model.get_trainable_parameters()
# 'trainable params: 838,776 || all params: 87,406,432 || trainable%: 0.9596273189597764'
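
As a follow-up, a minimal, hypothetical training step on the wrapped model above, showing that the optimizer only sees the tuner parameters; the requires_grad filter is plain PyTorch, not a SWIFT API:

import torch

# only the tuner parameters remain trainable after Swift.prepare_model
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-4,
)

pixel_values = torch.randn(2, 3, 224, 224)  # dummy batch of two images
labels = torch.tensor([0, 1])               # dummy class labels

outputs = model(pixel_values=pixel_values, labels=labels)
outputs.loss.backward()  # gradients flow only into the tuner weights
optimizer.step()
optimizer.zero_grad()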

You can use the features offered by PEFT in SWIFT:

from swift import LoraConfig, Swift
from peft import TaskType
lora_config = LoraConfig(target_modules=['query', 'key', 'value'], task_type=TaskType.CAUSAL_LM)
model_wrapped = Swift.prepare_model(model, lora_config)

# or call from_pretrained to load weights in the modelhub
model_wrapped = Swift.from_pretrained(model, 'some-id-in-the-modelscope-modelhub')

The saving strategies of SWIFT tuners and PEFT tuners differ slightly. You can name a tuner by:

model = Swift.prepare_model(model, {'default': LoRAConfig(...)})
model.save_pretrained('./output')

In the output dir, you will have a dir structure like this:

output
    |-- default
        |-- adapter_config.json
        |-- adapter_model.bin
    |-- adapter_config.json
    |-- adapter_model.bin

The config/weights stored at the top level of the output dir are the config of extra_state_keys and its weights. This differs from PEFT, which stores the weights and config of the default tuner there. The output dir can be loaded back as sketched below.
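
To reload that output dir, a minimal sketch reusing the Swift.from_pretrained call shown earlier; passing a local path instead of a hub model-id is an assumption here:

from swift import Swift

# reattach the saved tuner(s) and extra_state_keys weights to the base model
model = Swift.from_pretrained(model, './output')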

Learn More

License

This project is licensed under the Apache License (Version 2.0).

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

ms-swift-1.1.0.tar.gz (63.6 kB)

Uploaded Source

Built Distribution

ms_swift-1.1.0-py3-none-any.whl (82.2 kB)

Uploaded Python 3

File details

Details for the file ms-swift-1.1.0.tar.gz.

File metadata

  • Download URL: ms-swift-1.1.0.tar.gz
  • Upload date:
  • Size: 63.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.7.17

File hashes

Hashes for ms-swift-1.1.0.tar.gz
Algorithm Hash digest
SHA256 478aa617b839f8a079b07074e8ece3c16a059143c08b0199d370e77b79c8c759
MD5 e03ea9557aecdb2b15f8476c7d6279d4
BLAKE2b-256 8ffc104884c8c096e603937364751baeaffe4d4a327d141bb88fa4a57ab41c70

See more details on using hashes here.

File details

Details for the file ms_swift-1.1.0-py3-none-any.whl.

File metadata

  • Download URL: ms_swift-1.1.0-py3-none-any.whl
  • Upload date:
  • Size: 82.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.7.17

File hashes

Hashes for ms_swift-1.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 ed108a66fc25e05a4fd8fd71669140ec16a33d27d953e0b62ee91fc82c0fe20c
MD5 a60ebc375579d20118a9441f3c9111e7
BLAKE2b-256 4eea625a61ec5d3006b6f1d6c4ecdbc1cb6c28d16c4e1234edaa46d7d63406f8

See more details on using hashes here.
