
TILEARN for LLM


Dependencies

Please install flash-attention: https://github.com/Dao-AILab/flash-attention

Tilearn.llm Usage Guide

CUDA Kernel (using LLAMA as an example)

New version

This version is fully compatible with the huggingface interface, requires no extra model-conversion step, and delivers roughly 20% end-to-end speedup.

Launch script

### TIACC CUDA Kernel
### Enable: TIACC_TRAINING_CUDA_KERNEL=1
### Disable: TIACC_TRAINING_CUDA_KERNEL=0
export TIACC_TRAINING_CUDA_KERNEL=1

Code

### TIACC
import os

TIACC_TRAINING_CUDA_KERNEL = int(os.getenv('TIACC_TRAINING_CUDA_KERNEL', '0'))
if TIACC_TRAINING_CUDA_KERNEL == 1:
    from tilearn.llm.transformers import LlamaForCausalLM

### The model interface is identical to standard huggingface
model = LlamaForCausalLM.from_pretrained(...)
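For reference, a minimal sketch of the full toggle pattern, assuming the stock transformers.LlamaForCausalLM as the fallback when the kernel is disabled (the else branch and the model path are our illustration, not part of the tilearn API):

import os

# Toggle defaults to the standard huggingface path when unset.
TIACC_TRAINING_CUDA_KERNEL = int(os.getenv('TIACC_TRAINING_CUDA_KERNEL', '0'))

if TIACC_TRAINING_CUDA_KERNEL == 1:
    # Drop-in replacement with the TIACC CUDA kernels enabled.
    from tilearn.llm.transformers import LlamaForCausalLM
else:
    # Assumed fallback: the stock huggingface implementation.
    from transformers import LlamaForCausalLM

# Identical call either way; '/path/to/llama' is a placeholder.
model = LlamaForCausalLM.from_pretrained('/path/to/llama')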

Old version

This version is not compatible with the huggingface interface and delivers roughly 30% end-to-end speedup. It can directly load both huggingface models and original cuda kernel models (the model structure saved during training).

# To load a model in huggingface format, set llama-hf
export TIACC_TRAINING_MODEL_FORMAT=llama-hf
# To load a model in the original cuda kernel format, set llama-origin
export TIACC_TRAINING_MODEL_FORMAT=llama-origin

Models saved during training use the original cuda kernel structure, not the huggingface structure; if you need a huggingface model, run the conversion script manually.

CUDA kernel usage: launch script

### TIACC CUDA Kernel
### Enable: TIACC_TRAINING_CUDA_KERNEL_V0=1
### Disable: TIACC_TRAINING_CUDA_KERNEL_V0=0
export TIACC_TRAINING_CUDA_KERNEL_V0=1
export TIACC_TRAINING_MODEL_FORMAT=llama-hf

Code

### TIACC
import os

TIACC_TRAINING_CUDA_KERNEL_V0 = int(os.getenv('TIACC_TRAINING_CUDA_KERNEL_V0', '0'))
if TIACC_TRAINING_CUDA_KERNEL_V0 == 1:
    from tilearn import llm

### LLAMA model initialization
TIACC_TRAINING_MODEL_FORMAT = os.getenv('TIACC_TRAINING_MODEL_FORMAT', 'llama-origin')
model = llm.models.llama(model_args.model_name_or_path, model_format=TIACC_TRAINING_MODEL_FORMAT)

Static Zero

Launch script

### TIACC STATIC ZERO
### Enable: TIACC_TRAINING_STATIC_ZERO='O2'
### Supported levels: 'O2' / 'O2.5' / 'O3' / 'O3.5' / 'O3_Q8' (in progress)
### Disable: TIACC_TRAINING_STATIC_ZERO='None'
export TIACC_TRAINING_STATIC_ZERO='None' #'O2'

Code

import os

from transformers import HfArgumentParser

TIACC_TRAINING_STATIC_ZERO = os.getenv('TIACC_TRAINING_STATIC_ZERO', 'None')
if TIACC_TRAINING_STATIC_ZERO != 'None':
    from tilearn.llm.transformers import TrainingArguments

### The interface is identical to standard huggingface
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
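To show where the parsed arguments go, here is a self-contained sketch; the ModelArguments and DataTrainingArguments dataclasses are minimal placeholders assumed for illustration, and the final Trainer wiring follows the standard huggingface pattern:

import os
from dataclasses import dataclass, field

from transformers import HfArgumentParser

@dataclass
class ModelArguments:
    # Placeholder; real scripts define their own fields.
    model_name_or_path: str = field(default='')

@dataclass
class DataTrainingArguments:
    # Placeholder; real scripts define their own fields.
    dataset_name: str = field(default='')

TIACC_TRAINING_STATIC_ZERO = os.getenv('TIACC_TRAINING_STATIC_ZERO', 'None')
if TIACC_TRAINING_STATIC_ZERO != 'None':
    # tilearn's drop-in TrainingArguments carries the Static Zero settings.
    from tilearn.llm.transformers import TrainingArguments
else:
    from transformers import TrainingArguments

parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
# training_args then flows into the Trainer exactly as in a stock huggingface script.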

Dynamic Zero

Launch script

### TIACC DYNAMIC ZERO
### Enable: TIACC_TRAINING_DYNAMIC_ZERO=1 and set TIACC_ZERO_STAGE/TIACC_PLACEMENT/TIACC_SHARD_INIT/TIACC_CPU_INIT
### Disable: TIACC_TRAINING_DYNAMIC_ZERO=0
export TIACC_TRAINING_DYNAMIC_ZERO=0
export TIACC_ZERO_STAGE=3    # takes effect when TIACC_TRAINING_DYNAMIC_ZERO=1
export TIACC_PLACEMENT='cpu' # or 'cuda'; takes effect when TIACC_TRAINING_DYNAMIC_ZERO=1
export TIACC_SHARD_INIT=0    # takes effect when TIACC_TRAINING_DYNAMIC_ZERO=1
export TIACC_CPU_INIT=1      # takes effect when TIACC_TRAINING_DYNAMIC_ZERO=1

if [ "${TIACC_TRAINING_DYNAMIC_ZERO}" = "0" ]; then
    #USE_DS="--deepspeed=./ds_config_zero3.json"
    USE_DS="--deepspeed=${deepspeed_config_file}"
else
    USE_DS=""
fi

torchrun --nnodes 1 --nproc_per_node 8 run_clm.py \
    ${USE_DS} \
    ...
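For contrast, a sketch of the same launch script with Dynamic Zero switched on. The stage and placement values are illustrative picks (stage 3 is read here in the usual ZeRO sense of full sharding, which is our assumption), and --deepspeed is dropped per the branch above:

### Dynamic Zero enabled: the --deepspeed flag is dropped and TIACC handles sharding
export TIACC_TRAINING_DYNAMIC_ZERO=1
export TIACC_ZERO_STAGE=3        # illustrative choice
export TIACC_PLACEMENT='cpu'     # illustrative: offload shards to CPU; 'cuda' keeps them on GPU
export TIACC_SHARD_INIT=0
export TIACC_CPU_INIT=1

USE_DS=""   # no --deepspeed flag when Dynamic Zero is active

torchrun --nnodes 1 --nproc_per_node 8 run_clm.py \
    ${USE_DS} \
    ...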

Code

import os
from contextlib import nullcontext

import torch

TIACC_TRAINING_DYNAMIC_ZERO = int(os.getenv('TIACC_TRAINING_DYNAMIC_ZERO', '0'))
if TIACC_TRAINING_DYNAMIC_ZERO == 1:
    from tilearn.llm.trainer import TrainerTiacc as Trainer
    from tilearn.llm import init as llm_init
    from tilearn.llm import get_config as llm_get_config

### init in the main func
def main():
    if TIACC_TRAINING_DYNAMIC_ZERO == 1:
        llm_config = llm_get_config()
        llm_init_context = llm_init(init_in_cpu=llm_config.cpu_init,
                                    shard_init=llm_config.shard_init,
                                    model_dtype=torch.half)

    ### wrap model init in init_context
    init_context = llm_init_context if TIACC_TRAINING_DYNAMIC_ZERO == 1 else nullcontext
    with init_context():
        ### The interface is identical to standard huggingface
        model = LlamaForCausalLM.from_pretrained(
            model_args.model_name_or_path,
            config=config,
            low_cpu_mem_usage=False,  # True
            ...
        )

    ### use the trainer
    ### The interface is identical to standard huggingface
    trainer = Trainer(
        model=model,
        ...
    )
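Note that with TIACC_TRAINING_DYNAMIC_ZERO=0 this snippet degrades gracefully: init_context becomes nullcontext, a stock Trainer (imported elsewhere in the script) is used, and sharding instead comes from the --deepspeed config selected in the launch script above.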


