TILEARN for LLM
Tilearn.llm Usage Guide
1. CUDA Kernel (using LLaMA as an example)
Supported GPUs: Ampere, Ada, or Hopper (e.g., A100, A800, H100, H800)
Latest version dependencies: pytorch >= 2.0.0
This version is fully compatible with the huggingface interface; no additional model-conversion step is required.
For LLaMA 1/LLaMA 2 on 16 A800 GPUs with seq=1024, training is roughly 20% faster than DeepSpeed ZeRO-2.
Using the CUDA kernel: modify the launch script as follows.
### TIACC CUDA Kernel
### Open: TIACC_TRAINING_CUDA_KERNEL=1
### Close: TIACC_TRAINING_CUDA_KERNEL=0
export TIACC_TRAINING_CUDA_KERNEL=1
Using the CUDA kernel: modify the code as follows.
### TIACC
import os

TIACC_TRAINING_CUDA_KERNEL = int(os.getenv('TIACC_TRAINING_CUDA_KERNEL', '0'))
if TIACC_TRAINING_CUDA_KERNEL == 1:
    from tilearn.llm.transformers import LlamaForCausalLM
else:
    from transformers import LlamaForCausalLM

### The model interface is identical to standard huggingface
model = LlamaForCausalLM.from_pretrained(...)
### TIACC
import os

TIACC_TRAINING_CUDA_KERNEL = int(os.getenv('TIACC_TRAINING_CUDA_KERNEL', '0'))
if TIACC_TRAINING_CUDA_KERNEL == 1:
    from tilearn.llm.transformers import AutoModelForCausalLM
else:
    from transformers import AutoModelForCausalLM

### The model interface is identical to standard huggingface
model = AutoModelForCausalLM.from_pretrained(...)
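The snippets above all follow one pattern: read an integer env flag once, then import either the accelerated tilearn.llm class or the standard huggingface one. As a minimal sketch, that selection logic can be isolated into a helper; note that `select_module` is a hypothetical function for illustration, not part of the tilearn.llm API.

```python
import os


def select_module(flag_name: str, accelerated: str, fallback: str) -> str:
    """Return the module path to import model classes from.

    The flag is parsed as an integer, and '1' enables the accelerated
    path -- matching the TIACC_TRAINING_CUDA_KERNEL convention above.
    """
    enabled = int(os.getenv(flag_name, "0")) == 1
    return accelerated if enabled else fallback


# With the flag unset (or '0'), the standard huggingface module is chosen;
# with TIACC_TRAINING_CUDA_KERNEL=1 in the launch script, tilearn.llm is used.
module_path = select_module(
    "TIACC_TRAINING_CUDA_KERNEL",
    accelerated="tilearn.llm.transformers",
    fallback="transformers",
)
# In a real script: importlib.import_module(module_path).LlamaForCausalLM
```

Centralizing the flag in one helper keeps the launch script the single place where acceleration is toggled.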
2. Static Zero
Applicable scenarios: switching between DeepSpeed optimization modes such as ZeRO-1, ZeRO-2, ZeRO-3, offload, and int8.
Modify the launch script as follows.
### TIACC STATIC ZERO
### Open: TIACC_TRAINING_STATIC_ZERO='O2'
### Supported levels: 'O2' / 'O2.5' / 'O3' / 'O3.5' / 'O3_Q8' (in progress)
### Close: TIACC_TRAINING_STATIC_ZERO='None'
export TIACC_TRAINING_STATIC_ZERO='None' # or 'O2'
Modify the code as follows.
import os

from transformers import HfArgumentParser

TIACC_TRAINING_STATIC_ZERO = os.getenv('TIACC_TRAINING_STATIC_ZERO', 'None')
if TIACC_TRAINING_STATIC_ZERO != 'None':
    from tilearn.llm.transformers import TrainingArguments
else:
    from transformers import TrainingArguments

### The interface is identical to standard huggingface
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
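Unlike the CUDA-kernel flag, `TIACC_TRAINING_STATIC_ZERO` is string-valued, so a typo (e.g. `O2.0`) would silently fall through to an unknown level. A small validation helper can catch this early; the sketch below assumes the level names listed in the launch script above, and `static_zero_level` itself is an illustrative helper, not part of the tilearn.llm API.

```python
import os
from typing import Optional

# Levels from the launch-script comments above; 'O3_Q8' is still in progress.
SUPPORTED_LEVELS = {"O2", "O2.5", "O3", "O3.5", "O3_Q8"}


def static_zero_level() -> Optional[str]:
    """Read TIACC_TRAINING_STATIC_ZERO and validate it.

    Returns None when the feature is disabled ('None' or unset),
    otherwise the requested level string; raises on unknown levels.
    """
    value = os.getenv("TIACC_TRAINING_STATIC_ZERO", "None")
    if value == "None":
        return None
    if value not in SUPPORTED_LEVELS:
        raise ValueError(f"Unknown TIACC static-zero level: {value!r}")
    return value
```

Failing fast here is preferable to discovering, minutes into a run, that the job silently trained without the intended ZeRO configuration.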
Source Distributions
No source distribution files are available for this release.
Built Distributions
Hashes for tilearn_llm-0.6.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm | Hash digest
---|---
SHA256 | a3ed22588dfa5b341451f7a600e8e627a40166a3e3851e6db1882a8f6cd6b318
MD5 | 4d3b005a82a4ebdf132f43a3be5b4700
BLAKE2b-256 | d44e26802ed42982a15a84b643b482870cce4aaf23b85935595cbe271a4f1b84
Hashes for tilearn_llm-0.6.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm | Hash digest
---|---
SHA256 | 7ecff0b11b0e096d6f79fe0d01cbad929973b13f1ec7b86c29b215f01819d87e
MD5 | 2950cfeb2d439023081094ec58ce5418
BLAKE2b-256 | db4f923a7a9d3d2cc6ad54b24d1decfc1cca7e94b44d3178780b25cb4742cbfc
Hashes for tilearn_llm-0.6.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm | Hash digest
---|---
SHA256 | 3a0036858e7c467408b2d7c271bf6d939e157f77a5f29c7dd6551ccd6ddfc4b4
MD5 | 2ff8248a4f1e22a4639beacda85cb114
BLAKE2b-256 | 1805985a395d5f2daed1102515bfc4ce7aba1a4ce36da7b23f9e8dc54a75d2aa