Easy-to-use and powerful NLP library with an awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including end-to-end systems for Neural Search, Question Answering, Information Extraction, and Sentiment Analysis.
Simplified Chinese🀄 | English🌎
Features | Supported Models | Installation | Quick Start | Community
PaddleNLP is a Large Language Model (LLM) development suite based on the PaddlePaddle deep learning framework. It supports efficient large-model training, lossless compression, and high-performance inference on a variety of hardware devices. With its ease of use and strong performance, PaddleNLP is dedicated to helping developers put large models into efficient industrial use.
News 📢
- 2024.06.27 PaddleNLP v3.0 Beta: Embrace large models and experience a complete upgrade. With a unified large model toolchain, we provide full-process support for domestically produced computing chips. We fully support industrial-grade workflows for large models, such as PaddlePaddle's 4D parallel configuration, efficient fine-tuning strategies, efficient alignment algorithms, and high-performance inference. Our self-developed RsLoRA+ algorithm, the Unified Checkpoint storage mechanism, and generalized support for FastFFN and FusedQKV all speed up large model training and inference. We continuously add support for mainstream models to provide efficient solutions.
- 2024.04.24 PaddleNLP v2.8: Our self-developed, fast-converging RsLoRA+ algorithm significantly improves the convergence speed and training effectiveness of PEFT training. By introducing high-performance generation acceleration into the RLHF PPO algorithm, we have broken through the generation speed bottleneck in PPO training, achieving a significant lead in PPO training performance. We broadly support multiple large-model training performance optimizations such as FastFFN and FusedQKV, making large-model training faster and more stable.
- 2024.01.04 PaddleNLP v2.7: The LLM experience is fully upgraded, with a unified entry point for the LLM toolchain. The implementation code for pre-training, fine-tuning, compression, inference, and deployment is unified under the PaddleNLP/llm directory. The new LLM Toolchain Documentation provides one-stop guidance from getting started with LLMs to business deployment and launch. The Unified Checkpoint mechanism for full checkpoint storage greatly improves the portability of LLM checkpoints. The efficient fine-tuning upgrade supports combining efficient fine-tuning with LoRA, and supports QLoRA and other algorithms.
- 2023.08.15 PaddleNLP v2.6: Released the full-process LLM toolchain, covering pre-training, fine-tuning, compression, inference, and deployment, providing users with end-to-end LLM solutions and a one-stop development experience; built-in 4D parallel distributed Trainer, efficient fine-tuning algorithms LoRA/Prefix Tuning, self-developed INT8/INT4 quantization algorithms, and more; full support for mainstream LLMs such as LLaMA 1/2, BLOOM, ChatGLM 1/2, and OPT.
Features
🔧 Integrated training and inference on multiple hardware platforms
Our development suite supports large model training and inference on multiple hardware platforms, including NVIDIA GPUs, Kunlun XPUs, Ascend NPUs, Enflame GCUs, and Hygon DCUs. The toolkit's unified interface allows for quick hardware switching, significantly reducing the research and development costs associated with hardware transitions.
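As a minimal sketch of what a hardware switch looks like on the PaddlePaddle side (device strings other than "gpu" assume the matching PaddlePaddle build for that hardware is installed):
import paddle
from paddlenlp.transformers import AutoModelForCausalLM

# Select the target device once; "gpu", "xpu", "npu", ... each require the
# corresponding PaddlePaddle build for that hardware.
paddle.set_device("gpu")  # e.g. "npu:0" on Ascend, "xpu:0" on Kunlun

# The same model-loading and inference code then runs unchanged.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")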
🚀 Efficient and easy-to-use pre-training
We support high-performance 4D parallel training that combines data parallelism, sharding parallelism, tensor parallelism, and pipeline parallelism. The Trainer supports configurable distributed strategies, reducing the cost of composing complex distributed setups. The Unified Checkpoint storage format for large models supports dynamic rescaling of the parameter distribution during training, reducing the migration cost of switching hardware.
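As a rough illustration of how the four parallel dimensions relate to the device count (the variable names below are illustrative, not the exact Trainer argument names), the product of the per-dimension degrees must equal the number of devices:
# Illustrative arithmetic only: parallel degrees multiply up to the device count.
num_devices = 8
tensor_parallel_degree = 2     # split weight matrices across devices
pipeline_parallel_degree = 2   # split layers into pipeline stages
sharding_parallel_degree = 2   # shard optimizer states and gradients
data_parallel_degree = num_devices // (
    tensor_parallel_degree * pipeline_parallel_degree * sharding_parallel_degree
)
assert (tensor_parallel_degree * pipeline_parallel_degree
        * sharding_parallel_degree * data_parallel_degree) == num_devices
print(data_parallel_degree)  # 1 in this 8-device example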
🤗 Efficient fine-tuning and alignment
The fine-tuning and alignment algorithms are deeply integrated with zero-padding data streams and the high-performance FlashMask operator, reducing wasted padding and computation during training and significantly improving the throughput of fine-tuning and alignment training.
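The sketch below shows the idea behind a zero-padding ("packed") data stream; it is a conceptual illustration, not PaddleNLP's actual implementation:
# Variable-length samples are concatenated into one contiguous sequence, so no
# pad tokens are wasted; the boundary offsets are what an attention-masking
# operator such as FlashMask can use to keep samples from attending to each other.
samples = [[101, 7, 8, 102], [101, 9, 102], [101, 3, 4, 5, 6, 102]]

packed, boundaries = [], []
for ids in samples:
    packed.extend(ids)
    boundaries.append(len(packed))  # end offset of each sample

print(packed)      # one token stream, no padding
print(boundaries)  # [4, 7, 13] -> sample boundaries for the attention mask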
🎛️ Lossless compression and high-performance inference
The high-performance inference module of the large model toolkit applies dynamic request insertion and operator fusion strategies throughout the pipeline, greatly accelerating parallel inference. The underlying implementation details are encapsulated, so high-performance parallel inference works out of the box.
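The toy loop below sketches the idea behind dynamic request insertion (a conceptual illustration, not PaddleNLP's inference engine): finished sequences free their batch slot immediately and waiting requests are inserted right away, so the accelerator batch stays full.
from collections import deque

# Each request needs a different number of decode steps.
waiting = deque([("req-a", 3), ("req-b", 1), ("req-c", 4), ("req-d", 2)])
max_batch, running, step = 2, [], 0

while waiting or running:
    # Refill free slots before the next decode step.
    while waiting and len(running) < max_batch:
        running.append(list(waiting.popleft()))
    for req in running:            # one decode step per running request
        req[1] -= 1
    done = [name for name, left in running if left == 0]
    running = [req for req in running if req[1] > 0]
    step += 1
    print(f"step {step}: finished={done}")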
Supported Models
Model | Pretrain | SFT | LoRA | Prefix Tuning | DPO | RLHF | Quantization | Weight Conversion |
---|---|---|---|---|---|---|---|---|
LLaMA | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Qwen | ✅ | ✅ | ✅ | ✅ | ✅ | 🚧 | 🚧 | ✅ |
Mixtral | ✅ | ✅ | ✅ | ❌ | 🚧 | 🚧 | 🚧 | 🚧 |
Baichuan/Baichuan2 | ✅ | ✅ | ✅ | ✅ | ✅ | 🚧 | ✅ | ✅ |
ChatGLM-6B | ❌ | ✅ | ✅ | ✅ | 🚧 | 🚧 | ✅ | ❌ |
ChatGLM2/ChatGLM3 | ❌ | ✅ | ✅ | ✅ | 🚧 | 🚧 | ✅ | ✅ |
Bloom | ❌ | ✅ | ✅ | ✅ | 🚧 | 🚧 | ✅ | ✅ |
GPT-3 | ✅ | ✅ | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | ✅ |
OPT | 🚧 | ✅ | ✅ | 🚧 | 🚧 | 🚧 | 🚧 | ✅ |
- ✅: Supported
- 🚧: In Progress
- ❌: Not Supported
Detailed list 👉 Supported Model List
Installation
Prerequisites
- python >= 3.8
- paddlepaddle >= 3.0.0b0
Pip Installation
pip install --upgrade paddlenlp
or you can install the latest develop branch code with the following command:
pip install --pre --upgrade paddlenlp -f https://www.paddlepaddle.org.cn/whl/paddlenlp.html
For more information about PaddlePaddle installation, please refer to PaddlePaddle's website.
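As a quick sanity check after installation (a minimal sketch; version strings will differ on your machine):
import paddle
import paddlenlp

# Print the installed versions and run PaddlePaddle's built-in self check.
print(paddlenlp.__version__)
print(paddle.__version__)
paddle.utils.run_check()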
Quick Start
Text generation with a large language model
PaddleNLP provides a convenient and easy-to-use Auto API for quickly loading models and tokenizers. Here, we use the Qwen/Qwen2-0.5B large model as an example of text generation:
>>> from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")
>>> input_features = tokenizer("你好!请自我介绍一下。", return_tensors="pd")
>>> outputs = model.generate(**input_features, max_length=128)
>>> tokenizer.batch_decode(outputs[0])
['我是一个AI语言模型,我可以回答各种问题,包括但不限于:天气、新闻、历史、文化、科学、教育、娱乐等。请问您有什么需要了解的吗?']
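Generation can also be switched to sampling; the parameter names below (decode_strategy, top_p, temperature, skip_special_tokens) follow PaddleNLP's generation API but should be treated as assumptions and checked against the installed version:
>>> outputs = model.generate(
...     **input_features,
...     max_length=128,
...     decode_strategy="sampling",  # assumed parameter names; verify against
...     top_p=0.8,                   # your installed PaddleNLP version
...     temperature=0.7,
... )
>>> tokenizer.batch_decode(outputs[0], skip_special_tokens=True)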
Pre-training for large language models
mkdir -p llm/data && cd llm/data
wget https://bj.bcebos.com/paddlenlp/models/transformers/llama/data/llama_openwebtext_100k.bin
wget https://bj.bcebos.com/paddlenlp/models/transformers/llama/data/llama_openwebtext_100k.idx
cd .. # change folder to PaddleNLP/llm
python -u -m paddle.distributed.launch --gpus "0,1,2,3,4,5,6,7" run_pretrain.py ./config/llama/pretrain_argument.json
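Before launching, you can inspect the referenced config to see which distributed and training options the run will use; the field names printed below are illustrative guesses rather than the file's exact schema:
import json

# Inspect the pretraining config passed to run_pretrain.py.
with open("./config/llama/pretrain_argument.json") as f:
    cfg = json.load(f)

# Key names are assumptions about commonly present options; absent keys print None.
for key in ("model_name_or_path", "tensor_parallel_degree",
            "pipeline_parallel_degree", "sharding", "max_steps"):
    print(key, "=", cfg.get(key))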
SFT fine-tuning for large language models
mkdir -p llm/data && cd llm/data
wget https://bj.bcebos.com/paddlenlp/datasets/examples/AdvertiseGen.tar.gz && tar -zxvf AdvertiseGen.tar.gz
cd .. # change folder to PaddleNLP/llm
python -u -m paddle.distributed.launch --gpus "0,1,2,3,4,5,6,7" run_finetune.py ./config/llama/sft_argument.json
For more steps in the full large-model workflow, please refer to the Large Model Full-Process Toolchain.
For more PaddleNLP content, please refer to:
- Model Library, which includes end-to-end usage of high-quality pre-trained models.
- Multi-scenario Examples, to learn how to use PaddleNLP to solve various NLP problems, including basic techniques, system applications, and extended applications.
- Interactive Tutorial, to quickly learn PaddleNLP on the free computing platform AI Studio.
Community
Slack
To connect with other users and contributors, you are welcome to join our Slack channel.
Scan the QR code below with WeChat⬇️ to join the official technical exchange group. We look forward to your participation.
Citation
If you find PaddleNLP useful in your research, please consider citing:
@misc{paddlenlp,
title={PaddleNLP: An Easy-to-use and High Performance NLP Library},
author={PaddleNLP Contributors},
howpublished = {\url{https://github.com/PaddlePaddle/PaddleNLP}},
year={2021}
}
Acknowledgements
We have borrowed the excellent design of Hugging Face's Transformers🤗 for pretrained model usage, and we would like to express our gratitude to the authors of Hugging Face and its open source community.
License
PaddleNLP is provided under the Apache-2.0 License.