Easy-to-use and powerful NLP library with an awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including end-to-end systems for Neural Search, Question Answering, Information Extraction, and Sentiment Analysis.

Project description


News | Highlights | Installation | Quickstart | Community

PaddleFormers is a Transformer model library built on the PaddlePaddle deep learning framework, delivering both ease of use and high performance. It provides a unified model definition interface, modular training components, and comprehensive distributed training strategies designed for large language model development pipelines. This lets developers train large models efficiently with minimal complexity, in scenarios ranging from academic research to industrial applications.

News

[2025/06/28] 🎉 PaddleFormers 0.1 is officially released! This initial version supports the SFT/DPO training paradigms and configurable distributed training via a unified Trainer API, and integrates PEFT, MergeKit, and Quantization APIs for diverse LLM applications.

Highlights

⚙️ Simplified Distributed Training

Implements 4D parallel strategies through a unified Trainer API, lowering the barrier to distributed LLM training.
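
A minimal sketch of what this can look like in practice. The parallelism field names below follow PaddleNLP-style TrainingArguments and are assumptions here, not confirmed PaddleFormers API; check the documentation for the authoritative names.

# Hedged sketch: hybrid-parallel training configured through the
# unified Trainer API. Field names are assumed from PaddleNLP.
from paddleformers.trl import SFTConfig

training_args = SFTConfig(
    output_dir="./checkpoints",
    tensor_parallel_degree=2,    # split each weight matrix across 2 GPUs
    pipeline_parallel_degree=2,  # split layers into a 2-stage pipeline
    sharding="stage2",           # shard optimizer states across ranks
)  # remaining devices are consumed by data parallelism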

🛠 Efficient Post-Training

Integrates Packing dataflow and FlashMask operators for SFT/DPO training, eliminating padding waste and boosting throughput.
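
To make the padding argument concrete, here is a framework-free sketch of greedy example packing; it is illustrative only, not the PaddleFormers dataflow. FlashMask then restricts attention so packed examples cannot attend to each other.

def pack_examples(token_lists, max_len):
    # Greedily concatenate tokenized examples into fixed-length
    # sequences so few positions are wasted on padding.
    packs, current = [], []
    for tokens in token_lists:
        if len(current) + len(tokens) > max_len:
            packs.append(current)
            current = []
        current.extend(tokens)
    if current:
        packs.append(current)
    return packs

# Three short examples share one 16-token sequence instead of
# occupying three separately padded sequences.
print(pack_examples([[1] * 5, [2] * 6, [3] * 4], max_len=16))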

💾 Industrial Storage Solution

Features Unified Checkpoint storage tools for LLMs, enabling training resumption and dynamic resource scaling. It also implements asynchronous storage (up to 95% faster) and Optimizer State Quantization (78% storage reduction), so industrial training meets both efficiency and stability requirements.
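
As a rough sanity check on the quantization figure (assuming an Adam-style optimizer that keeps two fp32 moment tensors per parameter; the exact saving depends on which states are quantized and on scale metadata):

# Back-of-the-envelope storage arithmetic, not PaddleFormers code.
params = 7e9                    # e.g. a 7B-parameter model
fp32_bytes = params * 2 * 4     # two fp32 moments, 4 bytes each
int8_bytes = params * 2 * 1     # the same moments stored as int8
print(f"{fp32_bytes / 1e9:.0f} GB -> {int8_bytes / 1e9:.0f} GB, "
      f"{1 - int8_bytes / fp32_bytes:.0%} smaller")  # 56 GB -> 14 GB, 75% smaller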

Installation

Requires Python 3.8+ and PaddlePaddle 3.1+.

# Install via pip
pip install paddleformers

# Install development version
git clone https://github.com/PaddlePaddle/PaddleFormers.git
cd PaddleFormers
pip install -e .
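
A quick sanity check that the package imports correctly (assuming it exposes a __version__ attribute, as most packages do):

# Verify the installation
python -c "import paddleformers; print(paddleformers.__version__)"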

Quickstart

Text Generation

This example shows how to load a Qwen model for text generation with the PaddleFormers Auto API:

from paddleformers.transformers import AutoTokenizer, AutoModelForCausalLM

# Download the tokenizer and model weights from the model hub.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="bfloat16")

# Tokenize the prompt into Paddle tensors ("pd").
input_features = tokenizer("Give me a short introduction to large language model.", return_tensors="pd")
outputs = model.generate(**input_features, max_new_tokens=128)
# The first element of the returned tuple holds the generated token ids.
print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))
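
For instruction-tuned checkpoints, chat-style prompting usually yields better answers. A hedged variant, assuming the tokenizer ships a chat template and a Hugging Face-style apply_chat_template method (as in PaddleNLP; verify against the PaddleFormers API):

# apply_chat_template is assumed to follow the PaddleNLP/Hugging Face
# convention; check the PaddleFormers docs before relying on it.
from paddleformers.transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct", dtype="bfloat16")
messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_features = tokenizer(prompt, return_tensors="pd")
outputs = model.generate(**input_features, max_new_tokens=128)
print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))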

SFT Training

Getting started with supervised fine-tuning (SFT) using PaddleFormers:

from paddleformers.trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load a small instruction-tuning demo dataset.
dataset = load_dataset("ZHUI/alpaca_demo", split="train")

# output_dir is where checkpoints and the final model are written.
training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT", device="gpu")
trainer = SFTTrainer(
    args=training_args,
    model="Qwen/Qwen2.5-0.5B-Instruct",  # model name or local path
    train_dataset=dataset,
)
trainer.train()
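
Once training finishes, the fine-tuned model can be persisted. save_model follows the standard Trainer convention and is assumed here to carry over to PaddleFormers' SFTTrainer:

# Write the fine-tuned weights and tokenizer to the output directory.
trainer.save_model(training_args.output_dir)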

Community

We welcome all contributions! See CONTRIBUTING.md for guidelines.

License

This repository's source code is available under the Apache 2.0 License.

Download files


Source Distributions

No source distribution files are available for this release.

Built Distribution


paddleformers-0.2.4-py3-none-any.whl (1.3 MB, Python 3)

File details

Details for the file paddleformers-0.2.4-py3-none-any.whl.

File metadata

  • Download URL: paddleformers-0.2.4-py3-none-any.whl
  • Size: 1.3 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.18

File hashes

Hashes for paddleformers-0.2.4-py3-none-any.whl:

  • SHA256: 7975e5796ee51a700f812bfac827e690324cdf5edc9d2b97efdc8b19ed79cd80
  • MD5: 1a1b665fe80193ee0694aec58b788a71
  • BLAKE2b-256: 63efddd3f5351cdc335e888aaa8f005957d974bd81416f9f4c923c7b11455e69

