
Easy-to-use and powerful NLP library with an awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including end-to-end systems for Neural Search, Question Answering, Information Extraction and Sentiment Analysis.

Project description

Simplified Chinese 🀄 | English 🌎


Features | Installation | Quick Start | API Reference | Community

PaddleNLP is an easy-to-use and powerful NLP library with an awesome pre-trained model zoo, supporting a wide range of NLP tasks from research to industrial applications.

News 📢

  • 🔥 2022.12.9 PaddleNLP v2.4.5

    • 📃 Release UIE-X, a universal information extraction model which supports both document and text inputs.
    • 🔨 Industrial application: Release Complete Solution of Information Extraction, which supports most extraction tasks and provides a comprehensive and easy-to-use fine-tuning customization workflow.
  • 🔥 2022.11.28 PaddleNLP v2.4.4

  • 🔥 2022.11.17 PaddleNLP v2.4.3 Released!

    • 💪 Framework upgrade: 🏆 Upgrade Prompt API, supporting more flexible prompt definitions and winning 1st place in FewCLUE; 🕸 Upgrade Trainer API, supporting Seq2seqTrainer, IterableDataset, as well as bf16 and sharding strategies.
    • 🔨 Industrial application: 🏃 Upgrade for Universal Information Extraction. Support quantization aware training and INT8 precision inference for inference performance boost.
  • 🔥 2022.10.27 PaddleNLP v2.4.2 Released!

    • NLG Upgrade: 📄 Release Solution of Text Summarization based on Pegasus; ❓ Release Solution of Question Generation, providing a general question generation pre-trained model based on Baidu's UNIMO-Text and a large-scale multi-domain question generation dataset. It supports high-performance inference based on FasterGeneration and covers the whole process of training, inference and deployment.
  • 🔥 2022.10.14 PaddleNLP v2.4.1 Released!

    • 🧾 Release the multilingual/cross-lingual pre-trained model ERNIE-Layout, which achieves new SOTA results on 11 downstream tasks. DocPrompt 🔖, based on ERNIE-Layout, is also released, providing multilingual document information extraction and question answering.
  • 🔥 2022.9.6 PaddleNLP v2.4 Released!

    • 💎 NLP Tool: Pipelines released. It supports fast construction of search engines and question answering systems, and is extensible to all kinds of NLP systems. Building end-to-end pipelines for NLP tasks is like playing with Lego!

    • 🔨 Industrial application: Release Complete Solution of Text Classification covering various text classification scenarios: multi-class, multi-label and hierarchical; it also supports few-shot learning and training and optimization with TrustAI. Upgrade Universal Information Extraction and release UIE-M, which supports both Chinese and English information extraction in a single model; release a data distillation solution for UIE to remove the inference-time bottleneck.

    • 🍭 AIGC: Release the code generation SOTA model CodeGen, which supports code generation in multiple programming languages. Integrate the text-to-image models DALL·E Mini, Disco Diffusion and Stable Diffusion, let's play and have some fun! Release the Chinese Text Summarization Application, the first Chinese text summarization model pretrained on a large-scale corpus; it can be used via the Taskflow API and supports fine-tuning on your own data.

    • 💪 Framework upgrade: Release the Auto Model Compression API, which supports automatic pruning and quantization and lowers the barrier to model compression; Release Few-shot Prompt, which includes algorithms such as PET, P-Tuning and RGL.
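The "Lego-style" pipeline composition mentioned in the v2.4 release can be sketched in a few lines of plain Python. This is a hypothetical illustration of the composition idea only, not the real paddlenlp.pipelines API; the node names and classes below are invented for the example.

```python
# Hypothetical mini-pipeline illustrating Lego-style composition.
# NOT the real paddlenlp.pipelines API.
from typing import Callable, List


class Pipeline:
    """Chain callables so each node's output feeds the next node."""

    def __init__(self) -> None:
        self.nodes: List[Callable] = []

    def add_node(self, node: Callable) -> "Pipeline":
        self.nodes.append(node)
        return self  # fluent chaining, like snapping on another brick

    def run(self, data):
        for node in self.nodes:
            data = node(data)
        return data


# Toy nodes standing in for a retriever and a ranker.
def retriever(query: str) -> List[str]:
    docs = ["PaddleNLP is an NLP library",
            "Paddle is a deep learning framework"]
    return [d for d in docs if query.lower() in d.lower()]


def ranker(docs: List[str]) -> List[str]:
    return sorted(docs, key=len)  # shortest-first, a stand-in for scoring


pipe = Pipeline().add_node(retriever).add_node(ranker)
print(pipe.run("paddle"))
```

The real pipelines additionally handle batching, model loading and deployment, but the compositional shape is the same: swap a node to change the system.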

Features

📦 Out-of-Box NLP Toolset

🤗 Awesome Chinese Model Zoo

🎛️ Industrial End-to-end System

🚀 High Performance Distributed Training and Inference

Out-of-Box NLP Toolset

Taskflow aims to provide off-the-shelf pre-built NLP tasks covering both NLU and NLG techniques, with extremely fast inference that satisfies industrial scenarios.


For more usage please refer to Taskflow Docs.

Awesome Chinese Model Zoo

🀄 Comprehensive Chinese Transformer Models

We provide 45+ network architectures and over 500 pretrained models. These include not only all the SOTA models like ERNIE, PLATO and SKEP released by Baidu, but also most of the high-quality Chinese pretrained models developed by other organizations. Use the AutoModel API to ⚡SUPER FAST⚡ download pretrained models of different architectures. We welcome all developers to contribute your Transformer models to PaddleNLP!

from paddlenlp.transformers import *

ernie = AutoModel.from_pretrained('ernie-3.0-medium-zh')
bert = AutoModel.from_pretrained('bert-wwm-chinese')
albert = AutoModel.from_pretrained('albert-chinese-tiny')
roberta = AutoModel.from_pretrained('roberta-wwm-ext')
electra = AutoModel.from_pretrained('chinese-electra-small')
gpt = AutoModelForPretraining.from_pretrained('gpt-cpm-large-cn')
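The Auto* pattern above dispatches a pretrained-model name to the right architecture class. A simplified sketch of how such name-based dispatch can work (hypothetical, not PaddleNLP internals; the registry and class names are invented for illustration):

```python
# Hypothetical sketch of Auto*-style dispatch: map a model-name prefix
# to an architecture class. Not PaddleNLP internals.
_REGISTRY = {}


def register(prefix):
    def wrap(cls):
        _REGISTRY[prefix] = cls
        return cls
    return wrap


@register("ernie")
class ErnieModel:
    pass


@register("bert")
class BertModel:
    pass


def auto_model(name: str):
    """Pick the class whose registered prefix matches the model name."""
    for prefix, cls in _REGISTRY.items():
        if name.startswith(prefix):
            return cls()
    raise ValueError(f"no architecture registered for {name!r}")


print(type(auto_model("ernie-3.0-medium-zh")).__name__)  # ErnieModel
```

The real AutoModel also downloads weights and configuration, but the user-facing contract is the same: one entry point, any architecture.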

When computation resources are limited, you can use the ERNIE 3.0 lightweight models to accelerate the deployment of pretrained models.

# 6 layers, hidden size 768
ernie = AutoModel.from_pretrained('ernie-3.0-medium-zh')
# 6 layers, hidden size 384
ernie = AutoModel.from_pretrained('ernie-3.0-mini-zh')
# 4 layers, hidden size 384
ernie = AutoModel.from_pretrained('ernie-3.0-micro-zh')
# 4 layers, hidden size 312
ernie = AutoModel.from_pretrained('ernie-3.0-nano-zh')

Unified API experience for NLP tasks like semantic representation, text classification, sentence matching, sequence labeling, question answering, etc.

import paddle
from paddlenlp.transformers import *

tokenizer = AutoTokenizer.from_pretrained('ernie-3.0-medium-zh')
text = tokenizer('natural language processing')

# Semantic Representation
model = AutoModel.from_pretrained('ernie-3.0-medium-zh')
sequence_output, pooled_output = model(input_ids=paddle.to_tensor([text['input_ids']]))
# Text Classification and Matching
model = AutoModelForSequenceClassification.from_pretrained('ernie-3.0-medium-zh')
# Sequence Labeling
model = AutoModelForTokenClassification.from_pretrained('ernie-3.0-medium-zh')
# Question Answering
model = AutoModelForQuestionAnswering.from_pretrained('ernie-3.0-medium-zh')

Wide-range NLP Task Support

PaddleNLP provides rich examples covering mainstream NLP tasks to help developers accelerate problem solving. You can find our powerful transformer Model Zoo and wide-ranging NLP application examples with detailed instructions.

You can also run our interactive Notebook tutorials on AI Studio, a powerful platform with FREE computing resources.

PaddleNLP Transformer model summary

Each supported architecture provides task heads for some subset of Sequence Classification, Token Classification, Question Answering, Text Generation and Multiple Choice:

ALBERT, BART, BERT, BigBird, BlenderBot, ChineseBERT, ConvBERT, CTRL, DistilBERT, ELECTRA, ERNIE, ERNIE-CTM, ERNIE-Doc, ERNIE-GEN, ERNIE-Gram, ERNIE-M, FNet, Funnel-Transformer, GPT, LayoutLM, LayoutLMv2, LayoutXLM, LUKE, mBART, MegatronBERT, MobileBERT, MPNet, NEZHA, PP-MiniLM, ProphetNet, Reformer, RemBERT, RoBERTa, RoFormer, SKEP, SqueezeBERT, T5, TinyBERT, UnifiedTransformer, XLNet

For more pretrained model usage, please refer to Transformer API Docs.

Industrial End-to-end System

We provide high-value industrial scenarios, including information extraction, semantic retrieval and question answering.

For more detailed industrial cases, please refer to Applications.

🔍 Neural Search System

For more details please refer to Neural Search.

❓ Question Answering System

We provide a question answering pipeline which supports FAQ systems and document-level visual question answering, based on 🚀RocketQA.

For more details please refer to Question Answering and Document VQA.

💌 Opinion Extraction and Sentiment Analysis

We build an opinion extraction system for product reviews and fine-grained sentiment analysis based on the SKEP model.

For more details please refer to Sentiment Analysis.

🎙️ Speech Command Analysis

Integrating an ASR model with information extraction, we provide a speech command analysis pipeline that shows how to use PaddleNLP and PaddleSpeech to solve real Speech + NLP scenarios.

For more details please refer to Speech Command Analysis.

High Performance Distributed Training and Inference

⚡ FastTokenizer: High Performance Text Preprocessing Library

AutoTokenizer.from_pretrained("ernie-3.0-medium-zh", use_fast=True)

Set use_fast=True to use the C++ tokenizer kernel and achieve 100x faster text pre-processing. For more usage please refer to FastTokenizer.

⚡ FasterGeneration: High Performance Generation Library

model = GPTLMHeadModel.from_pretrained('gpt-cpm-large-cn')
...
outputs, _ = model.generate(
    input_ids=inputs_ids, max_length=10, decode_strategy='greedy_search',
    use_faster=True)

Set use_faster=True to achieve a 5x speedup for Transformer, GPT, BART, PLATO and UniLM text generation. For more usage please refer to FasterGeneration.

🚀 Fleet: 4D Hybrid Distributed Training

For more super large-scale model pre-training details please refer to GPT-3.

Installation

Prerequisites

  • python >= 3.7
  • paddlepaddle >= 2.2

For more information about PaddlePaddle installation, please refer to PaddlePaddle's website.

Python pip Installation

pip install --upgrade paddlenlp

Quick Start

Taskflow aims to provide off-the-shelf pre-built NLP tasks covering both NLU and NLG scenarios, with extremely fast inference that satisfies industrial applications.

from paddlenlp import Taskflow

# Chinese Word Segmentation
seg = Taskflow("word_segmentation")
seg("第十四届全运会在西安举办")
>>> ['第十四届', '全运会', '在', '西安', '举办']

# POS Tagging
tag = Taskflow("pos_tagging")
tag("第十四届全运会在西安举办")
>>> [('第十四届', 'm'), ('全运会', 'nz'), ('在', 'p'), ('西安', 'LOC'), ('举办', 'v')]

# Named Entity Recognition
ner = Taskflow("ner")
ner("《孤女》是2010年九州出版社出版的小说,作者是余兼羽")
>>> [('《', 'w'), ('孤女', '作品类_实体'), ('》', 'w'), ('是', '肯定词'), ('2010年', '时间类'), ('九州出版社', '组织机构类'), ('出版', '场景事件'), ('的', '助词'), ('小说', '作品类_概念'), (',', 'w'), ('作者', '人物类_概念'), ('是', '肯定词'), ('余兼羽', '人物类_实体')]

# Dependency Parsing
ddp = Taskflow("dependency_parsing")
ddp("9月9日上午纳达尔在亚瑟·阿什球场击败俄罗斯球员梅德韦杰夫")
>>> [{'word': ['9月9日', '上午', '纳达尔', '在', '亚瑟·阿什球场', '击败', '俄罗斯', '球员', '梅德韦杰夫'], 'head': [2, 6, 6, 5, 6, 0, 8, 9, 6], 'deprel': ['ATT', 'ADV', 'SBV', 'MT', 'ADV', 'HED', 'ATT', 'ATT', 'VOB']}]

# Sentiment Analysis
senta = Taskflow("sentiment_analysis")
senta("这个产品用起来真的很流畅,我非常喜欢")
>>> [{'text': '这个产品用起来真的很流畅,我非常喜欢', 'label': 'positive', 'score': 0.9938690066337585}]

API Reference

  • Supports LUGE dataset loading and is compatible with Hugging Face Datasets. For more details please refer to Dataset API.
  • Use Hugging Face-style APIs to load 500+ selected transformer models with fast download speed. For more information please refer to Transformers API.
  • One line of code to load pre-trained word embeddings. For more usage please refer to Embedding API.

Please find all PaddleNLP API Reference from our readthedocs.

Community

Slack

To connect with other users and contributors, you are welcome to join our Slack channel.

WeChat

Scan the QR code below with WeChat ⬇️ to join the official technical exchange group. We look forward to your participation.

Citation

If you find PaddleNLP useful in your research, please consider citing:

@misc{paddlenlp,
    title={PaddleNLP: An Easy-to-use and High Performance NLP Library},
    author={PaddleNLP Contributors},
    howpublished = {\url{https://github.com/PaddlePaddle/PaddleNLP}},
    year={2021}
}

Acknowledgements

We have borrowed the excellent design of Hugging Face's Transformers 🤗 for pretrained model usage, and we would like to express our gratitude to the authors of Hugging Face and its open source community.

License

PaddleNLP is provided under the Apache-2.0 License.


Download files

Download the file for your platform.

Source Distribution

paddlenlp-2.4.7.tar.gz (1.7 MB)

Uploaded Source

Built Distribution

paddlenlp-2.4.7-py3-none-any.whl (2.1 MB)

Uploaded Python 3

File details

Details for the file paddlenlp-2.4.7.tar.gz.

File metadata

  • Download URL: paddlenlp-2.4.7.tar.gz
  • Upload date:
  • Size: 1.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.15

File hashes

Hashes for paddlenlp-2.4.7.tar.gz

  • SHA256: 5785b94ad8b2e465ab0e27a40a5888a4d84b592f4de7e2b749fa8a8f193ba389
  • MD5: e20c8e58f7f0fc85a497025260ab92bb
  • BLAKE2b-256: 5e305833dfe47d9778e159a30f11631ae35a04314be172512a825b2c11c2c0d8

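The published digests can be checked locally with Python's standard library. A minimal sketch; the path and expected digest in the trailing comment are the ones listed above for the source distribution:

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Compare against the SHA256 digest listed above, e.g.:
# expected = "5785b94ad8b2e465ab0e27a40a5888a4d84b592f4de7e2b749fa8a8f193ba389"
# assert sha256_of("paddlenlp-2.4.7.tar.gz") == expected
```

A mismatch means the download is corrupted or has been tampered with and should be discarded.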

File details

Details for the file paddlenlp-2.4.7-py3-none-any.whl.

File metadata

  • Download URL: paddlenlp-2.4.7-py3-none-any.whl
  • Upload date:
  • Size: 2.1 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.15

File hashes

Hashes for paddlenlp-2.4.7-py3-none-any.whl

  • SHA256: 31f04e99ae7ef348bcf6d65ab5ed57d0f07b818b1b210bf159cdbffe8b56149a
  • MD5: 274b57a2b2857bbfe5fce46e933101ac
  • BLAKE2b-256: 15b694067aaa45d7ee06335fdf9f29e0d46fe421d93c6a55f9643c2bd6fad7b0

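pip can also enforce these digests at install time. A minimal requirements-file sketch (a config fragment, shown with the wheel's SHA256 digest listed above):

```
paddlenlp==2.4.7 \
    --hash=sha256:31f04e99ae7ef348bcf6d65ab5ed57d0f07b818b1b210bf159cdbffe8b56149a
```

Installing with pip install --require-hashes -r requirements.txt makes pip refuse any artifact whose digest does not match.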
