CKIP Transformers
Contributors
Wei-Yun Ma at CKIP (Maintainer).
Models
- Language Models
  - ALBERT Tiny: ckiplab/albert-tiny-chinese
  - ALBERT Base: ckiplab/albert-base-chinese
  - BERT Tiny: ckiplab/bert-tiny-chinese
  - BERT Base: ckiplab/bert-base-chinese
  - GPT2 Tiny: ckiplab/gpt2-tiny-chinese
  - GPT2 Base: ckiplab/gpt2-base-chinese
- NLP Task Models
  - ALBERT Tiny (Word Segmentation): ckiplab/albert-tiny-chinese-ws
  - ALBERT Tiny (Part-of-Speech Tagging): ckiplab/albert-tiny-chinese-pos
  - ALBERT Tiny (Named-Entity Recognition): ckiplab/albert-tiny-chinese-ner
  - ALBERT Base (Word Segmentation): ckiplab/albert-base-chinese-ws
  - ALBERT Base (Part-of-Speech Tagging): ckiplab/albert-base-chinese-pos
  - ALBERT Base (Named-Entity Recognition): ckiplab/albert-base-chinese-ner
  - BERT Tiny (Word Segmentation): ckiplab/bert-tiny-chinese-ws
  - BERT Tiny (Part-of-Speech Tagging): ckiplab/bert-tiny-chinese-pos
  - BERT Tiny (Named-Entity Recognition): ckiplab/bert-tiny-chinese-ner
  - BERT Base (Word Segmentation): ckiplab/bert-base-chinese-ws
  - BERT Base (Part-of-Speech Tagging): ckiplab/bert-base-chinese-pos
  - BERT Base (Named-Entity Recognition): ckiplab/bert-base-chinese-ner
Model Usage
pip install -U transformers
from transformers import (
BertTokenizerFast,
AutoModelForMaskedLM,
AutoModelForCausalLM,
AutoModelForTokenClassification,
)
# masked language model (ALBERT, BERT)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForMaskedLM.from_pretrained('ckiplab/albert-tiny-chinese') # or other models above
# causal language model (GPT2)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForCausalLM.from_pretrained('ckiplab/gpt2-base-chinese') # or other models above
# NLP task model (WS, POS, NER)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForTokenClassification.from_pretrained('ckiplab/albert-tiny-chinese-ws') # or other models above
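As a quick sanity check, the masked-LM checkpoints above can be dropped into the HuggingFace fill-mask pipeline (a minimal sketch; the example sentence is ours and the predictions vary by model):

from transformers import pipeline

# Build a fill-mask pipeline from a CKIP masked LM; the CKIP models reuse the
# bert-base-chinese tokenizer rather than shipping their own.
fill_mask = pipeline(
    "fill-mask",
    model="ckiplab/albert-tiny-chinese",
    tokenizer="bert-base-chinese",
)

# [MASK] is the BERT mask token; the pipeline returns the top candidates.
for pred in fill_mask("台北是台灣的[MASK]都。"):
    print(pred["token_str"], round(pred["score"], 4))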
Model Fine-Tuning
See the HuggingFace Transformers examples for fine-tuning these models:
- https://github.com/huggingface/transformers/tree/master/examples
- https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling
- https://github.com/huggingface/transformers/tree/master/examples/pytorch/token-classification
python run_mlm.py \
--model_name_or_path ckiplab/albert-tiny-chinese \ # or other models above
--tokenizer_name bert-base-chinese \
...
python run_ner.py \
--model_name_or_path ckiplab/albert-tiny-chinese-ws \ # or other models above
--tokenizer_name bert-base-chinese \
...
Model Performance
| Model | #Parameters | Perplexity† | WS (F1)‡ | POS (Acc)‡ | NER (F1)‡ |
|---|---|---|---|---|---|
| ckiplab/albert-tiny-chinese | 4M | 4.80 | 96.66% | 94.48% | 71.17% |
| ckiplab/albert-base-chinese | 11M | 2.65 | 97.33% | 95.30% | 79.47% |
| ckiplab/bert-tiny-chinese | 12M | 8.07 | 96.98% | 95.11% | 74.21% |
| ckiplab/bert-base-chinese | 102M | 1.88 | 97.60% | 95.67% | 81.18% |
| ckiplab/gpt2-tiny-chinese | 4M | 16.94 | – | – | – |
| ckiplab/gpt2-base-chinese | 102M | 8.36 | – | – | – |
| voidful/albert_chinese_tiny | 4M | 74.93 | – | – | – |
| voidful/albert_chinese_base | 11M | 22.34 | – | – | – |
| bert-base-chinese | 102M | 2.53 | – | – | – |
Training Corpus
- CNA: https://catalog.ldc.upenn.edu/LDC2011T13
  Chinese Gigaword Fifth Edition, CNA (Central News Agency) portion.
- ZhWiki: Chinese Wikipedia dump.
- ASBC: http://asbc.iis.sinica.edu.tw
  Academia Sinica Balanced Corpus of Modern Chinese, release 4.0.
- OntoNotes: https://catalog.ldc.upenn.edu/LDC2013T19
  Chinese portion of OntoNotes Release 5.0.
| Dataset | #Documents | #Lines | #Characters | Line Type |
|---|---|---|---|---|
| CNA | 2,559,520 | 13,532,445 | 1,219,029,974 | Paragraph |
| ZhWiki | 1,106,783 | 5,918,975 | 495,446,829 | Paragraph |
| ASBC | 19,247 | 1,395,949 | 17,572,374 | Clause |
| OntoNotes | 1,911 | 48,067 | 1,568,491 | Sentence |
| CNA+ZhWiki | #Documents | #Lines | #Characters |
|---|---|---|---|
| Train | 3,606,303 | 18,986,238 | 4,347,517,682 |
| Dev | 30,000 | 148,077 | 32,888,978 |
| Test | 30,000 | 151,241 | 35,216,818 |
| ASBC | #Documents | #Lines | #Words | #Characters |
|---|---|---|---|---|
| Train | 15,247 | 1,183,260 | 9,480,899 | 14,724,250 |
| Dev | 2,000 | 52,677 | 448,964 | 741,323 |
| Test | 2,000 | 160,012 | 1,315,129 | 2,106,799 |
| OntoNotes | #Documents | #Lines | #Characters | #Named-Entities |
|---|---|---|---|---|
| Train | 1,511 | 43,362 | 1,367,658 | 68,947 |
| Dev | 200 | 2,304 | 93,535 | 7,186 |
| Test | 200 | 2,401 | 107,298 | 6,977 |
NLP Tools
- (WS) Word Segmentation
- (POS) Part-of-Speech Tagging
- (NER) Named Entity Recognition
Installation
pip install -U ckip-transformers
Requirements:
NLP Tools Usage
1. Import module
from ckip_transformers.nlp import CkipWordSegmenter, CkipPosTagger, CkipNerChunker
2. Load models
# Initialize drivers
ws_driver = CkipWordSegmenter(model="bert-base")
pos_driver = CkipPosTagger(model="bert-base")
ner_driver = CkipNerChunker(model="bert-base")
# Initialize drivers with custom checkpoints
ws_driver = CkipWordSegmenter(model_name="path_to_your_model")
pos_driver = CkipPosTagger(model_name="path_to_your_model")
ner_driver = CkipNerChunker(model_name="path_to_your_model")
# Use CPU
ws_driver = CkipWordSegmenter(device=-1)
# Use GPU:0
ws_driver = CkipWordSegmenter(device=0)
3. Run pipeline
# Input text
text = [
"傅達仁今將執行安樂死,卻突然爆出自己20年前遭緯來體育台封殺,他不懂自己哪裡得罪到電視台。",
"美國參議院針對今天總統布什所提名的勞工部長趙小蘭展開認可聽證會,預料她將會很順利通過參議院支持,成為該國有史以來第一位的華裔女性內閣成員。",
"空白 也是可以的~",
]
# Run pipeline
ws = ws_driver(text)
pos = pos_driver(ws)
ner = ner_driver(text)
# Enable sentence segmentation
ws = ws_driver(text, use_delim=True)
ner = ner_driver(text, use_delim=True)
# Disable sentence segmentation
pos = pos_driver(ws, use_delim=False)
# Use newline characters and tabs as sentence delimiters
pos = pos_driver(ws, delim_set='\n\t')
# Set the batch size and maximum sentence length
ws = ws_driver(text, batch_size=256, max_length=128)
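If you want all three tasks as one call, the drivers compose naturally into a small helper (a sketch; analyze is our own name, not part of the package):

def analyze(sentences, ws_driver, pos_driver, ner_driver):
    """Run WS, POS, and NER over a batch and zip the per-sentence results."""
    ws = ws_driver(sentences)
    pos = pos_driver(ws)         # POS tagging consumes the segmented words
    ner = ner_driver(sentences)  # NER runs on the raw text, not the segmentation
    return list(zip(sentences, ws, pos, ner))

results = analyze(text, ws_driver, pos_driver, ner_driver)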
4. Show results
# Pack word segmentation and part-of-speech results
def pack_ws_pos_sentence(sentence_ws, sentence_pos):
assert len(sentence_ws) == len(sentence_pos)
res = []
for word_ws, word_pos in zip(sentence_ws, sentence_pos):
res.append(f"{word_ws}({word_pos})")
return "\u3000".join(res)
# Show results
for sentence, sentence_ws, sentence_pos, sentence_ner in zip(text, ws, pos, ner):
print(sentence)
print(pack_ws_pos_sentence(sentence_ws, sentence_pos))
for entity in sentence_ner:
print(entity)
print()
傅達仁今將執行安樂死,卻突然爆出自己20年前遭緯來體育台封殺,他不懂自己哪裡得罪到電視台。
傅達仁(Nb) 今(Nd) 將(D) 執行(VC) 安樂死(Na) ,(COMMACATEGORY) 卻(D) 突然(D) 爆出(VJ) 自己(Nh) 20(Neu) 年(Nd) 前(Ng) 遭(P) 緯來(Nb) 體育台(Na) 封殺(VC) ,(COMMACATEGORY) 他(Nh) 不(D) 懂(VK) 自己(Nh) 哪裡(Ncd) 得罪到(VC) 電視台(Nc) 。(PERIODCATEGORY)
NerToken(word='傅達仁', ner='PERSON', idx=(0, 3))
NerToken(word='20年', ner='DATE', idx=(18, 21))
NerToken(word='緯來體育台', ner='ORG', idx=(23, 28))
美國參議院針對今天總統布什所提名的勞工部長趙小蘭展開認可聽證會,預料她將會很順利通過參議院支持,成為該國有史以來第一位的華裔女性內閣成員。
美國(Nc) 參議院(Nc) 針對(P) 今天(Nd) 總統(Na) 布什(Nb) 所(D) 提名(VC) 的(DE) 勞工部長(Na) 趙小蘭(Nb) 展開(VC) 認可(VC) 聽證會(Na) ,(COMMACATEGORY) 預料(VE) 她(Nh) 將(D) 會(D) 很(Dfa) 順利(VH) 通過(VC) 參議院(Nc) 支持(VC) ,(COMMACATEGORY) 成為(VG) 該(Nes) 國(Nc) 有史以來(D) 第一(Neu) 位(Nf) 的(DE) 華裔(Na) 女性(Na) 內閣(Na) 成員(Na) 。(PERIODCATEGORY)
NerToken(word='美國參議院', ner='ORG', idx=(0, 5))
NerToken(word='今天', ner='LOC', idx=(7, 9))
NerToken(word='布什', ner='PERSON', idx=(11, 13))
NerToken(word='勞工部長', ner='ORG', idx=(17, 21))
NerToken(word='趙小蘭', ner='PERSON', idx=(21, 24))
NerToken(word='認可聽證會', ner='EVENT', idx=(26, 31))
NerToken(word='參議院', ner='ORG', idx=(42, 45))
NerToken(word='第一', ner='ORDINAL', idx=(56, 58))
NerToken(word='華裔', ner='NORP', idx=(60, 62))
空白 也是可以的~
空白(VH) (WHITESPACE) 也(D) 是(SHI) 可以(VH) 的(T) ~(FW)
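Note that idx appears to give (start, end) character offsets into the original input, so each entity can be recovered by slicing (a quick check under that assumption):

# Assuming NerToken.idx is a (start, end) pair of character offsets into the
# original sentence, e.g. idx=(0, 3) for 傅達仁 above:
for sentence, sentence_ner in zip(text, ner):
    for entity in sentence_ner:
        start, end = entity.idx
        assert sentence[start:end] == entity.word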
NLP Tools Performance
CKIP Transformers vs. Monpa & Jieba

| Tool | WS (F1) | POS (Acc) | WS+POS (F1) | NER (F1) |
|---|---|---|---|---|
| CKIP BERT Base | 97.60% | 95.67% | 94.19% | 81.18% |
| CKIP ALBERT Base | 97.33% | 95.30% | 93.52% | 79.47% |
| CKIP BERT Tiny | 96.98% | 95.08% | 93.13% | 74.20% |
| CKIP ALBERT Tiny | 96.66% | 94.48% | 92.25% | 71.17% |
| Monpa† | 92.58% | – | 83.88% | – |
| Jieba | 81.18% | – | – | – |
CKIP Transformers vs. CkipTagger

| Tool | WS (F1) | POS (Acc) | WS+POS (F1) | NER (F1) |
|---|---|---|---|---|
| CKIP BERT Base | 97.84% | 96.46% | 94.91% | 79.20% |
| CkipTagger | 97.33% | 97.20% | 94.75% | 77.87% |
License
Copyright (c) 2023 CKIP Lab under the GPL-3.0 License.
File details
Details for the file ckip-transformers-0.3.4.tar.gz.
File metadata
- Download URL: ckip-transformers-0.3.4.tar.gz
- Upload date:
- Size: 32.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.16
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 03c611efabba588141842de650b059cb7a70c7852ef062ea6dfa78dcf8827c91 |
| MD5 | a443d48734a50d435b964ece12fe7e32 |
| BLAKE2b-256 | 9a315c34c19ae6a562a0319d95a9fa03ed674e5daf31618db59d6293838497f3 |
File details
Details for the file ckip_transformers-0.3.4-py3-none-any.whl.
File metadata
- Download URL: ckip_transformers-0.3.4-py3-none-any.whl
- Upload date:
- Size: 26.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.16
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 5e79fc0b4af7ad7742e8e10c091ce4fafb02f14ac0ae2ba3c9917875e1ff3c54 |
| MD5 | 4db2630047bb6a12847c1b7c5c6d2ef0 |
| BLAKE2b-256 | 25a1deeacef7742ba978227c186c75367a0d7413dd1f12c9b84a0e3c9efe495a |