Tokenizer, POS-tagger and Dependency-parser for Classical Chinese
SuPar-Kanbun
Tokenizer, POS-Tagger and Dependency-Parser for Classical Chinese Texts (漢文/文言文) with spaCy, Transformers and SuPar.
Basic usage
>>> import suparkanbun
>>> nlp=suparkanbun.load()
>>> doc=nlp("不入虎穴不得虎子")
>>> print(type(doc))
<class 'spacy.tokens.doc.Doc'>
>>> print(suparkanbun.to_conllu(doc))
# text = 不入虎穴不得虎子
1 不 不 ADV v,副詞,否定,無界 Polarity=Neg 2 advmod _ Gloss=not|SpaceAfter=No
2 入 入 VERB v,動詞,行為,移動 _ 0 root _ Gloss=enter|SpaceAfter=No
3 虎 虎 NOUN n,名詞,主体,動物 _ 4 nmod _ Gloss=tiger|SpaceAfter=No
4 穴 穴 NOUN n,名詞,固定物,地形 Case=Loc 2 obj _ Gloss=cave|SpaceAfter=No
5 不 不 ADV v,副詞,否定,無界 Polarity=Neg 6 advmod _ Gloss=not|SpaceAfter=No
6 得 得 VERB v,動詞,行為,得失 _ 2 parataxis _ Gloss=get|SpaceAfter=No
7 虎 虎 NOUN n,名詞,主体,動物 _ 8 nmod _ Gloss=tiger|SpaceAfter=No
8 子 子 NOUN n,名詞,人,関係 _ 6 obj _ Gloss=child|SpaceAfter=No
>>> import deplacy
>>> deplacy.render(doc)
不 ADV  <════╗   advmod
入 VERB ═══╗═╝═╗ ROOT
虎 NOUN <╗ ║   ║ nmod
穴 NOUN ═╝<╝   ║ obj
不 ADV  <════╗ ║ advmod
得 VERB ═══╗═╝<╝ parataxis
虎 NOUN <╗ ║     nmod
子 NOUN ═╝<╝     obj
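Since nlp() returns an ordinary spacy.tokens.Doc, the annotations can also be read without to_conllu(), using only the generic spaCy token attributes (text, lemma_, pos_, tag_, dep_, head). The loop below is a minimal sketch of that; nothing in it is suparkanbun-specific:
>>> import suparkanbun
>>> nlp=suparkanbun.load()
>>> doc=nlp("不入虎穴不得虎子")
>>> for token in doc:
...     print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_, token.head.text)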
suparkanbun.load() has two options: suparkanbun.load(BERT="roberta-classical-chinese-base-char",Danku=False). With the option Danku=True the pipeline tries to segment sentences automatically (see the example after the list). Available BERT options are:
- BERT="roberta-classical-chinese-base-char" utilizes roberta-classical-chinese-base-char (default)
- BERT="roberta-classical-chinese-large-char" utilizes roberta-classical-chinese-large-char
- BERT="guwenbert-base" utilizes GuwenBERT-base
- BERT="guwenbert-large" utilizes GuwenBERT-large
- BERT="sikubert" utilizes SikuBERT
- BERT="sikuroberta" utilizes SikuRoBERTa
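For example, to switch to GuwenBERT and let the pipeline split an unpunctuated passage into sentences, a call along the following lines should work. This is only a sketch: the input text is arbitrary, and iterating doc.sents assumes the Danku segmenter marks sentence boundaries on the spaCy Doc in the usual way.
>>> import suparkanbun
>>> nlp=suparkanbun.load(BERT="guwenbert-base",Danku=True)
>>> doc=nlp("不入虎穴不得虎子己所不欲勿施於人")
>>> for sentence in doc.sents:
...     print(sentence.text)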
Installation for Linux
pip3 install suparkanbun --user
Installation for Cygwin64
Make sure to get the python37-devel, python37-pip, python37-cython, python37-numpy, python37-wheel, gcc-g++, mingw64-x86_64-gcc-g++, git, curl, make, and cmake packages, and then:
curl -L https://raw.githubusercontent.com/KoichiYasuoka/CygTorch/master/installer/supar.sh | sh
pip3.7 install suparkanbun
Installation for Jupyter Notebook (Google Colaboratory)
!pip install suparkanbun
Try notebook for Google Colaboratory.
Author
Koichi Yasuoka (安岡孝一)
Reference
Koichi Yasuoka, Christian Wittern, Tomohiko Morioka, Takumi Ikeda, Naoki Yamazaki, Yoshihiro Nikaido, Shingo Suzuki, Shigeki Moro, Kazunori Fujita: Designing Universal Dependencies for Classical Chinese and Its Application, Journal of Information Processing Society of Japan, Vol.63, No.2 (February 2022), pp.355-363.
File details
Details for the file suparkanbun-1.5.4-py3-none-any.whl.
File metadata
- Download URL: suparkanbun-1.5.4-py3-none-any.whl
- Upload date:
- Size: 957.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 368d07ea47564d8a59c2eda2617ec1595d0504f8c6270ece7270e6273620c9cb
MD5 | 1d9d50d92c555baa863e0253a7212865
BLAKE2b-256 | cf368d73ddd05dba535d55b2849a151a5475a85a3d7a3a6587587f85d93aa49b