SuPar-Kanbun

Tokenizer, POS-Tagger and Dependency-Parser for Classical Chinese Texts (漢文/文言文) with spaCy, Transformers and SuPar.

Basic usage

>>> import suparkanbun
>>> nlp=suparkanbun.load()
>>> doc=nlp("不入虎穴不得虎子")
>>> print(type(doc))
<class 'spacy.tokens.doc.Doc'>
>>> print(suparkanbun.to_conllu(doc))
# text = 不入虎穴不得虎子
1	不	不	ADV	v,副詞,否定,無界	Polarity=Neg	2	advmod	_	Gloss=not|SpaceAfter=No
2	入	入	VERB	v,動詞,行為,移動	_	0	root	_	Gloss=enter|SpaceAfter=No
3	虎	虎	NOUN	n,名詞,主体,動物	_	4	nmod	_	Gloss=tiger|SpaceAfter=No
4	穴	穴	NOUN	n,名詞,固定物,地形	Case=Loc	2	obj	_	Gloss=cave|SpaceAfter=No
5	不	不	ADV	v,副詞,否定,無界	Polarity=Neg	6	advmod	_	Gloss=not|SpaceAfter=No
6	得	得	VERB	v,動詞,行為,得失	_	2	parataxis	_	Gloss=get|SpaceAfter=No
7	虎	虎	NOUN	n,名詞,主体,動物	_	8	nmod	_	Gloss=tiger|SpaceAfter=No
8	子	子	NOUN	n,名詞,人,関係	_	6	obj	_	Gloss=child|SpaceAfter=No

>>> import deplacy
>>> deplacy.render(doc)
不 ADV  <════╗   advmod
入 VERB ═══╗═╝═╗ ROOT
虎 NOUN <╗ ║   ║ nmod
穴 NOUN ═╝<╝   ║ obj
不 ADV  <════╗ ║ advmod
得 VERB ═══╗═╝<╝ parataxis
虎 NOUN <╗ ║     nmod
子 NOUN ═╝<╝     obj

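Because nlp() returns an ordinary spacy.tokens.doc.Doc, the same annotations can be read through spaCy's standard token attributes instead of the CoNLL-U dump; a minimal sketch (the attribute names below are spaCy's, not suparkanbun-specific):

>>> for t in doc:
...     print(t.i+1,t.text,t.lemma_,t.pos_,t.tag_,t.dep_,t.head.i+1)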
suparkanbun.load() has two options: suparkanbun.load(BERT="roberta-classical-chinese-base-char",Danku=False). With the option Danku=True the pipeline tries to segment sentences automatically. Available BERT options are:

  • BERT="roberta-classical-chinese-base-char" utilizes roberta-classical-chinese-base-char (default)
  • BERT="roberta-classical-chinese-large-char" utilizes roberta-classical-chinese-large-char
  • BERT="guwenbert-base" utilizes GuwenBERT-base
  • BERT="guwenbert-large" utilizes GuwenBERT-large
  • BERT="sikubert" utilizes SikuBERT
  • BERT="sikuroberta" utilizes SikuRoBERTa
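For running text without marked sentence boundaries, a minimal sketch of the Danku=True pipeline (assuming the segmenter's boundaries are exposed through spaCy's standard doc.sents iterator):

>>> import suparkanbun
>>> nlp=suparkanbun.load(Danku=True)
>>> doc=nlp("不入虎穴不得虎子")
>>> for s in doc.sents:
...     print(s.text)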

Installation for Linux

pip3 install suparkanbun --user

Installation for Cygwin64

Make sure to install the python37-devel, python37-pip, python37-cython, python37-numpy, python37-wheel, gcc-g++, mingw64-x86_64-gcc-g++, git, curl, make, and cmake packages, and then run:

curl -L https://raw.githubusercontent.com/KoichiYasuoka/CygTorch/master/installer/supar.sh | sh
pip3.7 install suparkanbun

Installation for Jupyter Notebook (Google Colaboratory)

!pip install suparkanbun 

Try the notebook for Google Colaboratory.

Author

Koichi Yasuoka (安岡孝一)

Reference

Koichi Yasuoka, Christian Wittern, Tomohiko Morioka, Takumi Ikeda, Naoki Yamazaki, Yoshihiro Nikaido, Shingo Suzuki, Shigeki Moro, Kazunori Fujita: Designing Universal Dependencies for Classical Chinese and Its Application, Journal of Information Processing Society of Japan, Vol.63, No.2 (February 2022), pp.355-363.


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

suparkanbun-1.5.4-py3-none-any.whl (957.6 kB, uploaded for Python 3)

File details

Details for the file suparkanbun-1.5.4-py3-none-any.whl.

File metadata

  • File: suparkanbun-1.5.4-py3-none-any.whl
  • Size: 957.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.2

File hashes

Hashes for suparkanbun-1.5.4-py3-none-any.whl:

  • SHA256: 368d07ea47564d8a59c2eda2617ec1595d0504f8c6270ece7270e6273620c9cb
  • MD5: 1d9d50d92c555baa863e0253a7212865
  • BLAKE2b-256: cf368d73ddd05dba535d55b2849a151a5475a85a3d7a3a6587587f85d93aa49b
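To verify a downloaded wheel against the SHA256 digest above, a minimal sketch using Python's hashlib (the wheel is assumed to be in the current directory):

import hashlib

# SHA256 digest listed above for suparkanbun-1.5.4-py3-none-any.whl
expected = "368d07ea47564d8a59c2eda2617ec1595d0504f8c6270ece7270e6273620c9cb"
with open("suparkanbun-1.5.4-py3-none-any.whl", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
print("OK" if actual == expected else "hash mismatch")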

