
Model-based Korean Text Tokenizer in Python


PyKoTokenizer

PyKoTokenizer is a deep learning (RNN) model-based word tokenizer for the Korean language.

Segmentation of Korean Words

Written Korean does employ white-space characters. More often than not, however, Korean words occur in text concatenated directly to adjacent words, with no intervening space. This low degree of word separation in writing is due in part to an abundance of what linguists call "endoclitics" in the language.

Although the language has been subjected to principled and rigorous study for several decades, the question of which strings of sounds, or letters, are words and which are not has been settled only among a small circle of linguists. This advancement has not yet reached the general public, and NLP engineers working on Korean have to make do with whatever inconsistent grammars they happen to have access to. Thus, a major source of difficulty in developing competent Korean text processors has been, and still is, the notion of a word as the smallest syntactic unit.
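To see the problem concretely: the example sentence used later in this README is written, as is common, without a single space, so a naive whitespace tokenizer cannot recover any word boundaries. A minimal illustration, using only the Python standard library:

# The example sentence from this README, written without any spaces:
text = "김형호영화시장분석가는'1987'의네이버영화정보네티즌10점평에서언급된단어들을지난해12월27일부터올해1월10일까지통계프로그램R과KoNLP패키지로텍스트마이닝하여분석했다."

# str.split() finds nothing to split on: the whole sentence is one "token".
tokens = text.split()
print(len(tokens))  # 1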

How to install

Before using this package, please make sure the following dependencies are installed on your system.

  • Python >= 3.7
  • numpy >= 1.21.4
  • pandas >= 1.3.4
  • tensorflow >= 2.7.0
  • h5py >= 3.6.0

Use the following command to install the package:

pip install pykotokenizer
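If you want to confirm that your environment already satisfies the version requirements listed above, a quick check (a sketch using only the packages named in this README) is:

# Print the installed version of each dependency; compare against the
# minimum versions listed above.
import numpy, pandas, tensorflow, h5py

for mod in (numpy, pandas, tensorflow, h5py):
    print(mod.__name__, mod.__version__)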

How to Use

Using KoTokenizer

from pykotokenizer import KoTokenizer

tokenizer = KoTokenizer()
korean_text = "김형호영화시장분석가는'1987'의네이버영화정보네티즌10점평에서언급된단어들을지난해12월27일부터올해1월10일까지통계프로그램R과KoNLP패키지로텍스트마이닝하여분석했다."
tokenizer(korean_text)

Output:

"김 형호 영화 시장 분석가 는 ' 1987 ' 의 네이버 영화 정보 네티즌 10 점평 에서 언급 된 단어 들 을 지난 해 12 월 27 일 부터 올해 1 월 10 일 까지 통계 프로그램 R 과 KoNLP 패키지 로 텍스트 마이닝 하여 분석 했다 ."

Using KoSpacing

from pykotokenizer import KoSpacing

spacing = KoSpacing()
korean_text = "김형호영화시장분석가는'1987'의네이버영화정보네티즌10점평에서언급된단어들을지난해12월27일부터올해1월10일까지통계프로그램R과KoNLP패키지로텍스트마이닝하여분석했다."
spacing(korean_text)

Output:

"김형호 영화시장 분석가는 '1987'의 네이버 영화 정보 네티즌 10점 평에서 언급된 단어들을 지난해 12월 27일부터 올해 1월 10일까지 통계 프로그램 R과 KoNLP 패키지로 텍스트마이닝하여 분석했다."

Credits

This package is a revamped and customized version of two different sources.
