Light-weight sentence tokenizer for Japanese.
Project description
Sample code:

from ja_sentence.tokenizer import tokenize

paragraph_str = "えー!?くれるの?本当にいいの…?嬉しい!!"
sentence_list = tokenize(paragraph_str)
for sentence in sentence_list:
    print(sentence)
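To illustrate what a sentence tokenizer of this kind does, here is a minimal regex-based splitter. This is only a sketch, not ja_sentence's actual implementation: it splits after runs of common Japanese sentence terminators (。!?, both halfwidth and fullwidth) while keeping the terminators attached to their sentence.

```python
import re

# Terminators: 。 plus halfwidth and fullwidth ! and ?.
# A sentence is a run of non-terminator characters followed by
# one or more terminators (so "!?" stays with its sentence).
_SENTENCE = re.compile(r'[^。!?!?]*[。!?!?]+')

def split_sentences(paragraph: str) -> list[str]:
    """Naive splitter for illustration; not ja_sentence's algorithm."""
    sentences = [m.group() for m in _SENTENCE.finditer(paragraph)]
    # Keep any trailing text that has no closing terminator.
    tail = _SENTENCE.sub('', paragraph)
    if tail:
        sentences.append(tail)
    return sentences

print(split_sentences("えー!?くれるの?本当にいいの…?嬉しい!!"))
```

Note that a real tokenizer also has to handle cases this sketch ignores, such as terminators inside quotes or parentheses.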
Download files
Source distribution: ja_sentence-0.0.5.tar.gz (2.5 kB)
Built distribution: ja_sentence-0.0.5-py3-none-any.whl
Hashes for ja_sentence-0.0.5-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | cf379e3d1498f58648532bc625d530e350cb79036f000e3abc2fa028507cefb7
MD5 | aa92b1685f299803f19446ff8efd3900
BLAKE2b-256 | cf0fc62609e6c77553632f4e8998c58b40d130814ee923979137720ebd6a3456