ja_sentence
Project description
A lightweight sentence tokenizer for Japanese.
Sample code:

from ja_sentence.tokenizer import tokenize

paragraph_str = "えー!?くれるの?本当にいいの…?嬉しい!!"
sentence_list = tokenize(paragraph_str)
for sentence in sentence_list:
    print(sentence)
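The package's internal splitting logic isn't shown on this page, but the behavior can be approximated with a minimal regex-based sketch. This is an illustration of the general technique, not ja_sentence's actual implementation; the terminator set and the helper name simple_tokenize are assumptions.

```python
import re

# Split after a run of sentence-ending punctuation (an assumed terminator
# set; the real library may handle more cases). The lookbehind/lookahead
# pair places the boundary only after the *last* terminator in a run, so
# the punctuation stays attached to the sentence it ends.
_SENTENCE_BOUNDARY = re.compile(r"(?<=[。!?!?])(?![。!?!?])")

def simple_tokenize(paragraph: str) -> list:
    """Naively split a Japanese paragraph into sentences."""
    return [s for s in _SENTENCE_BOUNDARY.split(paragraph) if s]

print(simple_tokenize("えー!?くれるの?本当にいいの…?嬉しい!!"))
# → ['えー!?', 'くれるの?', '本当にいいの…?', '嬉しい!!']
```

Note that this sketch does not split on the ellipsis "…" alone, so "本当にいいの…?" stays together as one sentence; a production tokenizer would also need to handle quotes, brackets, and unterminated final sentences.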
Download files

Source Distribution: ja_sentence-0.0.4.tar.gz (2.5 kB)
Built Distribution: ja_sentence-0.0.4-py3-none-any.whl
Hashes for ja_sentence-0.0.4-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 1c4ab00ac8e502d9a253c72e1743252baff95344ace6e4c588f7354f38cfb940
MD5 | c7714a9feb6524631eae4bba1f4481f8
BLAKE2b-256 | 78f34965da29b0ef122a61bd2c3aca44ac53967ae6bcce3977103c6866576abe