Light-weight sentence tokenizer for Japanese.
Project description
Sample Code:
from ja_sentence.tokenizer import tokenize

paragraph_str = "えー!?くれるの?本当にいいの…?嬉しい!!"
sentence_list = tokenize(paragraph_str)
for sentence in sentence_list:
    print(sentence)
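The library's internals are not shown on this page, but the kind of rule-based splitting it performs can be sketched in a few lines of standard-library regex. This is only an illustration under assumptions, not ja_sentence's actual implementation; the name `naive_tokenize` is hypothetical:

```python
import re

# Zero-width split point: just after a run of Japanese/ASCII sentence
# enders (。．！？!?…), but not inside such a run, so "!?" stays together.
_SENT_END = re.compile(r'(?<=[。．！？!?…])(?![。．！？!?…])')

def naive_tokenize(paragraph):
    """Naive rule-based sentence splitter (illustrative only)."""
    return [s for s in _SENT_END.split(paragraph) if s]

print(naive_tokenize("えー!?くれるの?本当にいいの…?嬉しい!!"))
```

A real tokenizer must also handle closing quotes and brackets after the terminator, which is why a dedicated library is preferable to a one-off regex.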
Project details
Download files
Download the file for your platform.
Source Distribution
ja_sentence-0.0.3.tar.gz
(2.5 kB)
Built Distribution
Hashes for ja_sentence-0.0.3-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 4fdb2271a4426db92030ddfb1df11aca72075dd646847c63552a606550e7b27c
MD5 | 6fbd2ab816d53735b5f7b54729f57111
BLAKE2b-256 | 947c482a170f813b564c85a36c9ad9370709e8bcbdda87554ca97706aca174c8