Yoctol Natural Language Tokenizer
# tokenizer-hub
yoctol 乂卍oO badass tokenizer Oo卍乂
Tokenizers share the same interface as Jieba:
```python
from tokenizer_hub import XXX_tokenizer

tokenizer = XXX_tokenizer()
tokenizer.lcut('我来到北京清华大学')
# ['我', '来到', '北京', '清华大学']
```
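What "the same interface as Jieba" means in practice is that every tokenizer exposes an `lcut()` method that takes a sentence and returns a list of tokens, just like `jieba.lcut`. A minimal sketch of that contract, using a hypothetical `WhitespaceTokenizer` that is not part of tokenizer-hub:

```python
# Hypothetical tokenizer illustrating the Jieba-style lcut() contract:
# text in, list of tokens out. Not an actual class in tokenizer-hub.
class WhitespaceTokenizer:
    def lcut(self, sentence):
        """Return the tokens of `sentence` as a list, like jieba.lcut."""
        return sentence.split()

tokenizer = WhitespaceTokenizer()
tokenizer.lcut('hello tokenizer hub')
# ['hello', 'tokenizer', 'hub']
```

Because all tokenizers follow this one method signature, they can be swapped in and out of a pipeline without changing the calling code.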
## Download files
Source distribution:

- `tokenizer-hub-0.0.0.tar.gz` (7.0 kB)

Built distribution:

- `tokenizer_hub-0.0.0-py3.5.egg` (33.9 kB)
Hashes for `tokenizer_hub-0.0.0-py3-none-any.whl`:

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | a96d01e2c4243c5e9716216d8b6765b522949d90ff5af00fa205197881352c32 |
| MD5 | 75f83be135c66bd8dc89ed7255c5427a |
| BLAKE2b-256 | e06dec703a2d0717cc4c14b556cea01dd0e24cac6dfb5f0f5386ff161973901e |