# nltokeniz.py
[![License](https://img.shields.io/badge/license-unlicense-lightgray.svg)](https://unlicense.org)
Natural language tokenizer for English and Japanese documents in Python
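The page itself documents no API, so the snippet below is only a hypothetical usage sketch: the `tokenize` function and its `language` parameter are assumptions for illustration, not confirmed by this page. It shows the kind of call a bilingual tokenizer like this typically exposes.

```python
# Hypothetical usage sketch; nltokeniz's real API is not documented on this page.
# The `tokenize` name and `language` parameter are assumed for illustration.
import nltokeniz

# English text is typically split on whitespace and punctuation.
print(nltokeniz.tokenize("Tokenizers split text into words.", language="en"))

# Japanese has no word-delimiting spaces, so tokenizing it requires
# morphological analysis (e.g. a MeCab-style dictionary lookup).
print(nltokeniz.tokenize("自然言語処理は面白い。", language="ja"))
```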
## License
[The Unlicense](https://unlicense.org)
## Download files
- Source distribution: `nltokeniz-0.0.0.tar.gz` (2.9 kB)
- Built distribution: `nltokeniz-0.0.0-py3.5.egg` (6.4 kB)
### Hashes for nltokeniz-0.0.0-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | `7c9bb3af8f76a7c1eee975f4b70a0d243cd653da5bfabf26ed128c33d0331fc0`
MD5 | `66b5cfd9c61b9d80ae1f5b98a5128275`
BLAKE2b-256 | `80f38b36b97d6bde7c8c83a8baf9604ad7a8559d579ec6b68354cb1f7d4abc8e`
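As a quick sanity check, a downloaded file can be compared against the SHA256 digest above. A minimal sketch using Python's standard `hashlib` (the filename matches the wheel named in the table):

```python
import hashlib

# SHA256 digest published in the table above.
expected = "7c9bb3af8f76a7c1eee975f4b70a0d243cd653da5bfabf26ed128c33d0331fc0"

# Hash the downloaded wheel and compare against the published digest.
with open("nltokeniz-0.0.0-py3-none-any.whl", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "hash mismatch")
```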