# nltokeniz.py
[![Build Status](https://travis-ci.org/raviqqe/nltokeniz.py.svg?branch=master)](https://travis-ci.org/raviqqe/nltokeniz.py) [![License](https://img.shields.io/badge/license-unlicense-lightgray.svg)](https://unlicense.org)
Natural language tokenizer for English and Japanese documents in Python
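
The project page does not document the package's API, so the following is only a minimal usage sketch. It assumes a top-level `tokenize` function that takes a string and returns a list of tokens; the function name, signature, and behaviour are assumptions, not taken from the package.

```python
# Hypothetical usage sketch: `tokenize` and its signature are assumed,
# not documented by the package itself.
import nltokeniz

# English: tokens would presumably be split on whitespace and punctuation.
print(nltokeniz.tokenize('Hello, world!'))

# Japanese: the text has no spaces, so tokenization would have to rely
# on morphological analysis rather than simple splitting.
print(nltokeniz.tokenize('こんにちは、世界。'))
```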
## License
[The Unlicense](https://unlicense.org)
## Download files

Download the file for your platform.
### Source Distribution

- nltokeniz-0.0.1.tar.gz (3.0 kB)

### Built Distributions

- nltokeniz-0.0.1-py3.5.egg (4.8 kB)
### Hashes for nltokeniz-0.0.1-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 406808f9fd16d09984d856e09e5ad17ac4d42c7f2ed4e1487cf79fe4e9a0e2a0 |
| MD5 | 27a141e586f07695482547011c1e29ea |
| BLAKE2b-256 | b9e1af583e4e2e7f7a1415f249a4b7f1c05e6de5f5e8917c9c5fe682fd4218fa |
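
The digests above can be used to verify a downloaded distribution before installing it. Below is a minimal, standard-library-only sketch that checks the wheel's SHA256 digest against the published value; the filename and digest are taken from the table above, while the helper function itself is new.

```python
import hashlib

# Published SHA256 digest for the wheel, copied from the table above.
EXPECTED_SHA256 = '406808f9fd16d09984d856e09e5ad17ac4d42c7f2ed4e1487cf79fe4e9a0e2a0'


def sha256_of(path, chunk_size=8192):
    """Return the SHA256 hex digest of the file at `path`, read in chunks."""
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == '__main__':
    actual = sha256_of('nltokeniz-0.0.1-py3-none-any.whl')
    print('OK' if actual == EXPECTED_SHA256 else 'MISMATCH')
```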