Natural language tokenizer for documents in Python
# nltokeniz.py
[![PyPI version](https://badge.fury.io/py/nltokeniz.svg)](https://badge.fury.io/py/nltokeniz) [![Python versions](https://img.shields.io/pypi/pyversions/nltokeniz.svg)](setup.py) [![Build Status](https://travis-ci.org/raviqqe/nltokeniz.py.svg?branch=master)](https://travis-ci.org/raviqqe/nltokeniz.py) [![License](https://img.shields.io/badge/license-unlicense-lightgray.svg)](https://unlicense.org)
Natural language tokenizer for English and Japanese documents in Python
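This page does not document nltokeniz's API, so the snippet below is only an illustration of what a natural-language tokenizer does, written with the standard library; the `tokenize` function here is a hypothetical stand-in, not nltokeniz's interface. Note that this regex approach only works for space-delimited languages such as English; Japanese tokenization requires morphological analysis (e.g. with MeCab), which is precisely why a dedicated library is useful.

```python
import re

# Hypothetical sketch, NOT nltokeniz's real API: split English text into
# word and punctuation tokens using a regular expression.
def tokenize(text):
    """Return a list of word and punctuation tokens."""
    # \w+ matches runs of word characters; [^\w\s] matches single
    # punctuation characters that are neither word chars nor whitespace.
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Tokenizers split text into words."))
```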
## License
[The Unlicense](https://unlicense.org)
## Download files

Source distribution:

- nltokeniz-0.0.3.tar.gz (3.1 kB)

Built distribution:

- nltokeniz-0.0.3-py3.6.egg (5.0 kB)
## Hashes for nltokeniz-0.0.3-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 3cde848685369714239d2570e6088b299fe25353866b8380c1e45af6c353286a
MD5 | efb944a4b6b1dd2eb0396ac01cb8a6d2
BLAKE2b-256 | ba645bbcaa3b91eefd2d0af30693b720027e28fcdd8404fc175c1b595ff0c0cc
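After downloading a distribution, you can check it against a published digest with Python's standard `hashlib` module. A minimal sketch; the file path and expected digest are placeholders to be replaced with your downloaded file and the SHA256 value above:

```python
import hashlib

def verify_sha256(path, expected_hex):
    """Return True if the file at `path` has the expected SHA256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large archives don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex
```

The same pattern works for the other algorithms by substituting `hashlib.md5()` or `hashlib.blake2b()`.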