# nltokeniz.py
[![PyPI version](https://badge.fury.io/py/nltokeniz.svg)](https://badge.fury.io/py/nltokeniz) [![Python versions](https://img.shields.io/pypi/pyversions/nltokeniz.svg)](setup.py) [![Build Status](https://travis-ci.org/raviqqe/nltokeniz.py.svg?branch=master)](https://travis-ci.org/raviqqe/nltokeniz.py) [![License](https://img.shields.io/badge/license-unlicense-lightgray.svg)](https://unlicense.org)
Natural language tokenizer for English and Japanese documents in Python
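The package is published on PyPI, so it can be installed with `pip install nltokeniz`. Its exact API isn't shown on this page; as a rough, library-independent sketch of what English word tokenization involves, a minimal regex-based tokenizer might look like the following (Japanese, by contrast, has no space-delimited words and typically needs a morphological analyzer such as MeCab):

```python
import re

# Illustrative sketch only, not nltokeniz's actual API: splits English
# text into word and punctuation tokens.
TOKEN_RE = re.compile(r"\w+|[^\w\s]")

def tokenize(text):
    """Return a list of word and punctuation tokens."""
    return TOKEN_RE.findall(text)

print(tokenize("Hello, world!"))  # ['Hello', ',', 'world', '!']
```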
## License
[The Unlicense](https://unlicense.org)
Hashes for nltokeniz-0.0.5-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 979d01186e704216be4837fe7ea43ee4d3c5fef36d4ac527e039f1e5529a6b57
MD5 | 9922294ac10102ef073264fa5a902270
BLAKE2b-256 | 8ff52e0d5c1c629adfc92af6472bfc7e49281c3ee98038465a1261d2b6e80fe8