# nltokeniz.py
[![PyPI version](https://badge.fury.io/py/nltokeniz.svg)](https://badge.fury.io/py/nltokeniz) [![Python versions](https://img.shields.io/pypi/pyversions/nltokeniz.svg)](setup.py) [![Build Status](https://travis-ci.org/raviqqe/nltokeniz.py.svg?branch=master)](https://travis-ci.org/raviqqe/nltokeniz.py) [![License](https://img.shields.io/badge/license-unlicense-lightgray.svg)](https://unlicense.org)
Natural language tokenizer for English and Japanese documents in Python
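The package's own API is not documented on this page. As an illustration of what tokenizing a document means, here is a minimal, self-contained sketch using a plain regex; it is not nltokeniz's implementation, and the `tokenize` name here is hypothetical:

```python
import re

def tokenize(text):
    # Split text into word tokens and individual punctuation marks.
    # This only works for space-delimited languages such as English;
    # Japanese has no spaces between words, so real tokenization (as
    # this package targets) requires a morphological analyzer instead.
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Hello, world!"))  # → ['Hello', ',', 'world', '!']
```

For Japanese input, a regex like the one above cannot find word boundaries; a dictionary-based morphological analyzer (e.g. MeCab) is the standard approach.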
## License
[The Unlicense](https://unlicense.org)
## Download files
- Source distribution: `nltokeniz-0.0.4.tar.gz` (3.1 kB)
- Built distribution: `nltokeniz-0.0.4-py3.6.egg` (5.0 kB)
Hashes for `nltokeniz-0.0.4-py3-none-any.whl`:

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 92f2dff9b03c924a8e9a2379842af53d28c07c739a5d668babf90852b783964b |
| MD5 | c6e6567a744151c035c4f9a30294cc8a |
| BLAKE2b-256 | b171a0578c3141c0e47696f3d334387810fdfbc06875567c8ca88acea7654259 |