Natural language tokenizer for documents in Python
Project description
# nltokeniz.py
[![PyPI version](https://badge.fury.io/py/nltokeniz.svg)](https://badge.fury.io/py/nltokeniz) [![Python versions](https://img.shields.io/pypi/pyversions/nltokeniz.svg)](setup.py) [![Build Status](https://travis-ci.org/raviqqe/nltokeniz.py.svg?branch=master)](https://travis-ci.org/raviqqe/nltokeniz.py) [![License](https://img.shields.io/badge/license-unlicense-lightgray.svg)](https://unlicense.org)
Natural language tokenizer for English and Japanese documents in Python
## License
[The Unlicense](https://unlicense.org)
Download files
Download the file for your platform.
Source Distribution

- nltokeniz-0.0.2.tar.gz (3.1 kB)
Built Distribution

- nltokeniz-0.0.2-py3.6.egg (4.9 kB)
Hashes for nltokeniz-0.0.2-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 923a850bf13acfa665132f9c2b4252604f7b3a10b267a5fe8df3b2739f2672c0
MD5 | 685005f756b6d1db8a597893bde25b39
BLAKE2b-256 | 924a026464f2c7635068e7cd05e5f80d5160518f84054bc18928e98f86533f61