A simple iterator for using a set of Chinese tokenizers
Project description
A collection of Chinese tokenizers
Simple wrappers around, and a collection of, several Chinese tokenizers
Free software: MIT license
Documentation: https://chinese-tokenzier-iterator.readthedocs.io.
Features
TODO
Usage
from tokenizers_collection.config import tokenizer_registry

for name, tokenizer in tokenizer_registry:
    print("Tokenizer: {}".format(name))
    tokenizer('input_file.txt', 'output_file.txt')
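The loop above assumes `tokenizer_registry` is an iterable of `(name, callable)` pairs, where each callable reads an input file and writes tokenized text to an output file. A minimal, self-contained sketch of that registry pattern is below; the `whitespace_tokenizer` stand-in and the registry contents are illustrative assumptions, not the package's actual implementations.

```python
def whitespace_tokenizer(input_file, output_file):
    """Illustrative stand-in tokenizer: split each line on whitespace
    and join the tokens back with single spaces."""
    with open(input_file, encoding="utf-8") as src, \
         open(output_file, "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(" ".join(line.split()) + "\n")

# A registry in this pattern is just an iterable of (name, tokenizer) pairs.
tokenizer_registry = [("whitespace", whitespace_tokenizer)]

if __name__ == "__main__":
    import os
    import tempfile

    # Write a small input file, then run every registered tokenizer over it.
    tmp = tempfile.mkdtemp()
    inp = os.path.join(tmp, "input_file.txt")
    out = os.path.join(tmp, "output_file.txt")
    with open(inp, "w", encoding="utf-8") as f:
        f.write("hello   world\n")
    for name, tokenizer in tokenizer_registry:
        print("Tokenizer: {}".format(name))
        tokenizer(inp, out)
```

Because every tokenizer shares the same `(input_file, output_file)` call shape, new tokenizers can be compared by simply appending another pair to the registry.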
Installation
pip install tokenizers_collection
Updating license files and downloading models
Some of the bundled tokenizers need an updated license file (e.g. pynlpir) or require model files to be downloaded (e.g. pyltp), so a one-time setup step must be run after installation. All of these operations are wrapped in a single function; just run a command like the following:
python -m tokenizers_collection.helper
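The helper's job, as described above, is to run each tokenizer's post-install setup step (license refresh, model download) in one pass. Here is a hedged sketch of that idea; the step names and functions are hypothetical placeholders, not the real `tokenizers_collection.helper` internals.

```python
def refresh_pynlpir_license():
    # Placeholder: the real helper would invoke pynlpir's license updater.
    return "license refreshed"

def download_pyltp_models():
    # Placeholder: the real helper would fetch pyltp's model files.
    return "models downloaded"

# Each entry pairs a tokenizer name with its one-time setup step.
SETUP_STEPS = [
    ("pynlpir", refresh_pynlpir_license),
    ("pyltp", download_pyltp_models),
]

def run_setup():
    """Run every registered setup step and collect its result by name."""
    results = {}
    for name, step in SETUP_STEPS:
        results[name] = step()
    return results

if __name__ == "__main__":
    # Mirrors `python -m tokenizers_collection.helper`: one command
    # performs all per-tokenizer setup.
    for name, result in run_setup().items():
        print("{}: {}".format(name, result))
```

Bundling the steps behind one entry point means users never have to remember which backend needs which manual fix.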
Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
History
0.1.0 (2018-08-28)
First release on PyPI.
Project details
Hashes for tokenizers_collection-0.1.1.tar.gz
Algorithm | Hash digest
---|---
SHA256 | b5fed4237c62691f7b1b378d11b20b948904418adc1612df6cecb78971854f38
MD5 | 00cdfdfbe8714af968d635029211a15e
BLAKE2b-256 | 76df08f9fb8fe1768ba0c6bbfc75e1859c7807de413859905c4ab8f8a24717b9
Hashes for tokenizers_collection-0.1.1-py2.py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 2e2ef7e7ad287275c56feef66fddd934c097f82a3ab0720c375a97389cb63c88
MD5 | f19dfd8c152d88955aac2e14fd784044
BLAKE2b-256 | 60e5f0c957658cac4b0562feaea5ebdb905d297a45a91ada0dcb0fd5cc9494a3