A simple iterator for using a set of Chinese tokenizers
Project description
A collection of Chinese tokenizers
Simple wrappers around, and a collection of, several Chinese tokenizers
Free software: MIT license
Documentation: https://chinese-tokenzier-iterator.readthedocs.io.
Features
TODO
Usage
from tokenizers_collection.config import tokenizer_registry

for name, tokenizer in tokenizer_registry:
    print("Tokenizer: {}".format(name))
    tokenizer('input_file.txt', 'output_file.txt')
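For illustration, the registry can be thought of as an iterable of (name, tokenizer) pairs, where each tokenizer is a callable taking an input path and an output path. The sketch below mimics that interface with a trivial whitespace tokenizer; the names here are hypothetical stand-ins, not the package's actual implementation:

```python
# Hypothetical sketch of the registry interface: an iterable of
# (name, tokenizer) pairs matching the usage loop above.
# The whitespace tokenizer is a stand-in, not part of the package.

def whitespace_tokenize(input_path, output_path):
    """Read input_path, split each line on whitespace, and write
    the space-joined tokens to output_path."""
    with open(input_path, encoding="utf-8") as src, \
         open(output_path, "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(" ".join(line.split()) + "\n")

# Create a small input file so the loop below is runnable as-is.
with open("input_file.txt", "w", encoding="utf-8") as f:
    f.write("hello   world\n")

# A registry is simply a sequence of (name, callable) pairs.
tokenizer_registry = [("whitespace", whitespace_tokenize)]

for name, tokenizer in tokenizer_registry:
    print("Tokenizer: {}".format(name))
    tokenizer("input_file.txt", "output_file.txt")
```

Real entries in `tokenizer_registry` would wrap actual segmenters, but the file-in/file-out calling convention is the same as in the usage snippet above.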
Installation
pip install tokenizers_collection
Updating license files and downloading models
Some of the bundled models require an updated license file () or a separate model-file download (), so a one-off step must be run after installation. All of these operations are wrapped in a single function; running a command like the following is enough:
python -m tokenizers_collection.helper
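As a rough sketch (not the package's actual code), a helper module run with `python -m` typically exposes a `main()` that performs each setup step in turn. The step names below are illustrative placeholders:

```python
# Hypothetical sketch of a post-install helper module; the step
# functions and their names are illustrative, not the package's API.

def update_license_files():
    # Placeholder: refresh the license files required by some models.
    print("license files updated")

def download_model_files():
    # Placeholder: fetch the model files needed by some tokenizers.
    print("model files downloaded")

def main():
    # Run every setup step in order, as `python -m ...helper` would.
    for step in (update_license_files, download_model_files):
        step()

if __name__ == "__main__":
    main()
```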
Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
History
0.1.0 (2018-08-28)
First release on PyPI.
Hashes for tokenizers_collection-0.1.0.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | 4064b30f879b9e2648e8328667dd67315e095a3dc459988c086d93c6ad30d7d2 |
| MD5 | 815ed18733ba0e273ebfe116510c6642 |
| BLAKE2b-256 | c6679d1bcf7e48c2abfe9f458bd3a79eb86a0029b9b4f9e13dafe5d4c51fee8d |
Hashes for tokenizers_collection-0.1.0-py2.py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 5635078fa7464de8e8f2422d7e2d5c04b5f221f8c46d183a73ccf58c5a65bf88 |
| MD5 | 5b052a1c97d9cb1cdabb1302afda23bd |
| BLAKE2b-256 | 582ebe637297cae232cb44ebeb1d6916586d563d7b0120c27cd2f597d6b7060d |