
A faster word segmentation tool, based on jieba_rs


FastJieba

fast_jieba for Python, created with PyO3 and built on the Rust crate jieba_rs.
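
The package is published on PyPI as fast_jieba; assuming a wheel is available for your platform (see Built Distributions below), it can be installed with pip:

pip install fast_jieba

Basic usage: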

import fast_jieba
from fast_jieba.analyse import extract_tags, textrank

# Segment a single sentence into tokens
words = fast_jieba.tokenize("小明就读北京清华大学物理系")
print(words)

# Extract keywords from the sentence
tags = extract_tags("小明就读北京清华大学物理系")
print(tags)

# Extract keywords with the TextRank algorithm
tags = textrank("小明就读北京清华大学物理系")
print(tags)

print("**************")
# The batch_* functions process a list of texts in a single call
texts = ["小明就读北京清华大学物理系" for _ in range(4)]

words = fast_jieba.batch_tokenize(texts)
print(words)

words = fast_jieba.batch_cut(texts)
print(words)

# Part-of-speech tagging for each text
words = fast_jieba.batch_posseg(texts)
print(words)
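
To get a rough sense of throughput on your own data, here is a minimal timing sketch; the corpus size is an illustrative assumption, and only the batch_cut call shown above is used:

import time

import fast_jieba

# Illustrative corpus: repeat the sample sentence; substitute your own texts
texts = ["小明就读北京清华大学物理系"] * 10000

start = time.perf_counter()
results = fast_jieba.batch_cut(texts)  # segment the whole batch in one call
elapsed = time.perf_counter() - start

print(f"segmented {len(texts)} texts in {elapsed:.3f} s")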


Download files

Download the file for your platform. If you're not sure which to choose, see the Python Packaging User Guide on installing packages.

Source Distribution

fast_jieba-0.4.0.tar.gz (4.3 kB), source

Built Distributions

fast_jieba-0.4.0-cp310-none-win_amd64.whl (5.0 MB), CPython 3.10, Windows x86-64

fast_jieba-0.4.0-cp39-none-win_amd64.whl (5.0 MB), CPython 3.9, Windows x86-64

fast_jieba-0.4.0-cp38-none-win_amd64.whl (5.0 MB), CPython 3.8, Windows x86-64
