General Chinese Word Segmenter provided by Nanjing University NLP Group
1 NJU Chinese Word Segmenter
1.1 Description
This package contains the Chinese word segmenter and POS tagger released by the natural language processing group of Nanjing University.
1.2 Required Dependencies
Python 3.6
NumPy
DyNet >= 2.0
1.3 Usage
1.3.1 Quick Start
Below is a quick snippet of code that demonstrates how to use the API this package provides.
from njusegtag import segmenter
# Load a pretrained segmentation model.
segmenter.load('path/to/model')
# Segment a Chinese sentence. In Python 3, string literals are already
# Unicode, so no explicit UTF-8 decoding is needed.
segmenter.seg('上海浦东开发与法制建设同步。')
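Since this package requires Python 3, sentences passed to segmenter.seg should be plain str literals: Python 3 has no unicode() constructor, and str values are already Unicode text. A minimal, package-independent illustration of the str/bytes distinction:

```python
# In Python 3, str holds Unicode text; bytes is a separate type.
sentence = '上海浦东开发与法制建设同步。'
assert isinstance(sentence, str)  # already Unicode, no conversion needed

# Encoding to UTF-8 produces bytes; decoding restores the original str.
encoded = sentence.encode('utf-8')
assert isinstance(encoded, bytes)
assert encoded.decode('utf-8') == sentence
```

Only decode bytes (e.g. text read from a file opened in binary mode) back to str before passing it to the segmenter.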