General Chinese Word Segmenter provided by Nanjing University NLP Group

Project description

Author:

Saltychtao

1 NJU Chinese Word Segmenter

1.1 Description

This package contains the Chinese word segmenter released by the natural language processing group of Nanjing University.

1.2 Required Dependencies

  • Python 3.6

  • Numpy

  • dyNET >= 2.0

1.3 Usage

1.3.1 Quick Start

Below is a quick snippet of code that demonstrates how to use the API this package provides.

from njuseg.model import Segmenter

# Load a pretrained segmentation model.
segmenter = Segmenter.load('path/to/model')

# Segment a Chinese sentence; note that the sentence should be UTF-8 encoded.
segmenter.seg('上海浦东开发与法制建设同步。')
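The return type of seg is not documented on this page. Assuming it yields a list of word tokens (a hypothetical illustration, not confirmed by the package), the output can be post-processed with ordinary Python string operations:

```python
# Hypothetical sketch: this token list is illustrative only and stands in
# for what segmenter.seg(...) is assumed to return.
tokens = ['上海', '浦东', '开发', '与', '法制', '建设', '同步', '。']

# Join the tokens with spaces, a common plain-text format for
# segmented Chinese corpora.
segmented = ' '.join(tokens)
print(segmented)
```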


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

njuseg-1.1-py2.py3-none-any.whl (7.8 kB)

Uploaded for Python 2 and Python 3.
