General Chinese Word Segmenter provided by Nanjing University NLP Group

Project description

Author: Saltychtao

1 NJU Chinese Word Segmenter

1.1 Description

This package contains the Chinese word segmenter released by the natural language processing group of Nanjing University.

1.2 Required Dependencies

  • Python 3.6
  • Numpy
  • dyNET >= 2.0

1.3 Usage

1.3.1 Quick Start

Below is a quick snippet of code that demonstrates how to use the API this package provides.

from njuseg.model import Segmenter

# Load a pretrained segmentation model
# (assuming Segmenter.load returns a ready-to-use Segmenter instance).
segmenter = Segmenter.load('path/to/model')

# Segment a Chinese sentence; note that the sentence should be UTF-8 encoded.
words = segmenter.seg('上海浦东开发与法制建设同步。')
print(words)
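Beyond single sentences, the same API can be wrapped for batch segmentation. The sketch below is a minimal example, assuming (as in the quick start) that Segmenter.load returns a ready-to-use model and that seg returns a list of word tokens for one sentence; the model and file paths are placeholders.

from njuseg.model import Segmenter

def segment_file(model_path, input_path, output_path):
    # Load the pretrained model once and reuse it for every sentence.
    segmenter = Segmenter.load(model_path)
    with open(input_path, encoding='utf-8') as fin, \
         open(output_path, 'w', encoding='utf-8') as fout:
        for line in fin:
            sentence = line.strip()
            if not sentence:
                continue
            # seg is assumed to return a list of word tokens for the sentence.
            words = segmenter.seg(sentence)
            fout.write(' '.join(words) + '\n')

if __name__ == '__main__':
    # Hypothetical paths; replace with a real model and input file.
    segment_file('path/to/model', 'sentences.txt', 'segmented.txt')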

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Filename: njuseg-1.1-py2.py3-none-any.whl (7.8 kB)
File type: Wheel
Python version: py2.py3
