
English word segmentation.

Project description

zh_segment is an Apache2-licensed module for English word segmentation, written in pure Python and based on a trillion-word corpus.

Based on code from the chapter “Natural Language Corpus Data” by Peter Norvig from the book “Beautiful Data” (Segaran and Hammerbacher, 2009).

Data files are derived from the Google Web Trillion Word Corpus, as described by Thorsten Brants and Alex Franz, and distributed by the Linguistic Data Consortium. This module contains only a subset of that data. The unigram data includes only the most common 333,000 words. Similarly, bigram data includes only the most common 250,000 phrases. Every word and phrase is lowercased with punctuation removed.

Features

  • Pure-Python

  • Fully documented

  • 100% Test Coverage

  • Includes unigram and bigram data

  • Command line interface for batch processing

  • Easy to hack (e.g. different scoring, new data, different language); see the sketch after this list

  • Developed on Python 2.7

  • Tested on CPython 2.6, 2.7, 3.2, 3.3, 3.4 and PyPy 2.5+, PyPy3 2.4+
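
For example, the unigram and bigram counts are exposed as plain dictionaries (see the Tutorial below), so one way to add new data is to adjust the counts before segmenting. A minimal sketch, assuming segment consults the module-level UNIGRAMS dictionary after load() has run; the word and count here are hypothetical:

>>> import zh_segment as ws
>>> ws.load()
>>> # Hypothetical: boost a domain-specific term so the segmenter
>>> # is more likely to split it out as its own word.
>>> ws.UNIGRAMS['jabberwock'] = ws.UNIGRAMS.get('jabberwock', 0.0) + 1e6
>>> 'jabberwock' in ws.UNIGRAMS
True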

[Image: https://github.com/wuhaifengdhu/zh_segment/blob/master/docs/_static/zh_segment.png?raw=true]

Quickstart

Installing zh_segment is simple with pip:

$ pip install zh_segment

You can access documentation in the interpreter with Python’s built-in help function:

>>> import zh_segment
>>> help(zh_segment)

Tutorial

In your own Python programs, you’ll mostly want to use segment to divide a phrase into a list of its parts:

>>> from zh_segment import segment
>>> segment('1077501; 1296599; 5000; 5000; 4975; 36 months; 10.64%; 162.87; B; B2;;10+ years;RENT')
['1077501', '1296599', '5000', '5000', '4975', '36', 'months', '10.64%', '162.87', 'B', 'B', '2', '10+', 'years', 'RENT']

zh_segment also provides a command-line interface for batch processing. This interface accepts two arguments: in-file and out-file. Lines from in-file are iteratively segmented, joined by a space, and written to out-file. Input and output default to stdin and stdout respectively.

$ echo thisisatest | python -m zh_segment
this is a test
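
To process files instead, pass the in-file and out-file arguments; the file names below are placeholders:

$ python -m zh_segment unsegmented.txt segmented.txt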

The maximum segmented word length is 24 characters; neither the unigram nor the bigram data contains words exceeding that length. The corpus also excludes punctuation, and all letters are lowercased. Before segmenting text, clean is called to transform the input to a canonical form:

>>> from zh_segment import clean
>>> clean('She said, "Python rocks!"')
'shesaidpythonrocks'
>>> segment('She said, "Python rocks!"')
['she', 'said', 'python', 'rocks']

Sometimes it's interesting to explore the unigram and bigram counts themselves. These are stored in Python dictionaries mapping words to counts.

>>> import zh_segment as ws
>>> ws.load()
>>> ws.UNIGRAMS['the']
23135851162.0
>>> ws.UNIGRAMS['gray']
21424658.0
>>> ws.UNIGRAMS['grey']
18276942.0

Above we see that the spelling gray is more common than the spelling grey.
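
Keep in mind that only the most common 333,000 words are included, so rarer words may be missing entirely. Because UNIGRAMS is a plain dict, use get to avoid a KeyError; the word below is 28 characters long, so it cannot appear in the data:

>>> ws.UNIGRAMS.get('antidisestablishmentarianism', 0.0)
0.0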

Bigrams are joined by a space:

>>> import heapq
>>> from pprint import pprint
>>> from operator import itemgetter
>>> pprint(heapq.nlargest(10, ws.BIGRAMS.items(), itemgetter(1)))
[('of the', 2766332391.0),
 ('in the', 1628795324.0),
 ('to the', 1139248999.0),
 ('on the', 800328815.0),
 ('for the', 692874802.0),
 ('and the', 629726893.0),
 ('to be', 505148997.0),
 ('is a', 476718990.0),
 ('with the', 461331348.0),
 ('from the', 428303219.0)]

Some bigrams begin with <s>. This marks the start of a sentence:

>>> ws.BIGRAMS['<s> where']
15419048.0
>>> ws.BIGRAMS['<s> what']
11779290.0

The unigram and bigram data are stored in the zh_segment_data directory, in the unigrams.txt and bigrams.txt files respectively.
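
You can also read these files directly. A minimal sketch, assuming each line holds a word (or space-joined bigram) and its count separated by a tab, and that the path below points at your installed copy of the data:

>>> # Adjust the path to wherever zh_segment_data lives in your installation.
>>> with open('zh_segment_data/unigrams.txt') as reader:
...     unigrams = {word: float(count)
...                 for word, count in (line.split('\t') for line in reader)}
>>> unigrams['the']
23135851162.0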

Reference and Indices

zh_segment License

Copyright 2017 Z&H

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

