Python Word Segmentation
========================
`WordSegment`_ is an Apache2-licensed module for English word
segmentation, written in pure Python and based on a trillion-word corpus.

It is based on code from the chapter "`Natural Language Corpus Data`_" by Peter
Norvig in the book "`Beautiful Data`_" (Segaran and Hammerbacher, 2009).

Data files are derived from the `Google Web Trillion Word Corpus`_, as described
by Thorsten Brants and Alex Franz, and `distributed`_ by the Linguistic Data
Consortium. This module contains only a subset of that data: the unigram data
includes only the most common 333,000 words, and the bigram data includes only
the most common 250,000 phrases. Every word and phrase is lowercased with
punctuation removed.
.. _`WordSegment`: http://www.grantjenks.com/docs/wordsegment/
.. _`Natural Language Corpus Data`: http://norvig.com/ngrams/
.. _`Beautiful Data`: http://oreilly.com/catalog/9780596157111/
.. _`Google Web Trillion Word Corpus`: http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html
.. _`distributed`: https://catalog.ldc.upenn.edu/LDC2006T13
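The approach from Norvig's chapter can be sketched in a few lines of pure
Python. The snippet below is an illustrative toy, not this module's actual
implementation: it scores candidate splits against a small stand-in unigram
table (the real data file holds 333,000 words, and the module also uses
bigrams) and memoizes the best split of each suffix::

    import math
    from functools import lru_cache

    # Toy unigram counts standing in for the real 333,000-word data file.
    UNIGRAMS = {'this': 5, 'is': 4, 'a': 6, 'test': 3, 'the': 9}
    TOTAL = sum(UNIGRAMS.values())

    def score(word):
        """Probability of a single word under the toy unigram model."""
        if word in UNIGRAMS:
            return UNIGRAMS[word] / TOTAL
        # Unknown words get a penalty that grows with their length.
        return 1.0 / (TOTAL * 10 ** len(word))

    def segment(text):
        """Return the highest-probability split of `text` into words."""
        @lru_cache(maxsize=None)
        def best(suffix):
            if not suffix:
                return (0.0, [])  # log-probability 0 for the empty suffix
            candidates = []
            for i in range(1, len(suffix) + 1):
                head, tail = suffix[:i], suffix[i:]
                tail_logp, tail_words = best(tail)
                candidates.append((math.log(score(head)) + tail_logp,
                                   [head] + tail_words))
            return max(candidates)
        return best(text)[1]

With the toy table above, ``segment('thisisatest')`` recovers
``['this', 'is', 'a', 'test']``, because that split maximizes the product of
the unigram probabilities.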
Features
--------

- Pure-Python
- Fully documented
- 100% test coverage
- Includes unigram and bigram data
- Command-line interface for batch processing
- Easy to hack (e.g. different scoring, new data, different language)
- Developed on Python 2.7
- Tested on CPython 2.6, 2.7, 3.2, 3.3, 3.4 and PyPy 2.5+, PyPy3 2.4+
Quickstart
----------

Installing WordSegment is simple with
`pip <http://www.pip-installer.org/>`_::

    $ pip install wordsegmentation
Tutorial
--------

In your own Python programs, you'll mostly want to use `segment` to divide a
phrase into a list of its parts::

    >>> from wordsegmentation import WordSegment
    >>> ws = WordSegment(use_google_corpus=True)
    >>> ws.segment('universityofwashington')
    ['university', 'of', 'washington']
    >>> ws.segment('thisisatest')
    ['this', 'is', 'a', 'test']
WordSegment License
-------------------

Copyright 2015 Weihan Jiang

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.