
Sentence segmentation and word tokenization tools

Project description

The segtok package provides two modules, segtok.segmenter and segtok.tokenizer. The segmenter provides functionality for splitting (Indo-European) text into sentences; the tokenizer splits (Indo-European) sentences into words and symbols (collectively called tokens). Both modules can also be used from the command line. While other Indo-European languages might work, segtok has only been designed with languages such as Spanish, English, and German in mind. For a more in-depth introduction to this tool, please read the article on my blog.

Install

To install this package, you need a recent official Python release. The easiest way to install segtok is with pip or any other package manager that works with PyPI:

pip install segtok

Then try the command-line tools on some plain-text files (e.g., this README) to see if segtok meets your needs:

segmenter README.rst | tokenizer

Usage

For details, please refer to the respective module documentation; this README only provides an overview of the available functionality.

Command-line tools

After installing the package, two command-line tools are available: segmenter and tokenizer. Each takes UTF-8 encoded plain text and transforms it into newline-separated sentences or tokens, respectively. The tokenizer assumes that each line contains (at most) one sentence, which is exactly the output format of the segmenter. To learn more about each tool, invoke it with its help option (-h or --help).
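For a quick check, plain text can also be piped in directly (a sketch that assumes segmenter, like tokenizer in the example above, reads standard input when no file argument is given):

echo "This is one sentence. Here is a second one." | segmenter

Each detected sentence should then appear on its own line, ready to be piped into tokenizer.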

segtok.segmenter

This module provides several split_... functions to segment texts into lists of sentences. In addition, to_unix_linebreaks normalizes linebreaks (including the Unicode linebreak) to newline control characters (\n). The function rewrite_line_separators can be used to move (rewrite) the newline separators in the input text so that they are placed at the sentence segmentation locations.
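As a minimal sketch of programmatic use (assuming split_single is one of the split_... functions exported here and that it returns an iterable of sentence strings):

from segtok.segmenter import split_single

# Split a short text into its sentences; each sentence is
# printed on its own line, mirroring the command-line tool.
text = "This is the first sentence. And here is a second one!"
for sentence in split_single(text):
    print(sentence)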

segtok.tokenizer

This module provides several ..._tokenizer functions to tokenize input sentences into words and symbols. In addition, it provides convenience functionality for English texts: two compiled patterns (IS_...) can be used to detect whether a word token contains a possessive-s marker (“Frank’s”) or is an apostrophe-based contraction (“didn’t”). Tokens that match these patterns can then be split using the split_possessive_markers and split_contractions functions, respectively.
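A minimal sketch combining both steps (assuming word_tokenizer is one of the ..._tokenizer functions and that split_contractions accepts a list of tokens):

from segtok.tokenizer import word_tokenizer, split_contractions

# Tokenize one sentence, then split apostrophe-based
# contractions such as "didn't" into separate tokens.
tokens = word_tokenizer("She didn't see Frank's car.")
print(split_contractions(tokens))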

History

  • 1.2.1 the length of sentences inside brackets is now parametrized

  • 1.2.0 wrote blog “documentation” and added chemical formula sub/super-script functionality

  • 1.1.2 fixed Unicode list of valid sentence terminals (was missing U+2048)

  • 1.1.1 fixed PyPI setup (missing MANIFEST.in for README.rst and “packages” in setup.py)

  • 1.1.0 added possessive-s marker and apostrophe contraction splitting of tokens

  • 1.0.0 initial release
