
Determine Unicode text segmentations

Project description

A Python package to determine Unicode text segmentations.

News

We released version 0.9.0 in November 2024, and it is the first release ever that passes all the Unicode breaking tests (congrats!). The next release will be 1.0 and will include some breaking API changes. Thank you.

Features

This package provides:

  • Functions to get Unicode Character Database (UCD) properties concerned with text segmentations.

  • Functions to determine segmentation boundaries of Unicode strings.

  • Classes that help implement Unicode-aware text wrapping on both console (monospace) and graphical (monospace / proportional) font environments.

Supported segmentations are:

code point

A code point is “any value in the Unicode codespace.” It is the basic unit for processing Unicode strings.

Historically, the unit of a Unicode string object in older versions of Python was build-dependent. Some builds used UTF-16 as the internal representation and treated each code point greater than U+FFFF as a “surrogate pair”, i.e. a pair of two special code points. The uniseg package used to provide utility functions for processing Unicode strings per proper code point on every platform.

Since Python 3.3, Unicode strings have been implemented with a “flexible string representation”, which gives direct access to full code points while remaining space-efficient [PEP 393]. So you no longer need to worry about multi-code-point handling: to process a Unicode string per code point, just iterate over it like for c in s:. The uniseg.codepoint module has therefore been deprecated and removed.
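Since the package no longer needs a code point abstraction, per-code-point processing is plain Python; a stdlib-only sketch:

```python
# Since Python 3.3 (PEP 393), str exposes full code points directly,
# so per-code-point processing is just iteration:
s = 'A\U0001F600'  # 'A' followed by U+1F600, which lies outside the BMP

# Each step yields one full code point, never a surrogate half.
print([f'U+{ord(c):04X}' for c in s])  # ['U+0041', 'U+1F600']
print(len(s))  # 2 -- length is counted in code points, not UTF-16 units
```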

grapheme cluster

A grapheme cluster approximately represents a “user-perceived character.” It may be made up of one or more Unicode code points; e.g. “G” + acute-accent is a single user-perceived character.
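A stdlib-only illustration of the “G” + acute-accent example above (no uniseg API assumed):

```python
import unicodedata

g = 'G\u0301'  # 'G' + U+0301 COMBINING ACUTE ACCENT
print(len(g))  # 2 -- two code points, but one user-perceived character

# This particular cluster happens to have a precomposed form (U+01F4),
# so NFC normalization collapses it; many clusters have no such form.
print(len(unicodedata.normalize('NFC', g)))  # 1
```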

word break

Word boundaries are a familiar segmentation in many common text operations, e.g. as the unit for text highlighting, cursor jumping, etc. Note that in some languages words cannot be determined by spaces or punctuation alone; languages like Thai or Japanese require dictionaries to determine appropriate word boundaries. Although this package provides only a simple, script-based word-breaking implementation that does not use any dictionaries, it also provides ways to customize its default behavior.
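As a rough stdlib-only illustration of why word boundaries are more than whitespace (the commented uniseg call shape is an assumption about the API, not verified):

```python
import re

text = "It's state-of-the-art."

# Whitespace splitting keeps punctuation glued to the words:
print(text.split())  # ["It's", 'state-of-the-art.']

# A simple letter-run regex over-splits contractions and hyphenations:
print(re.findall(r'[A-Za-z]+', text))
# ['It', 's', 'state', 'of', 'the', 'art']

# With this package (hypothetical call shape, based on uniseg.wordbreak):
#   from uniseg.wordbreak import words
#   list(words(text))  # would yield UAX #29 word segments
```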

sentence break

Sentence breaks are also common in text processing, but they are more contextual and less formal. The sentence-breaking implementation in this package (specified in Unicode Standard Annex #29) is likewise simple and formal, but it should still be useful in some situations.
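A minimal stdlib-only sketch of what a purely formal rule looks like, and where it goes wrong (this is not the package's actual implementation):

```python
import re

text = 'Dr. Smith arrived. He sat down! Then what?'

# A purely formal rule -- split after . ! or ? followed by whitespace --
# works for simple prose but misreads abbreviations like "Dr.":
parts = re.split(r'(?<=[.!?])\s+', text)
print(parts)
# ['Dr.', 'Smith arrived.', 'He sat down!', 'Then what?']
```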

line break

Implementing the line-breaking algorithm (UAX #14) is one of the key features of this package. The feature is important for general text presentation in both CLI and GUI applications.
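For contrast, the stdlib's textwrap breaks on whitespace only; the sketch below shows the limitation that Unicode-aware wrapping addresses (uniseg.wrap itself is not shown, to avoid guessing its API):

```python
import textwrap

text = 'The quick brown fox jumps over the lazy dog.'

# textwrap splits on whitespace and knows nothing of UAX #14
# line-breaking classes, so it cannot wrap space-less text such as
# Japanese; a Unicode-aware wrapper handles those cases.
for line in textwrap.wrap(text, width=20):
    print(line)
# The quick brown fox
# jumps over the lazy
# dog.
```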

Requirements

Python 3.9 or later.

Install

pip install uniseg

Changes

0.9.0 (2024-11-07)

  • Unicode 16.0.0.

  • Rule-based grapheme cluster segmentation is back.

  • This is the first release ever that passes all of the Unicode breaking tests!

0.8.1 (2024-08-13)

  • Fixed sentence_break('/') raising an exception. (Thanks to Nathaniel Mills)

0.8.0 (2024-02-08)

  • Unicode 15.0.0.

  • Regex-based grapheme cluster segmentation.

  • Dropped support for Python versions < 3.8.

0.7.2 (2022-09-20)

0.7.1 (2015-05-02)

  • CHANGE: wrap.Wrapper.wrap() now returns the number of lines.

  • Separate LICENSE from README.txt for the packaging-related reason in some environments.

0.7.0 (2015-02-27)

  • CHANGE: Stopped gathering all submodules' members in the top-level uniseg module.

  • CHANGE: Reworked the uniseg.wrap module and the sample scripts.

  • Maintained the uniseg.wrap module, and the sample scripts work again.

0.6.4 (2015-02-10)

  • Added the uniseg-dbpath console command, which just prints the path of ucd.sqlite3.

  • Include sample scripts under the package’s subdirectory.

0.6.3 (2015-01-25)

  • Python 3.4

  • Support modern setuptools, pip and wheel.

0.6.2 (2013-06-09)

  • Python 3.3

0.6.1 (2013-06-08)

  • Unicode 6.2.0
