
Classes and functions for working with Unicode data.



Python classes and functions for working with Unicode® data. This was initially built with Python 2 in mind but has also been tested with Python 3, PyPy and PyPy3.

Dependencies

This package has the following external dependencies:

  • six - for Python 2 to 3 compatibility

Case folding function

casefold(s) is a function for performing case folding per section 3.13 of the Unicode® Standard. See also the W3C page on case folding for more background on what case folding is.

Python 3.3 and newer already have str.casefold() built in. This is my attempt at building a case folding function for use with Python 2, and as such it was initially only tested with Python 2.7.14. It parses the CaseFolding.txt file included in the Unicode® Character Database into a dictionary, which is then used as a lookup table to create a copy of the input string transformed to facilitate caseless comparisons.

There is a bit more information about how I put this together on my blog.

By default, the casefold(s) function performs full case folding. To use simple case folding, pass the parameter fullcasefold=False (the default is fullcasefold=True). See the comments in CaseFolding.txt for an explanation of the difference between simple and full case folding.

By default, the casefold(s) function will not use the Turkic special case mappings for the dotted and dotless ‘i’. To use the Turkic mappings, pass the parameter useturkicmapping=True to the function (the default is useturkicmapping=False).

Example usage

Using Python 2:

>>> from unicodeutil import casefold
>>> s1 = u"weiß"
>>> s2 = u"WEISS"
>>> casefold(s1) == casefold(s2)
True
>>> s1 = u"LİMANI"
>>> s2 = u"limanı"
>>> casefold(s1) == casefold(s2)
False
>>> casefold(s1, useturkicmapping=True) == casefold(s2, useturkicmapping=True)
True
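
The fullcasefold parameter described above changes the result for characters such as ß (U+00DF), which only has a full case folding (to "ss") in CaseFolding.txt. A hedged example of simple case folding, assuming casefold applies the simple mappings from CaseFolding.txt:

>>> casefold(u"weiß", fullcasefold=False) == casefold(u"WEISS", fullcasefold=False)
False
>>> casefold(u"weiß", fullcasefold=False) == u"weiß"
True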

Splitting a Python 2 string into chars, preserving surrogate pairs

The preservesurrogates(s) function will split a string into a list of characters, preserving surrogate pairs.

Example usage

Using Python 2:

>>> from unicodeutil import preservesurrogates
>>> s = u"ABC\U0001e900DeF\U000118a0gHıİ"
>>> list(s)
[u'A', u'B', u'C', u'\ud83a', u'\udd00', u'D', u'e', u'F', u'\ud806', u'\udca0', u'g', u'H', u'\u0131', u'\u0130']
>>> for c in s:
...     print c
...
A
B
C
???
???
D
e
F
???
???
g
H
ı
İ
>>> list(preservesurrogates(s))
[u'A', u'B', u'C', u'\U0001e900', u'D', u'e', u'F', u'\U000118a0', u'g', u'H', u'\u0131', u'\u0130']
>>> for c in preservesurrogates(s):
...     print(c)
...
A
B
C
𞤀
D
e
F
𑢠
g
H
ı
İ
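
Conceptually, splitting while preserving surrogate pairs means treating a high surrogate followed by a low surrogate as a single unit. This matters mainly on Python 2 narrow builds, where characters outside the Basic Multilingual Plane are stored as surrogate pairs. The following is a minimal, hypothetical sketch of one way to do such a split with a regular expression; it is illustrative only and not necessarily how preservesurrogates is implemented:

import re

# Match a high surrogate followed by a low surrogate as a single unit;
# otherwise match any single code unit (DOTALL so '.' also matches newlines).
_SURROGATE_PAIR_RE = re.compile(u"[\ud800-\udbff][\udc00-\udfff]|.", re.DOTALL)


def split_preserving_surrogates(s):
    return _SURROGATE_PAIR_RE.findall(s)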

Using the latest Unicode® Character Database (UCD)

For the Python 2.7.x line, the unicodedata module in Python 2.7.17 still uses data from version 5.2.0 of the UCD. Python 3 releases prior to the 3.9.x line are also not on the latest version of the UCD; for example, the unicodedata module in Python 3.8.10 still uses data from version 12.1.0. As of this writing, the UCD is up to version 13.0.0.

The UnicodeCharacter namedtuple encapsulates the various properties associated with each Unicode® character, as explained in Unicode Standard Annex #44, UnicodeData.txt.

The UnicodeData class represents the contents of the UCD as parsed from the latest UnicodeData.txt found on the Unicode Consortium FTP site. Once an instance of the UnicodeData class has been created, it is possible to do dict-style lookups by Unicode scalar value, look up by Unicode character with the lookup_by_char(c) method, or look up by name with the lookup_by_name(name) and lookup_by_partial_name(partial_name) methods. Name lookups use the UAX44-LM2 loose matching rule. Iterating through all of the data is also possible via the items(), keys() and values() methods.

The UnicodeBlocks class encapsulates the block information associated with a Unicode character. Once an instance of the UnicodeBlocks class has been created, it is possible to get the block name associated with a particular Unicode character either by doing dict-style lookups using the Unicode scalar value or by using the lookup_by_char(c) method to look up by Unicode character. Iterating through all of the data is also possible via the items(), keys() and values() methods.

Example usage

Using Python 2:

>>> from unicodeutil import UnicodeBlocks, UnicodeData
>>> ucd = UnicodeData()
>>> ucd[0x00df]
UnicodeCharacter(code=u'U+00DF', name='LATIN SMALL LETTER SHARP S', category='Ll', combining=0, bidi='L', decomposition='', decimal='', digit='', numeric='', mirrored='N', unicode_1_name='', iso_comment='', uppercase='', lowercase='', titlecase='')
>>> ucd[0x0130].name
'LATIN CAPITAL LETTER I WITH DOT ABOVE'
>>> ucd.lookup_by_char(u"ᜊ")
UnicodeCharacter(code=u'U+170A', name=u'TAGALOG LETTER BA', category=u'Lo', combining=0, bidi=u'L', decomposition=u'', decimal=u'', digit=u'', numeric=u'', mirrored=u'N', unicode_1_name=u'', iso_comment=u'', uppercase=u'', lowercase=u'', titlecase=u'')
>>> ucd.lookup_by_name("latin small letter sharp_s")
UnicodeCharacter(code=u'U+00DF', name='LATIN SMALL LETTER SHARP S', category='Ll', combining=0, bidi='L', decomposition='', decimal='', digit='', numeric='', mirrored='N', unicode_1_name='', iso_comment='', uppercase='', lowercase='', titlecase='')
>>> blocks = UnicodeBlocks()
>>> blocks[0x00DF]
u'Latin-1 Supplement'
>>> blocks.lookup_by_char(u"ẞ")
u'Latin Extended Additional'
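
The iteration methods make it easy to run simple analyses over the whole UCD. The snippet below is a hedged sketch that assumes values() yields the UnicodeCharacter namedtuples described above; it tallies how many parsed characters fall into each General_Category:

from unicodeutil import UnicodeData

ucd = UnicodeData()

# Tally the number of characters parsed from UnicodeData.txt per category.
category_counts = {}
for char in ucd.values():
    category_counts[char.category] = category_counts.get(char.category, 0) + 1

for category in sorted(category_counts):
    print("{0}: {1}".format(category, category_counts[category]))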

Composing and decomposing Hangul Syllables

The function compose_hangul_syllable(jamo) takes a tuple or list of Unicode scalar values of Jamo and returns its equivalent precomposed Hangul syllable. The complementary function decompose_hangul_syllable(hangul_syllable, fully_decompose=False) takes the Unicode scalar value of a Hangul syllable and performs either a canonical decomposition (the default, fully_decompose=False) or a full canonical decomposition (fully_decompose=True). The return value is a tuple of Unicode scalar values corresponding to the Jamo that the Hangul syllable decomposes into. For example (taken from the Unicode Standard, ch. 03, section 3.12, Conjoining Jamo Behavior):

U+D4DB <-> <U+D4CC, U+11B6>  # Canonical Decomposition (default)
U+D4CC <-> <U+1111, U+1171>
U+D4DB <-> <U+1111, U+1171, U+11B6>  # Full Canonical Decomposition
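
For reference, the arithmetic behind these mappings is given in section 3.12 of the Unicode Standard and can be sketched as follows. This snippet is independent of unicodeutil (the compose helper below is purely illustrative) and only shows how the scalar values relate:

# Constants from the Unicode Standard, section 3.12 (Conjoining Jamo Behavior).
S_BASE, L_BASE, V_BASE, T_BASE = 0xAC00, 0x1100, 0x1161, 0x11A7
V_COUNT, T_COUNT = 21, 28


def compose(l, v, t=T_BASE):
    """Compose leading, vowel and (optional) trailing jamo scalar values."""
    l_index, v_index, t_index = l - L_BASE, v - V_BASE, t - T_BASE
    return S_BASE + (l_index * V_COUNT + v_index) * T_COUNT + t_index


print(hex(compose(0x1111, 0x1171, 0x11B6)))  # 0xd4db, i.e. U+D4DB HANGUL SYLLABLE PWILH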

Example usage:

The following sample code snippet:

import sys

from unicodeutil import UnicodeData, compose_hangul_syllable, \
                        decompose_hangul_syllable

ucd = None


def pprint_composed(jamo):
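    # Compose the given jamo scalar values into a Hangul syllable and print
    # the mapping, using codes and names looked up from the UnicodeData instance.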
    hangul = compose_hangul_syllable(jamo)
    hangul_data = ucd[hangul]
    print("<{0}> -> {1}".format(
        ", ".join([" ".join([jamo_data.code, jamo_data.name])
                   for jamo_data in [ucd[j] for j in jamo]]),
        " ".join([hangul_data.code, hangul_data.name])
    ))


def pprint_decomposed(hangul, decomposition):
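    # Print the mapping from a Hangul syllable to the jamo scalar values it
    # decomposes into, skipping any empty entries in the decomposition.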
    hangul_data = ucd[hangul]
    print("{0} -> <{1}>".format(
        " ".join([hangul_data.code, hangul_data.name]),
        ", ".join([" ".join([jamo_data.code, jamo_data.name])
                   for jamo_data in [ucd[jamo]
                                     for jamo in decomposition if jamo]])
    ))


def main():
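    # One hex argument: decompose the given Hangul syllable.
    # Two or three hex arguments: compose the given jamo into a Hangul syllable.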
    if len(sys.argv) not in {2, 3, 4}:
        print("Invalid number of arguments!")
        sys.exit(1)
    global ucd
    ucd = UnicodeData()
    if len(sys.argv) == 2:
        hangul = int(sys.argv[1], 16)
        print("Canonical Decomposition:")
        pprint_decomposed(hangul,
                          decompose_hangul_syllable(hangul,
                                                    fully_decompose=False))
        print("Full Canonical Decomposition:")
        pprint_decomposed(hangul,
                          decompose_hangul_syllable(hangul,
                                                    fully_decompose=True))
    elif len(sys.argv) in {3, 4}:
        print("Composition:")
        pprint_composed(tuple([int(arg, 16) for arg in sys.argv[1:]]))


if __name__ == "__main__":
    main()

will produce the following output (tested with Python 2, Python 3 and PyPy):

$ python pprint_hangul.py 0xD4DB
Canonical Decomposition:
U+D4DB HANGUL SYLLABLE PWILH -> <U+D4CC HANGUL SYLLABLE PWI, U+11B6 HANGUL JONGSEONG RIEUL-HIEUH>
Full Canonical Decomposition:
U+D4DB HANGUL SYLLABLE PWILH -> <U+1111 HANGUL CHOSEONG PHIEUPH, U+1171 HANGUL JUNGSEONG WI, U+11B6 HANGUL JONGSEONG RIEUL-HIEUH>
$ python3 pprint_hangul.py 0xD4CC 0x11B6
Composition:
<U+D4CC HANGUL SYLLABLE PWI, U+11B6 HANGUL JONGSEONG RIEUL-HIEUH> -> U+D4DB HANGUL SYLLABLE PWILH
$ pypy pprint_hangul.py 0x1111 0x1171 0x11b6
Composition:
<U+1111 HANGUL CHOSEONG PHIEUPH, U+1171 HANGUL JUNGSEONG WI, U+11B6 HANGUL JONGSEONG RIEUL-HIEUH> -> U+D4DB HANGUL SYLLABLE PWILH

License

This is released under an MIT license. See the LICENSE file in this repository for more information.

The included Blocks.txt, CaseFolding.txt, HangulSyllableType.txt, Jamo.txt and UnicodeData.txt files are part of the Unicode® Character Database that is published by Unicode, Inc. Please consult the Unicode® Terms of Use prior to use.
