
Taiwanese Hokkien Transliterator and Tokeniser

Project description




Taibun



It provides methods to customise the transliteration and to retrieve any necessary information about Taiwanese Hokkien pronunciation.
It also includes a word tokeniser for Taiwanese Hokkien.



Table of Contents
  1. Versions
  2. Install
  3. Usage
  4. Example
  5. Data
  6. Acknowledgements
  7. Licence

Versions

JavaScript Version

Install

Taibun can be installed from PyPI:

$ pip install taibun
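
Once installed, a quick check might look like this (a minimal sketch; the output follows the default Tailo system described below):

from taibun import Converter

c = Converter()
c.get('台灣')
>> Tâi-uân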

Usage

Converter

The Converter class transliterates Chinese characters into the chosen transliteration system, with parameters specified by the developer. It works with both Traditional and Simplified characters.

# Constructor
c = Converter(system, dialect, format, delimiter, sandhi, punctuation, convert_non_cjk)

# Transliterate Chinese characters
c.get(input)
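
For orientation, the sketch below spells out the documented defaults for the default Tailo system explicitly; it is equivalent to calling Converter() with no arguments (each default is explained in its parameter section below):

c = Converter(system='Tailo', dialect='south', format='mark', delimiter='-',
              sandhi='none', punctuation='format', convert_non_cjk=False)
c.get('台灣')
>> Tâi-uân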

System

system String - system of transliteration.

text | Tailo | POJ | Zhuyin | TLPA | Pingyim | Tongiong | IPA
台灣 | Tâi-uân | Tâi-oân | ㄉㄞˊ ㄨㄢˊ | Tai5 uan5 | Dáiwán | Tāi-uǎn | Tai²⁵ uan²⁵
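
For instance, the system name from the table header can be passed to the constructor (a sketch, assuming Converter has been imported from taibun; the expected outputs are taken from the table above):

c = Converter(system='POJ')
c.get('台灣')
>> Tâi-oân

c = Converter(system='IPA')
c.get('台灣')
>> Tai²⁵ uan²⁵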

Dialect

dialect String - preferred pronunciation.

text | south | north | singapore
五月節我啉咖啡 | Gōo-gue̍h-tseh guá lim ka-pi | Gōo-ge̍h-tsueh guá lim ka-pi | Gōo-ge̍h-tsueh uá lim ko-pi
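
For example, switching the dialect while keeping the default Tailo system (a sketch, assuming Converter has been imported from taibun; the expected output is the table value above):

c = Converter(dialect='north')
c.get('五月節我啉咖啡')
>> Gōo-ge̍h-tsueh guá lim ka-pi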

Format

format String - format in which tones will be represented in the converted sentence.

  • mark (default) - uses diacritics for each syllable. Not available for TLPA
  • number - adds a number representing the tone at the end of the syllable
  • strip - removes any tone marking

text | mark | number | strip
台灣 | Tâi-uân | Tai5-uan5 | Tai-uan
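
For instance (a sketch, assuming the default Tailo system and that Converter has been imported from taibun; the expected outputs are taken from the table above):

c = Converter(format='number')
c.get('台灣')
>> Tai5-uan5

c = Converter(format='strip')
c.get('台灣')
>> Tai-uan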

Delimiter

delimiter String - sets the delimiter character that will be placed in between syllables of a word.

Default value depends on the chosen system:

  • '-' - for Tailo, POJ, Tongiong
  • '' - for Pingyim
  • ' ' - for Zhuyin, TLPA, IPA

text | '-' | '' | ' '
台灣 | Tâi-uân | Tâiuân | Tâi uân
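
For instance, overriding the Tailo default of '-' (a sketch, assuming Converter has been imported from taibun; the expected output is the table value above):

c = Converter(delimiter=' ')
c.get('台灣')
>> Tâi uân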

Sandhi

sandhi String - applies the sandhi rules of Taiwanese Hokkien.

Since it's difficult to encode all sandhi rules, Taibun provides multiple modes for sandhi conversion to allow for customised sandhi handling.

  • none - doesn't perform any tone sandhi
  • auto - closest approximation to full correct tone sandhi of Taiwanese, with proper sandhi of pronouns, suffixes, and words with 仔
  • exc_last - changes tone for every syllable except for the last one
  • incl_last - changes tone for every syllable including the last one

Default value depends on the chosen system:

  • auto - for Tongiong
  • none - for Tailo, POJ, Zhuyin, TLPA, Pingyim, IPA

text | none | auto | exc_last | incl_last
這是你的茶桌仔無 | Tse sī lí ê tê-toh-á bô | Tse sì li ē tē-to-á bô | Tsē sì li ē tē-tó-a bô | Tsē sì li ē tē-tó-a bō

Sandhi rules also change depending on the dialect chosen.

text | no sandhi | south | north / singapore
台灣 | Tâi-uân | Tāi-uân | Tài-uân
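
As a sketch pairing sandhi with dialect (assuming Converter has been imported from taibun, and assuming the table values above correspond to the exc_last mode, which changes every syllable except the last):

c = Converter(sandhi='exc_last')  # default dialect is 'south'
c.get('台灣')
>> Tāi-uân

c = Converter(sandhi='exc_last', dialect='north')
c.get('台灣')
>> Tài-uân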

Punctuation

punctuation String

  • format (default) - converts Chinese-style punctuation to Latin-style punctuation and capitalises words at the beginning of each sentence
  • none - preserves Chinese-style punctuation and doesn't capitalise words at the beginning of new sentences

text | format | none
這是臺南,簡稱「南」(白話字:Tâi-lâm;注音符號:ㄊㄞˊ ㄋㄢˊ,國語:Táinán)。 | Tse sī Tâi-lâm, kán-tshing "lâm" (Pe̍h-uē-jī: Tâi-lâm; tsù-im hû-hō: ㄊㄞˊ ㄋㄢˊ, kok-gí: Táinán). | tse sī Tâi-lâm,kán-tshing「lâm」(Pe̍h-uē-jī:Tâi-lâm;tsù-im hû-hō:ㄊㄞˊ ㄋㄢˊ,kok-gí:Táinán)。

Convert non-CJK

convert_non_cjk Boolean - defines whether or not to convert non-Chinese words. Can be used to convert Tailo to another romanisation system.

  • True - convert non-Chinese character words
  • False (default) - convert only Chinese character words

text | False | True
我食pháng | ㆣㄨㄚˋ ㄐㄧㄚㆷ˙ pháng | ㆣㄨㄚˋ ㄐㄧㄚㆷ˙ ㄆㄤˋ

Tokeniser

The Tokeniser class performs NLTK wordpunct_tokenize-like tokenisation of a Taiwanese Hokkien sentence.

# Constructor
t = Tokeniser(keep_original)

# Tokenise Taiwanese Hokkien sentence
t.tokenise(input)
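
The tokeniser can also be combined with the converter, for instance to transliterate a sentence token by token; the snippet below is only an illustrative sketch of that pattern, not a documented workflow:

from taibun import Converter, Tokeniser

c = Converter()
t = Tokeniser()

# Tokenise first, then transliterate each token separately
[c.get(token) for token in t.tokenise('先生講,學生恬恬聽。')]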

Keep original

keep_original Boolean - defines whether the original characters of the input are retained.

  • True (default) - preserve original characters
  • False - replace original characters with characters defined in the dataset

text | True | False
臺灣火鸡肉饭 | ['臺灣', '火鸡肉饭'] | ['台灣', '火雞肉飯']

Other Functions

Handy functions for NLP tasks in Taiwanese Hokkien.

The to_traditional function converts input to the Traditional Chinese characters used in the dataset, accounting for different variants of Traditional Chinese characters.

The to_simplified function converts input to Simplified Chinese characters.

The is_cjk function checks whether the input string consists entirely of Chinese characters.

to_traditional(input)

to_simplified(input)

is_cjk(input)

Example

# Converter
from taibun import Converter

## System
c = Converter() # Tailo system default
c.get('先生講,學生恬恬聽。')
>> Sian-sinn kóng, ha̍k-sing tiām-tiām thiann.

c = Converter(system='Zhuyin')
c.get('先生講,學生恬恬聽。')
>> ㄒㄧㄢ ㄒㆪ ㄍㆲˋ, ㄏㄚㆶ˙ ㄒㄧㄥ ㄉㄧㆰ˫ ㄉㄧㆰ˫ ㄊㄧㆩ.

## Dialect
c = Converter() # south dialect default
c.get("我欲用箸食魚")
>> Guá beh īng  tsia̍h 

c = Converter(dialect='north')
c.get("我欲用箸食魚")
>> Guá bueh īng  tsia̍h 

c = Converter(dialect='singapore')
c.get("我欲用箸食魚")
>>  bueh ēng  tsia̍h 

## Format
c = Converter() # for Tailo, mark by default
c.get("生日快樂")
>> Senn-ji̍t khuài-lo̍k

c = Converter(format='number')
c.get("生日快樂")
>> Senn1-jit8 khuai3-lok8

c = Converter(format='strip')
c.get("生日快樂")
>> Senn-jit khuai-lok

## Delimiter
c = Converter(delimiter='')
c.get("先生講,學生恬恬聽。")
>> Siansinn kóng, ha̍ksing tiāmtiām thiann.

c = Converter(system='Pingyim', delimiter='-')
c.get("先生講,學生恬恬聽。")
>> Siān-snī gǒng, hág-sīng diâm-diâm tinā.

## Sandhi
c = Converter() # for Tailo, sandhi none by default
c.get("這是你的茶桌仔無")
>> Tse sī lí ê tê-toh-á bô

c = Converter(sandhi='auto')
c.get("這是你的茶桌仔無")
>> Tse sì li ē tē-to-á bô

c = Converter(sandhi='exc_last')
c.get("這是你的茶桌仔無")
>> Tsē sì li ē tē-tó-a bô

c = Converter(sandhi='incl_last')
c.get("這是你的茶桌仔無")
>> Tsē sì li ē tē-tó-a bō

## Punctuation
c = Converter() # format punctuation default
c.get("太空朋友,恁好!恁食飽未?")
>> Thài-khong pîng-iú, lín-hó! Lín tsia̍h-pá buē?

c = Converter(punctuation='none')
c.get("太空朋友,恁好!恁食飽未?")
>> thài-khong pîng-iú,lín-hó!lín tsia̍h-pá buē?

## Convert non-CJK
c = Converter(system='Zhuyin') # False convert_non_cjk default
c.get("我食pháng")
>> ㆣㄨㄚˋ ㄐㄧㄚㆷ˙ pháng

c = Converter(system='Zhuyin', convert_non_cjk=True)
c.get("我食pháng")
>> ㆣㄨㄚˋ ㄐㄧㄚㆷ˙ ㄆㄤˋ


# Tokeniser
from taibun import Tokeniser

t = Tokeniser()
t.tokenise("太空朋友,恁好!恁食飽未?")
>> ['太空', '朋友', ',', '恁好', '!', '恁', '食飽', '未', '?']

## Keep Original
t = Tokeniser() # True keep_original default
t.tokenise("爲啥物臺灣遮爾好?")
>> ['爲啥物', '臺灣', '遮爾', '好', '?']

t.tokenise("为啥物台湾遮尔好?")
>> ['为啥物', '台湾', '遮尔', '好', '?']

t = Tokeniser(False)
t.tokenise("爲啥物臺灣遮爾好?")
>> ['為啥物', '台灣', '遮爾', '好', '?']

t.tokenise("为啥物台湾遮尔好?")
>> ['為啥物', '台灣', '遮爾', '好', '?']


# Other Functions
from taibun import to_traditional, to_simplified, is_cjk

## to_traditional
to_traditional("我听无台语")
>> 我聽無台語

to_traditional("我爱这个个人台面")
>> 我愛這个個人檯面

to_traditional("爲啥物")
>> 為啥物

## to_simplified
to_simplified("我聽無台語")
>> 我听无台语

## is_cjk
is_cjk('我食麭')
>> True

is_cjk('我食pháng')
>> False

Data

Acknowledgements

  • Samuel Jen (GitHub · LinkedIn) - Taiwanese and Mandarin translation

Licence

Because Taibun is MIT-licensed, any developer can essentially do whatever they want with it as long as they include the original copyright and licence notice in any copies of the source code. Note that the data used by the package is distributed under a different licence.

The data is licensed under CC BY-SA 4.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

taibun-1.1.7.tar.gz (611.6 kB)

Uploaded Source

Built Distribution

taibun-1.1.7-py3-none-any.whl (563.1 kB)

Uploaded Python 3

File details

Details for the file taibun-1.1.7.tar.gz.

File metadata

  • Download URL: taibun-1.1.7.tar.gz
  • Upload date:
  • Size: 611.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.1

File hashes

Hashes for taibun-1.1.7.tar.gz
Algorithm Hash digest
SHA256 136aa60861c1f50ab286a78445ef40eaed7460098daabcf10026074c6ea6e870
MD5 4a7aad5266ebe374980e86e6711e9937
BLAKE2b-256 9352e313c931ee7226b5f20dd4577be470808f7e4415b95414246e5167fb6a05


File details

Details for the file taibun-1.1.7-py3-none-any.whl.

File metadata

  • Download URL: taibun-1.1.7-py3-none-any.whl
  • Upload date:
  • Size: 563.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.1

File hashes

Hashes for taibun-1.1.7-py3-none-any.whl
Algorithm Hash digest
SHA256 5b38335ea9550c3976d728c9d5e29d63eaa7326429d30eb737c1f681c2ab6108
MD5 e0f68e5cbf92d74e264c143da84ac1a6
BLAKE2b-256 7ab57164a1218cc9ff600e668f1fc8eedd3dba4287c0939a9e629d6edb82aa2d

