
Botok – Python Tibetan Tokenizer


Contents: Description · Install · Example · Commented Example · Docs · Owners · Acknowledgements · Maintenance · License


Description

Botok tokenizes Tibetan text into words and can annotate each token with optional attributes such as its lemma, POS tag and cleaned form.

Install

Requires Python 3.

pip3 install botok
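
A quick sanity check that the install succeeded (this only verifies that the package imports):

python3 -c "import botok"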

Example

from pathlib import Path

from botok import WordTokenizer
from botok.config import Config


def get_tokens(wt, text):
    # split_affixes=False keeps affixed particles attached to their host word
    tokens = wt.tokenize(text, split_affixes=False)
    return tokens


if __name__ == "__main__":
    # point botok at the "general" dialect pack, stored under base_path
    config = Config(dialect_name="general", base_path=Path.home())
    wt = WordTokenizer(config=config)
    text = "བཀྲ་ཤིས་བདེ་ལེགས་ཞུས་རྒྱུ་ཡིན་ སེམས་པ་སྐྱིད་པོ་འདུག།"
    tokens = get_tokens(wt, text)
    for token in tokens:
        print(token)
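
Each element of tokens is a botok token object carrying the optional attributes mentioned in the Description. A minimal sketch of reading them, assuming the attribute names text, pos and lemma used by recent botok releases:

for token in tokens:
    # surface form, POS tag and lemma; pos and lemma may be empty
    # for words the trie does not know
    print(token.text, token.pos, token.lemma)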

Demo video: https://user-images.githubusercontent.com/24893704/148767959-31cc0a69-4c83-4841-8a1d-028d376e4677.mp4

Commented Example

>>> from botok import Text

>>> # input is a multi-line input string
>>> in_str = """ལེ གས། བཀྲ་ཤིས་མཐའི་ ༆ ཤི་བཀྲ་ཤིས་  tr 
... བདེ་་ལེ གས། བཀྲ་ཤིས་བདེ་ལེགས་༡༢༣ཀཀ། 
... མཐའི་རྒྱ་མཚོར་གནས་པའི་ཉས་ཆུ་འཐུང་།། །།མཁའ།"""


### STEP 1: instantiating Text

>>> # A. on a string
>>> t = Text(in_str)

>>> # B. on a file
... # note: all the following operations can be applied to files in this way.
>>> from pathlib import Path
>>> in_file = Path.cwd() / 'test.txt'

>>> # file content:
>>> in_file.read_text()
'བཀྲ་ཤིས་བདེ་ལེགས།།\n'

>>> t = Text(in_file)
>>> t.tokenize_chunks_plaintext

>>> # checking that an output file has been written:
... # a BOM is added by default so that Notepad on Windows doesn't scramble the line breaks
>>> out_file = Path.cwd() / 'test_pybo.txt'
>>> out_file.read_text()
'\ufeffབཀྲ་ ཤིས་ བདེ་ ལེགས །།'

### STEP 2: properties perform actions on the input string
### note: original spaces are replaced by underscores in the outputs.

>>> # OUTPUT1: chunks are meaningful groups of chars from the input string.
... # see how punctuation, numerals, non-Tibetan text and syllables are all neatly grouped.
>>> t.tokenize_chunks_plaintext
'ལེ_གས །_ བཀྲ་ ཤིས་ མཐའི་ _༆_ ཤི་ བཀྲ་ ཤིས་__ tr_\n བདེ་་ ལེ_གས །_ བཀྲ་ ཤིས་ བདེ་ ལེགས་ ༡༢༣ ཀཀ །_\n མཐའི་ རྒྱ་ མཚོར་ གནས་ པའི་ ཉས་ ཆུ་ འཐུང་ །།_།། མཁའ །'

>>> # OUTPUT2: could as well be achieved with in_str.split(' ')
>>> t.tokenize_on_spaces
'ལེ གས། བཀྲ་ཤིས་མཐའི་ ༆ ཤི་བཀྲ་ཤིས་ tr བདེ་་ལེ གས། བཀྲ་ཤིས་བདེ་ལེགས་༡༢༣ཀཀ། མཐའི་རྒྱ་མཚོར་གནས་པའི་ཉས་ཆུ་འཐུང་།། །།མཁའ།'

>>> # OUTPUT3: segments into words.
... # see how བདེ་་ལེ_གས was still recognized as a single word, even with the space and the double tsek.
... # affixed particles are separated from their host word: མཐ འི་ རྒྱ་མཚོ ར་ གནས་པ འི་ ཉ ས་
>>> t.tokenize_words_raw_text
Loading Trie... (2s.)
'ལེ_གས །_ བཀྲ་ཤིས་ མཐ འི་ _༆_ ཤི་ བཀྲ་ཤིས་_ tr_ བདེ་་ལེ_གས །_ བཀྲ་ཤིས་ བདེ་ལེགས་ ༡༢༣ ཀཀ །_ མཐ འི་ རྒྱ་མཚོ ར་ གནས་པ འི་ ཉ ས་ ཆུ་ འཐུང་ །།_།། མཁའ །'
>>> t.tokenize_words_raw_lines
'ལེ_གས །_ བཀྲ་ཤིས་ མཐ འི་ _༆_ ཤི་ བཀྲ་ཤིས་__ tr_\n བདེ་་ལེ_གས །_ བཀྲ་ཤིས་ བདེ་ལེགས་ ༡༢༣ ཀཀ །_\n མཐ འི་ རྒྱ་མཚོ ར་ གནས་པ འི་ ཉ ས་ ཆུ་ འཐུང་ །།_།། མཁའ །'

>>> # OUTPUT4: segments into words, then counts the occurrences of each word found.
... # by default, it counts in_str's substrings in the output, which is why we get both བདེ་་ལེ གས	1 and བདེ་ལེགས་	1
... # this behaviour can easily be modified to count the words botok recognized instead (see advanced usage)
>>> print(t.list_word_types)
འི་	3
། 	2
བཀྲ་ཤིས་	2
མཐ	2
ལེ གས	1
 ༆ 	1
ཤི་	1
བཀྲ་ཤིས་  	1
tr \n	1
བདེ་་ལེ གས	1
བདེ་ལེགས་	1
༡༢༣	1
ཀཀ	1
། \n	1
རྒྱ་མཚོ	1
ར་	1
གནས་པ	1
ཉ	1
ས་	1
ཆུ་	1
འཐུང་	1
།། །།	1
མཁའ	1
།	1

Custom dialect pack

To use a custom dialect pack:

  • Prepare your dialect pack with the same folder structure as the general dialect pack.
  • Instantiate a Config object, passing it your dialect pack's name and path.
  • Instantiate your tokenizer with that Config object.
  • The tokenizer will then use your custom dialect pack, and on later runs it will rebuild the custom trie from the pickled trie file (see the sketch below).
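
A minimal sketch of that flow, reusing the Config(dialect_name=..., base_path=...) signature from the example above; the pack name "custom" and the ~/dialect_packs directory are placeholders for your own pack:

from pathlib import Path

from botok import WordTokenizer
from botok.config import Config

# both the dialect name and the base path below are placeholders
config = Config(dialect_name="custom", base_path=Path.home() / "dialect_packs")
wt = WordTokenizer(config=config)
tokens = wt.tokenize("བཀྲ་ཤིས་བདེ་ལེགས།", split_affixes=False)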

Docs

No documentation is available yet.

Owners

Acknowledgements

botok is an open source library for Tibetan NLP.

We are always open to cooperation on new features, tool integrations and testing solutions.

Many thanks to the companies and organizations that have supported botok's development.

Maintenance

Build the source dist:

rm -rf dist/
python3 setup.py clean sdist

and upload it to PyPI with twine (version >= 1.11.0):

twine upload dist/*

License

The Python code is Copyright (C) 2019 Esukhia, provided under the Apache License 2.0.

Contributors:

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

botok-0.8.12.tar.gz (68.6 kB)

Uploaded Source

Built Distribution

botok-0.8.12-py3-none-any.whl (79.7 kB)

Uploaded Python 3

File details

Details for the file botok-0.8.12.tar.gz.

File metadata

  • Download URL: botok-0.8.12.tar.gz
  • Upload date:
  • Size: 68.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.9.6 readme-renderer/37.3 requests/2.30.0 requests-toolbelt/1.0.0 urllib3/2.0.2 tqdm/4.65.0 importlib-metadata/6.6.0 keyring/23.13.1 rfc3986/2.0.0 colorama/0.4.6 CPython/3.10.11

File hashes

Hashes for botok-0.8.12.tar.gz
Algorithm Hash digest
SHA256 98bf755522725ead547d3de66af692c472570bf6ab2d335b19255bcb85213949
MD5 26f449844f49b490dfe87825316d3053
BLAKE2b-256 facd224fb390540ce02357e8d9274f9a55731b4b87c4850b8653884f7a24841a

See more details on using hashes here.

File details

Details for the file botok-0.8.12-py3-none-any.whl.

File metadata

  • Download URL: botok-0.8.12-py3-none-any.whl
  • Upload date:
  • Size: 79.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.9.6 readme-renderer/37.3 requests/2.30.0 requests-toolbelt/1.0.0 urllib3/2.0.2 tqdm/4.65.0 importlib-metadata/6.6.0 keyring/23.13.1 rfc3986/2.0.0 colorama/0.4.6 CPython/3.10.11

File hashes

Hashes for botok-0.8.12-py3-none-any.whl
Algorithm Hash digest
SHA256 ad7f7d8350f8c0a18430ab0a7465ac0f9e4aa566cb5acc2a0fb738dfa45c48f5
MD5 510f9ceb9728eb8a5b15e6f78ac9a3a3
BLAKE2b-256 859c93f7d5670628eb22c326d1b57377ddceb050f9ed6ee918059d2dccec057d

See more details on using hashes here.
