
A comprehensive tokenization library for the Myanmar language

Project description

myTokenize

myTokenize is a Python library that tokenizes Myanmar text into syllables, words, phrases, and sentences. It supports multiple tokenization techniques, spanning rule-based, statistical, and neural network-based approaches.

Features

  • Syllable Tokenization: Break text into syllables using regex rules.
  • BPE and Unigram Tokenization: Subword segmentation using pretrained SentencePiece models.
  • Word Tokenization: Segment text into words using:
    • myWord: Dictionary-based tokenization.
    • CRF: Conditional Random Fields-based tokenization.
    • BiLSTM: Neural network-based tokenization.
  • Phrase Tokenization: Identify phrases in text using normalized pointwise mutual information (NPMI).
  • Sentence Tokenization: Use a BiLSTM model to segment text into sentences.

Installation

  1. Clone the repository:

    git clone https://github.com/ThuraAung1601/myTokenize.git
    cd myTokenize
    
  2. Install dependencies:

    pip install -r requirements.txt
    
  3. Install the library:

    pip install .
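
Alternatively, since the package is published on PyPI (see Download files below), installing it directly with pip should also work:

    pip install myTokenize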
    

Usage

Syllable Tokenizer

from myTokenize import SyllableTokenizer

tokenizer = SyllableTokenizer()
syllables = tokenizer.tokenize("မြန်မာနိုင်ငံ။")
print(syllables)  # ['မြန်', 'မာ', 'နိုင်', 'ငံ', '။']
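
Since the syllable tokenizer only segments the text (it does not rewrite or normalize it), the syllables should concatenate back to the original input; a quick sanity check under that assumption:

# Rejoining the syllables should reproduce the original string exactly.
assert "".join(syllables) == "မြန်မာနိုင်ငံ။"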

BPE Tokenizer

from myTokenize import BPETokenizer

tokenizer = BPETokenizer()
tokens = tokenizer.tokenize("ရွေးကောက်ပွဲမှာနိုင်ထားတဲ့ဒေါ်နယ်ထရမ့်")
print(tokens)  # ['▁ရွေးကောက်ပွဲ', 'မှာ', 'နိုင်', 'ထား', 'တဲ့', 'ဒေါ်', 'နယ်', 'ထ', 'ရ', 'မ့်']
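
The feature list also mentions unigram tokenization, and the package bundles a unigram SentencePiece model (see Folder Structure below). As a sketch, the model can be loaded directly with the sentencepiece library; the model path here is an assumption based on the package layout:

import sentencepiece as spm

# Assumed location of the bundled unigram model; adjust the path to
# where the package is installed on your system.
sp = spm.SentencePieceProcessor(model_file="myTokenize/SentencePiece/unigram_sentencepiece_model.model")
print(sp.encode("မြန်မာနိုင်ငံ။", out_type=str))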

Word Tokenizer

from myTokenize import WordTokenizer

tokenizer = WordTokenizer(engine="CRF")  # Use "myWord", "CRF", or "LSTM"
words = tokenizer.tokenize("မြန်မာနိုင်ငံ။")
print(words)  # ['မြန်မာ', 'နိုင်ငံ', '။']
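
All three engines expose the same tokenize() interface, so their segmentations are easy to compare side by side (outputs may differ from engine to engine):

from myTokenize import WordTokenizer

text = "မြန်မာနိုင်ငံ။"
for engine in ("myWord", "CRF", "LSTM"):
    # Each engine builds its own tokenizer; segmentation quality varies.
    print(engine, WordTokenizer(engine=engine).tokenize(text))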

Phrase Tokenizer

from myTokenize import PhraseTokenizer

tokenizer = PhraseTokenizer()
phrases = tokenizer.tokenize("ညာဘက်ကိုယူပြီးတော့တည့်တည့်သွားပါ")
print(phrases)  # ['ညာဘက်_ကို', 'ယူ', 'ပြီး_တော့', 'တည့်တည့်', 'သွား_ပါ']
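
Phrase candidates are scored with normalized pointwise mutual information (NPMI), which rescales PMI into [-1, 1]; presumably pairs scoring above a threshold are joined with "_" in the output above. A minimal illustrative sketch of the score (not the library's internal code):

import math

def npmi(p_xy, p_x, p_y):
    # PMI normalized by -log p(x, y): 1.0 means the words always co-occur,
    # 0 means they are independent, -1.0 means they never co-occur.
    return math.log(p_xy / (p_x * p_y)) / -math.log(p_xy)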

Sentence Tokenizer

from myTokenize import SentenceTokenizer

tokenizer = SentenceTokenizer()
sentences = tokenizer.tokenize("ညာဘက်ကိုယူပြီးတော့တည့်တည့်သွားပါခင်ဗျားငါးမိနစ်လောက်ကြာလိမ့်မယ်")
print(sentences)  # [['ညာ', 'ဘက်', 'ကို', 'ယူ', 'ပြီး', 'တော့', 'တည့်တည့်', 'သွား', 'ပါ'], ['ခင်ဗျား', 'ငါး', 'မိနစ်', 'လောက်', 'ကြာ', 'လိမ့်', 'မယ်']]
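
The sentence tokenizer returns one list of word tokens per detected sentence, so each sentence can be reassembled by joining its tokens:

for sentence in sentences:
    # Print each detected sentence as space-separated word tokens.
    print(" ".join(sentence))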

Folder Structure

./myTokenize/
├── CRFTokenizer
│   └── wordseg_c2_crf.crfsuite
├── SentencePiece
│   ├── bpe_sentencepiece_model.model
│   ├── bpe_sentencepiece_model.vocab
│   ├── unigram_sentencepiece_model.model
│   └── unigram_sentencepiece_model.vocab
├── Tokenizer.py
└── myWord
    ├── phrase_segment.py
    └── word_segment.py

Dependencies

  • Python 3.7+
  • TensorFlow
  • SentencePiece
  • pycrfsuite
  • NumPy

License

This project is licensed under the MIT License. See the LICENSE file for details.

Authors

  • Ye Kyaw Thu
  • Thura Aung

Download files

Download the file for your platform.

Source Distribution

mytokenize-0.1.0.tar.gz (1.8 MB)


Built Distribution


myTokenize-0.1.0-py3-none-any.whl (1.8 MB)


File details

Details for the file mytokenize-0.1.0.tar.gz.

File metadata

  • Download URL: mytokenize-0.1.0.tar.gz
  • Upload date:
  • Size: 1.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.8.20

File hashes

Hashes for mytokenize-0.1.0.tar.gz:

  • SHA256: 5f95e9cc5f5a5cb3b0dcb93695d43dc59a1452c461ca1fc82c148a8c6fd3f2a9
  • MD5: f1df3374fc1af3da1e04d0eac708686f
  • BLAKE2b-256: 9abcc37f752c0700503929eba75d152a781f38df9b9d7319aa4b9c4040970568


File details

Details for the file myTokenize-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: myTokenize-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 1.8 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.8.20

File hashes

Hashes for myTokenize-0.1.0-py3-none-any.whl:

  • SHA256: 986ee09c641060a15ae5c8b62f944f54e483ea3281e889fb1b8a188322564917
  • MD5: 676ed84d858d5c68ad17cd411ee2d9df
  • BLAKE2b-256: e296e6dcc90afaf7a75650c4910c653dc4be08e7705b11035515d5adef9728bf

