Botok – Python Tibetan Tokenizer
Description • Key Features • Installation • Basic Usage • Advanced Usage • Documentation • Development • Contributing • Acknowledgements
Description
Botok is a powerful Python library for tokenizing Tibetan text. It segments text into words with high accuracy and provides optional attributes such as lemma, part-of-speech (POS) tags, and clean forms. The library supports various text formats, custom dialects, and multiple tokenization modes, making it a versatile tool for Tibetan Natural Language Processing (NLP).
Key Features
- Word Segmentation: Accurate word segmentation with support for affixed particles
- Multiple Tokenization Modes:
  - Word tokenization
  - Chunk tokenization (groups of meaningful characters)
  - Space-based tokenization
- Rich Token Attributes:
  - Lemmatization
  - POS tagging
  - Clean form generation
- Custom Dialect Support: Use pre-configured dialects or create your own
- File Processing: Process both strings and files with automatic output generation
- Robust Handling: Manages complex cases like double tseks and spaces within words
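As a small illustration of the robust-handling point above, the sketch below feeds the tokenizer a greeting containing a double tsek. This is a minimal sketch assuming the default general dialect; the exact segmentation depends on the bundled dialect data.
from pathlib import Path
from botok import WordTokenizer
from botok.config import Config
# Default configuration, as in the Basic Usage section below
config = Config(dialect_name="general", base_path=Path.home())
wt = WordTokenizer(config=config)
# The double tsek (་་) between syllables should not break segmentation
tokens = wt.tokenize("བཀྲ་་ཤིས་བདེ་ལེགས།")
for token in tokens:
    print(token)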
Installation
Requirements
- Python 3.6 or higher
- pip package manager
Basic Installation
pip install botok
Development Installation
git clone https://github.com/OpenPecha/botok.git
cd botok
pip install -e .
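To verify the editable install, you can query the installed version from the standard library (importlib.metadata requires Python 3.8 or newer):
python -c "from importlib.metadata import version; print(version('botok'))"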
Basic Usage
Simple Word Tokenization
from botok import WordTokenizer
from botok.config import Config
from pathlib import Path
# Initialize tokenizer with default configuration
config = Config(dialect_name="general", base_path=Path.home())
wt = WordTokenizer(config=config)
# Tokenize text
text = "བཀྲ་ཤིས་བདེ་ལེགས་ཞུས་རྒྱུ་ཡིན་ སེམས་པ་སྐྱིད་པོ་འདུག།"
tokens = wt.tokenize(text, split_affixes=False)
# Print each token
for token in tokens:
    print(token)
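Continuing the example above, setting split_affixes=True separates affixed particles into their own tokens, so the same input can yield more, shorter tokens. A minimal sketch; the exact counts depend on the dialect data:
# Affixed particles become separate tokens when split_affixes=True
tokens_split = wt.tokenize(text, split_affixes=True)
print(len(tokens), len(tokens_split))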
File Processing
from botok import Text
from pathlib import Path
# Process a file
input_file = Path("input.txt")
t = Text(input_file)
t.tokenize_chunks_plaintext # Creates input_pybo.txt with tokenized output
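To sanity-check the result, read the generated file back. This assumes the output lands next to the input as input_pybo.txt, per the comment above:
from pathlib import Path
out_file = Path("input_pybo.txt")
if out_file.exists():
    # Print the first 200 characters of the tokenized output
    print(out_file.read_text(encoding="utf-8")[:200])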
Advanced Usage
Custom Dialect Configuration
from botok import WordTokenizer
from botok.config import Config
from pathlib import Path
# Configure custom dialect
config = Config(
    dialect_name="custom",
    base_path=Path.home() / "my_dialects"
)
# Initialize tokenizer with custom config
wt = WordTokenizer(config=config)
# Process text with custom settings
text = "བཀྲ་ཤིས་བདེ་ལེགས།"
tokens = wt.tokenize(
    text,
    split_affixes=True,
    pos_tagging=True,
    lemmatize=True
)
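With POS tagging and lemmatization enabled, each token carries the corresponding attributes. A sketch of how they might be inspected, assuming the attribute names text, pos, and lemma used by botok's Token objects:
# Print surface form, POS tag, and lemma for each token
for token in tokens:
    print(f"{token.text}\t{token.pos}\t{token.lemma}")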
Different Tokenization Modes
from botok import Text
text = """ལེ གས། བཀྲ་ཤིས་མཐའི་ ༆ ཤི་བཀྲ་ཤིས་"""
t = Text(text)
# 1. Word tokenization
words = t.tokenize_words_raw_text
# 2. Chunk tokenization (groups of meaningful characters)
chunks = t.tokenize_chunks_plaintext
# 3. Space-based tokenization
spaces = t.tokenize_on_spaces
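Assuming each mode returns the input re-segmented as a plain string, the three granularities can be compared side by side:
# Compare the three segmentations of the same input
for name, result in [("words", words), ("chunks", chunks), ("spaces", spaces)]:
    print(f"{name}: {result}")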
Documentation
For comprehensive documentation, visit:
- ReadTheDocs - Full API documentation
- Wiki - Guides and tutorials
- Examples - Code examples
Development
Building from Source
rm -rf dist/
python setup.py clean sdist
Publishing to PyPI
Automated Publishing with Semantic Versioning
The repository is configured with GitHub Actions to automatically handle version bumping and publishing to PyPI when changes are pushed to the master branch. The workflow uses semantic versioning based on commit messages:
Use the following commit message formats:
- fix: your message - for bug fixes (triggers a PATCH version bump)
- feat: your message - for new features (triggers a MINOR version bump)
- Add BREAKING CHANGE: description in the commit body for breaking changes (triggers a MAJOR version bump)
Examples:
# This will trigger a PATCH version bump (e.g., 0.8.12 → 0.8.13)
fix: improve test coverage to 90% and fix Python 3.12 compatibility
# This will trigger a MINOR version bump (e.g., 0.8.12 → 0.9.0)
feat: add new sentence tokenization mode for complex Tibetan sentences
# This will trigger a MAJOR version bump (e.g., 0.8.12 → 1.0.0)
feat: refactor token attributes structure
BREAKING CHANGE: Token.attributes now uses a dictionary format instead of properties, requiring changes to code that accesses token attributes directly
When you push to the master branch, the CI workflow will:
- Run all tests across multiple Python versions
- Analyze commit messages to determine the next version number
- Update version numbers in the code
- Create a new release on GitHub
- Publish the package to PyPI
Manual Publishing
For manual publishing (if needed):
twine upload dist/*
Running Tests
pytest tests/
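If you contribute features, a minimal regression test might look like the sketch below (a hypothetical tests/test_tokenize.py; it assumes the default dialect data is available under your home directory):
from pathlib import Path
from botok import WordTokenizer
from botok.config import Config
def test_basic_tokenization():
    config = Config(dialect_name="general", base_path=Path.home())
    wt = WordTokenizer(config=config)
    tokens = wt.tokenize("བཀྲ་ཤིས་བདེ་ལེགས།", split_affixes=False)
    # Tokenization should produce tokens that cover the whole input
    assert len(tokens) > 0
    assert "".join(t.text for t in tokens) == "བཀྲ་ཤིས་བདེ་ལེགས།"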
Contributing
We welcome contributions! Here's how you can help:
- Fork the repository
- Create your feature branch (git checkout -b feature/AmazingFeature)
- Commit your changes (git commit -m 'Add some AmazingFeature')
- Push to the branch (git push origin feature/AmazingFeature)
- Open a Pull Request
Please ensure your PR adheres to:
- Code style guidelines
- Test coverage requirements
- Documentation standards
Acknowledgements
botok is an open source library for Tibetan NLP. We are grateful to our sponsors and contributors:
Sponsors
- Khyentse Foundation - USD 22,000 initial funding
- Barom/Esukhia canon project - Training data curation
- BDRC - Staff contribution for data curation
Contributors
- Drupchen - Core development
- Élie Roux - Architecture and development
- Ngawang Trinley - Project management
- Mikko Kotila - Development
- Thubten Rinzin - Testing and documentation
- Tenzin - Development
- Joyce Mackzenzie - Logo design
License
Copyright (C) 2019-2025 OpenPecha. Licensed under Apache 2.0.