mySpellChecker: Myanmar (Burmese) Text Intelligence Library
Myanmar (Burmese) text intelligence library — 12-strategy checking pipeline, dictionary building, and AI model training, from O(1) SymSpell lookups to ONNX-powered inference.
Overview
mySpellChecker is a comprehensive text intelligence library built specifically for the Myanmar language. It covers three domains: a 12-strategy checking pipeline (from rule-based validation through grammar checking, N-gram context, confusable detection, homophone detection, to ONNX-powered AI inference), a dictionary building pipeline (corpus ingestion, segmentation, N-gram frequency, SQLite packaging), and AI model training (semantic MLM fine-tuning with ONNX export). Since Myanmar script is written as a continuous stream without spaces between words, the library uses a multi-layer validation approach — starting with fast syllable-level checks and progressively applying deeper analysis including POS tagging, 8 grammar checkers, and context-aware semantic validation.
Key Features
Note: v1.0 supports Standard Burmese (Myanmar) only. Other Myanmar-script languages (Shan, Karen, Mon, etc.) and extended Unicode ranges are planned for future releases.
Checking Pipeline
- 12-Strategy Validation Pipeline: Composable strategies from fast rule checks (sub-10ms) to AI inference, each layer building on the previous.
- Syllable-First Architecture: Validates most errors at the syllable level before assembling into words for deeper analysis.
- SymSpell Algorithm: Custom O(1) symmetric delete implementation with Myanmar-specific variant generation for fast correction suggestions.
- N-gram Context Checking: Bigram/Trigram probabilities detect real-word errors (correct spelling, wrong context).
- Homophone Detection: Bidirectional N-gram analysis catches sound-alike word errors with frequency-aware guards.
- Confusable Detection: Multi-layer valid-word confusion detection — statistical bigram, MLP classifier, and MLM semantic analysis.
- Grammar Checking: 8 specialized checkers — Aspect, Classifier, Compound, MergedWord, Negation, Particle, TenseAgreement, Register.
- POS Tagging: Pluggable backends — Rule-Based (fast), Viterbi HMM (balanced), Transformer (93% accuracy).
- Joint Segmentation: Simultaneous word segmentation and POS tagging in a single pass.
- Compound & Morpheme Handling: DP-based compound resolution, productive reduplication validation, and morpheme-level correction for OOV words.
- AI Semantic Checking (Optional): ONNX masked language model for context-aware validation.
- Named Entity Recognition: Heuristic and Transformer-based NER to reduce false positives on names and places.
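The symmetric-delete idea behind the SymSpell bullet can be sketched in a few lines. This is a generic illustration of the algorithm, not the library's implementation: the real version adds Myanmar-specific, syllable-aware variant generation, whereas this sketch deletes raw code points (which can split a Myanmar grapheme cluster).

```python
def deletes(word: str, max_dist: int = 2) -> set[str]:
    """All strings reachable from `word` by deleting up to max_dist code points."""
    out, frontier = {word}, {word}
    for _ in range(max_dist):
        frontier = {w[:i] + w[i + 1:] for w in frontier for i in range(len(w))}
        out |= frontier
    return out

def build_index(dictionary: list[str], max_dist: int = 2) -> dict[str, set[str]]:
    # Precompute delete-variants once; each lookup then costs O(1) per variant.
    index: dict[str, set[str]] = {}
    for entry in dictionary:
        for variant in deletes(entry, max_dist):
            index.setdefault(variant, set()).add(entry)
    return index

def suggest(term: str, index: dict[str, set[str]], max_dist: int = 2) -> set[str]:
    # A dictionary word matches if its delete-variants intersect the term's.
    hits: set[str] = set()
    for variant in deletes(term, max_dist):
        hits |= index.get(variant, set())
    return hits

index = build_index(["မြန်မာ", "နိုင်ငံ"])
# The transposed typo from the Quick Start resolves back to the dictionary form:
print(suggest("မြနမ်ာ", index))
```

Precomputing deletes trades memory for speed: no candidate generation happens at query time, which is what makes constant-time lookups possible.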
Dictionary Building Pipeline
- Multi-Format Corpus Ingestion: Build dictionaries from `.txt`, `.csv`, `.tsv`, `.json`, `.jsonl`, and `.parquet` files.
- Incremental Builds: Resume corpus processing without reprocessing completed files.
- Pluggable Storage: SQLite (default, disk-based) or MemoryProvider (RAM-based) with thread-safe connection pooling.
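A minimal thread-safe connection pool in the spirit of the storage bullet (an illustrative sketch of the general technique, not the library's SQLiteProvider):

```python
import queue
import sqlite3
from contextlib import contextmanager

class SQLitePool:
    """Fixed-size pool handing out sqlite3 connections across threads."""

    def __init__(self, path: str, size: int = 4):
        self._pool: queue.Queue = queue.Queue(maxsize=size)
        for _ in range(size):
            # check_same_thread=False lets a connection move between threads;
            # the pool serializes access so only one thread uses it at a time.
            self._pool.put(sqlite3.connect(path, check_same_thread=False))

    @contextmanager
    def connection(self):
        conn = self._pool.get()  # blocks if every connection is in use
        try:
            yield conn
        finally:
            self._pool.put(conn)

pool = SQLitePool(":memory:", size=2)
with pool.connection() as conn:
    row = conn.execute("SELECT 1").fetchone()
```

Note that `:memory:` databases are per-connection, so a real pool points every connection at the same on-disk file.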
AI Model Training
- Semantic Model Training: Train masked language models with word-boundary BPE, whole-word masking, and denoising objectives.
- ONNX Export & Quantization: Convert trained models to ONNX with quantization for production deployment.
Myanmar Language Support
- Text Normalization: Unified service — zero-width character removal, NFC/NFD normalization, Zawgyi conversion.
- Zawgyi Detection: Built-in detection and warning for legacy Zawgyi encoded text.
- Phonetic & Colloquial Handling: Phonetic hashing, colloquial variant detection (e.g., ကျနော် → ကျွန်တော်), configurable strictness.
- Tone Processing: Tone mark validation, disambiguation, and context-based correction.
- Bilingual Error Messages: Error reporting in English and Myanmar (Burmese).
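The normalization bullet boils down to a couple of standard-library calls. This is a sketch of the general technique only; the library's unified service additionally handles Zawgyi conversion, and real Myanmar pipelines may deliberately keep U+200B as a line-break hint rather than stripping it.

```python
import re
import unicodedata

# Zero-width space/non-joiner/joiner and BOM commonly pollute scraped Myanmar text.
ZERO_WIDTH = re.compile("[\u200B\u200C\u200D\uFEFF]")

def normalize(text: str) -> str:
    """Strip zero-width characters, then compose to Unicode NFC."""
    return unicodedata.normalize("NFC", ZERO_WIDTH.sub("", text))

print(normalize("မြန်\u200bမာ"))  # zero-width space removed
```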
Performance & Production
- Cython/C++ Extensions: 11 performance-critical paths compiled to C++ with OpenMP parallelization.
- Streaming & Batch APIs: Process large documents with streaming, batch (`check_batch`), and async (`check_async`) APIs.
- Configurable: Pre-defined profiles (production, fast, accurate, development, testing), environment/file-based config loading, and a DI container for advanced wiring.
Documentation
Full documentation is available at docs.myspellchecker.com.
Getting Started
- Introduction: Overview of the library and its architecture.
- Installation: Installation options and system requirements.
- Quick Start: Get up and running in 5 minutes.
- Configuration Guide: All configuration options and profiles.
Text Checking
- Overview: 12-strategy text checking pipeline.
- Syllable Validation: Core validation layer.
- Word Validation: Dictionary + SymSpell suggestions.
- Context Checking: N-gram probability analysis.
- Confusable Detection: Multi-layer confusable word detection.
- Homophone Detection: Sound-alike error detection.
Grammar & NER
- Grammar Checking: Syntactic validation.
- Grammar Checkers: 8 specialized checkers.
- Grammar Engine: Rule engine internals.
- Named Entity Recognition: NER with 3 implementations + gazetteer.
- Loan Word Variants: Transliteration variant handling for English, Pali/Sanskrit loan words.
Language Processing
- POS Tagging: Pluggable tagging (Rule-Based, Viterbi, Transformer).
- Morphology Analysis: Word structure analysis.
- Compound Resolution: Compound word and reduplication validation.
- Segmenters: Word segmentation engines.
AI-Powered Checking
- Semantic Checking: AI-powered MLM validation.
- Validation Strategies: 12 composable strategies.
- Training Models: Train custom semantic models.
Text Utilities
- Text Normalization: Unified normalization service.
- Text Utilities: Stemmer, Phonetic, Tone, Zawgyi.
- Text Validation: Input text validation.
Performance & Scale
- Streaming: Large document processing.
- Batch Processing: High-throughput parallel processing.
- Async API: Non-blocking spell check operations.
- Performance Tuning: Optimization strategies.
- Connection Pooling: Database connection management.
Customization
- Customization Guide: Extending and customizing behavior.
- Custom Dictionaries: Build and customize dictionaries.
- Custom Grammar Rules: Write YAML grammar rules.
- Caching: Algorithm and result caching.
- Resource Caching: Model and resource caching.
- Logging: Centralized logging configuration.
Integration & Deployment
- Integration Guide: Integrate with web apps and APIs.
- Docker: Container deployment guide.
- Zawgyi Support: Legacy encoding handling.
Dictionary Building
- Pipeline Overview: Dictionary building pipeline.
- Corpus Format: Supported input formats.
- Ingestion: Corpus ingestion details.
- Building Dictionaries: Step-by-step build guide.
- Optimization: Performance tuning for large corpora.
API & CLI Reference
- API Reference: Full API documentation.
- SpellChecker API: Main SpellChecker class reference.
- Configuration API: Configuration class reference.
- Provider Capabilities: Dictionary provider interface.
- Tokenizers: Tokenizer API reference.
- CLI Reference: Command-line interface guide.
Core Internals
- Core Overview: Core package internals.
- Syllable Validation: Syllable validator internals.
- Word Validation: Word validator internals.
- Training Internals: ML training pipeline internals.
- Algorithm Factory: Algorithm instantiation patterns.
- I/O Utilities: File I/O utilities reference.
Algorithms
- Algorithms Overview: Algorithm catalog.
- SymSpell: O(1) suggestion algorithm.
- Edit Distance: Myanmar-aware Levenshtein distance.
- Suggestion Ranking: Multi-signal ranking pipeline.
- Neural Reranker: ONNX-based MLP/GBT suggestion reranker.
- Suggestion Strategy: Strategy pattern for suggestions.
- Morpheme Suggestions: Morpheme-level and medial swap corrections.
- N-gram Context: Bigram/Trigram probability models.
- Context-Aware Checking: N-gram and syntactic rules.
- Semantic Algorithm: AI/ML inference internals.
- Grammar Rules Engine: Grammar rule processing.
- Tone Disambiguation: Tone mark resolution.
- NER Algorithm: NER implementation details.
Segmentation & Tagging
- Segmentation Overview: Segmentation algorithm catalog.
- Syllable Segmentation: Syllable-level segmentation.
- Normalization Algorithm: Text normalization internals.
- Phonetic Algorithm: Phonetic hashing and similarity.
- Viterbi POS Tagger: HMM-based POS tagging.
- POS Disambiguator: POS disambiguation logic.
- Joint Segmentation: Combined segmentation + tagging.
Architecture
- Architecture Overview: Multi-layer validation pipeline.
- System Design: Component architecture.
- Validation Pipeline: Pipeline execution flow.
- Component Diagram: Visual component map.
- Data Flow: Data flow through the system.
- Dependency Injection: Component management system.
- Extension Points: How to extend the library.
Error & Rules Reference
- Reference Overview: Technical reference index.
- Error Types: Error classification reference.
- Error Codes: Complete error code listing.
- Rules System: YAML configuration files.
Data Reference
- Constants: Myanmar Unicode constants and character sets.
- Glossary: Terms and definitions.
- Phonetic Data: Phonetic groups and similarity mappings.
Data Pipeline Internals
- Pipeline Core: Data pipeline core module.
- Database Schema: SQLite schema reference.
- Schema Management: Schema versioning and migrations.
- Providers: Data source providers.
- Processing: Text processing stages.
- POS Inference: POS tagging during build.
- Segmentation Repair: Segmentation error correction.
- Pipeline Reporter: Build progress reporting.
Help & FAQ
- FAQ: Frequently asked questions.
- Troubleshooting: Common issues and solutions.
- Comparisons: How mySpellChecker compares to other tools.
Development
- Development Guide: Development overview.
- Setup: Development environment setup.
- Contributing: Contribution guidelines.
- Naming Conventions: Code naming standards.
- Testing: Test suite and coverage.
- Benchmarks: Benchmark suite and scoring methodology.
- Cython Dev Guide: Working with Cython extensions.
- Cython Reference: Cython patterns and optimization.
- CLI Formatting: CLI output formatting internals.
Quick Start
1. Installation
Prerequisites:
- Python 3.10+
- C++ Compiler (GCC/Clang/MSVC) for building Cython extensions.
Standard (Recommended):

```shell
pip install myspellchecker
```

With Transformer POS Tagging (Optional):

```shell
# Enables transformer-based POS tagging for 93% accuracy
pip install "myspellchecker[transformers]"
```

Full (with all features):

```shell
pip install "myspellchecker[ai,build,train,transformers]"
```
2. Build Dictionary
The library requires a dictionary database. You can build a sample one or use your own corpus.
```shell
# Install build dependencies (pyarrow, duckdb, etc.)
pip install "myspellchecker[build]"

# Build a sample database for testing
myspellchecker build --sample

# Build from your own text corpus
myspellchecker build --input corpus.txt --output mySpellChecker.db
```
3. Usage
Python:
```python
from myspellchecker.core import SpellCheckerBuilder, ConfigPresets, ValidationLevel

# 1. Initialize with Builder (Recommended)
checker = (
    SpellCheckerBuilder()
    .with_config(ConfigPresets.DEFAULT)
    .with_phonetic(True)
    .build()
)

# 2. Simple Syllable Check (Fastest)
text = "မြနမ်ာနိုင်ငံ"
result = checker.check(text)
print(f"Corrected: {result.corrected_text}")
# Output: မြန်မာနိုင်ငံ

# 3. Context-Aware Check (Slower, more accurate)
# Detects that 'နီ' (red) is wrong in this context and suggests 'နေ' (progressive marker)
text = "မင်းဘာလုပ်နီလဲ"
result = checker.check(text, level=ValidationLevel.WORD)
print(f"Corrected: {result.corrected_text}")
# Output: မင်းဘာလုပ်နေလဲ
```
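The context-aware check above rests on N-gram probabilities: a word can be correctly spelled in isolation yet improbable after its neighbor. A toy Laplace-smoothed bigram score shows the idea (the counts here are invented for illustration, not taken from the library's model):

```python
from collections import Counter

# Hypothetical counts from a segmented corpus.
bigrams = Counter({("လုပ်", "နေ"): 950, ("လုပ်", "နီ"): 3})
unigrams = Counter({"လုပ်": 1000})

def cond_prob(prev: str, word: str, alpha: float = 1.0, vocab: int = 10_000) -> float:
    """Laplace-smoothed conditional probability P(word | prev)."""
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)

# 'နေ' (progressive marker) is far more likely after 'လုပ်' (do) than 'နီ' (red):
assert cond_prob("လုပ်", "နေ") > cond_prob("လုပ်", "နီ")
```

A real-word-error detector flags a token when a valid alternative scores much higher in the same slot, which is exactly what the library's WORD-level check does at scale.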
CLI:
See the CLI Reference for full details.
```shell
# Check a string
echo "မင်္ဂလာပါ" | myspellchecker

# Check a file with rich output
myspellchecker check input.txt --format rich

# Segment text with POS tags
echo "မြန်မာနိုင်ငံ" | myspellchecker segment --tag
```
4. Configuration
```python
from myspellchecker import SpellChecker
from myspellchecker.core.config import SpellCheckerConfig
from myspellchecker.providers.sqlite import SQLiteProvider

# Configure with custom settings
config = SpellCheckerConfig(
    max_edit_distance=2,
    max_suggestions=5,
    use_context_checker=True,
    use_phonetic=True,
    use_ner=True,
)

checker = SpellChecker(
    config=config,
    provider=SQLiteProvider(database_path="mySpellChecker.db"),
)
```
See the Configuration Guide for all options.
5. Logging
Configure logging globally at the start of your application:
```python
from myspellchecker.utils.logging_utils import configure_logging

# Enable verbose debug logging
configure_logging(level="DEBUG")

# Or use structured JSON logging for production
configure_logging(level="INFO", json_output=True)
```
See the Logging Guide for details.
Advanced Features
Grammar Checking
Eight specialized grammar checkers for Myanmar:
```python
from myspellchecker.grammar.checkers.register import RegisterChecker

checker = RegisterChecker()
errors = checker.validate_sequence(["သူ", "သည်", "စာအုပ်", "ဖတ်", "တယ်"])
# Detects mixed register (formal "သည်" + colloquial "တယ်")
```
See the Grammar Checkers Guide for details.
Named Entity Recognition
Reduce false positives by identifying names and places:
```python
from myspellchecker.core.config import SpellCheckerConfig

config = SpellCheckerConfig(use_ner=True)
```
See the NER Guide for details.
POS Tagging & Joint Segmentation
mySpellChecker supports advanced linguistic analysis:
- Pluggable POS Tagging: Rule-Based (fastest), Viterbi (balanced), or Transformer (most accurate).
- Joint Segmentation: Combine word breaking and tagging in a single pass.
```python
from myspellchecker import SpellChecker
from myspellchecker.core.config import SpellCheckerConfig, JointConfig

config = SpellCheckerConfig(
    joint=JointConfig(enabled=True)
)
checker = SpellChecker(config=config)
words, tags = checker.segment_and_tag("မြန်မာနိုင်ငံ")
```
See the POS Tagging Guide for details.
Validation Strategies
Composable validation pipeline with 12 strategies:
| Strategy | Priority | Purpose |
|---|---|---|
| ToneValidation | 10 | Tone mark disambiguation |
| Orthography | 15 | Medial order and compatibility |
| SyntacticRule | 20 | Grammar rule checking |
| StatisticalConfusable | 24 | Bigram-based confusable detection |
| BrokenCompound | 25 | Broken compound word detection |
| POSSequence | 30 | POS sequence validation |
| Question | 40 | Question structure |
| Homophone | 45 | Sound-alike detection |
| ConfusableCompoundClassifier | 47 | MLP-based confusable/compound detection |
| ConfusableSemantic | 48 | MLM-enhanced confusable detection |
| NgramContext | 50 | N-gram probability |
| Semantic | 70 | AI-powered validation (ONNX) |
See the Validation Strategies Guide for details.
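The Priority column in the table above determines execution order: cheap rule checks run before expensive AI inference. The pattern can be sketched as a sorted pipeline of composable strategies (a generic illustration of the design, not the library's internal interface):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Strategy:
    name: str
    priority: int                      # lower runs earlier (cheap checks first)
    run: Callable[[str], list[str]]    # returns issue descriptions for the text

def run_pipeline(text: str, strategies: list[Strategy]) -> list[str]:
    issues: list[str] = []
    for s in sorted(strategies, key=lambda s: s.priority):
        issues.extend(s.run(text))
    return issues

# Registration order does not matter; priority decides when each strategy fires.
pipeline = [
    Strategy("Semantic", 70, lambda t: []),        # expensive ONNX check, last
    Strategy("ToneValidation", 10, lambda t: []),  # fast rule check, first
]
```

Keeping strategies independent and priority-ordered is what lets profiles enable or drop individual layers without touching the rest of the pipeline.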
Benchmark Results
Tested on a 1,138-sentence benchmark suite (444 clean, 694 with errors, 564 error spans) covering 3 difficulty tiers and 6 domains. The dictionary database and semantic model are not bundled with the library — users build or provide their own.
Test environment:
- Dictionary: Production SQLite database (565 MB, 601K words, 2.2M bigrams, enrichment tables)
- Semantic model: Custom RoBERTa MLM (6L/768H, ONNX quantized, 71 MB)
- Hardware: Apple Silicon, Python 3.14
- Benchmark: `benchmarks/myspellchecker_benchmark.yaml` (1,138 sentences)
With Semantic Model
| Metric | Value |
|---|---|
| F1 Score | 98.3% |
| Precision | 97.1% |
| Recall | 99.6% |
| False Positives | 14 (0% on clean sentences) |
| False Negatives | 2 |
| Top-1 Suggestion Accuracy | 81.2% |
| MRR | 0.8395 |
| Mean Latency | 35.2 ms/sentence |
| P50 Latency | 32.1 ms |
Without Semantic Model
| Metric | Value |
|---|---|
| F1 Score | 96.2% |
| Precision | 97.8% |
| Recall | 94.7% |
| False Positives | 10 (0% on clean sentences) |
| False Negatives | 25 |
| Top-1 Suggestion Accuracy | 85.2% |
| MRR | 0.8731 |
| Mean Latency | ~12 ms/sentence |
| P50 Latency | ~7 ms |
The semantic model adds ~23ms mean latency but boosts recall from 94.7% to 99.6% by catching 23 additional context-dependent errors that rule-based methods miss. Both modes maintain sub-35ms P50 latency suitable for interactive use.
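For reference, the headline metrics combine in the standard way (generic formulas with illustrative counts, not a reconstruction of the benchmark's exact span-level confusion matrix):

```python
def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall, and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative: 562 detected errors, 14 false alarms, 2 misses
p, r, f = prf(562, 14, 2)
```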
Development
Setup
```shell
git clone https://github.com/thettwe/myspellchecker.git
cd myspellchecker
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
```
Testing
The test suite has 4,658 tests across 213 files with 74% code coverage, organized into unit, integration, e2e, and stress tiers with auto-applied pytest markers.
```shell
# Run default test suite (~5 min, skips slow tests)
pytest tests/

# Run by category
pytest tests/ -m integration   # 307 integration tests
pytest tests/ -m e2e           # 10 end-to-end CLI tests
pytest tests/ -m slow          # 39 slow tests (property-based, stress, DB builds)

# Run with coverage
pytest tests/ --cov=src/myspellchecker --cov-fail-under=75

# Formatting and linting
ruff format .
ruff check .
mypy src/myspellchecker
```
See the Development Guide for contributing guidelines and the Testing Guide for test suite details.
Acknowledgments
mySpellChecker integrates tools and research from the Myanmar NLP community:
Models & Resources
| Resource | Author | Description | Link |
|---|---|---|---|
| Myanmar POS Model | Chuu Htet Naing | XLM-RoBERTa-based POS tagger (93.37% accuracy) | HuggingFace |
| Myanmar NER Model | Chuu Htet Naing | Transformer-based named entity recognition | HuggingFace |
| Myanmar Text Segmentation Model | Chuu Htet Naing | Transformer-based word segmenter | HuggingFace |
| myWord Segmentation | Ye Kyaw Thu | Viterbi-based Myanmar word segmentation | GitHub |
| myPOS | Ye Kyaw Thu | POS corpus used for CRF training | GitHub |
| myNER | Ye Kyaw Thu et al. | NER corpus with 7-tag annotation scheme, joint POS training | arXiv |
| myG2P | Ye Kyaw Thu | Myanmar grapheme-to-phoneme conversion dictionary | GitHub |
| CRF Word Segmenter | Ye Kyaw Thu | CRF-based syllable-to-word segmentation model | GitHub |
| myanmartools | Google | Zawgyi detection and conversion | GitHub |
Key Dependencies
| Library | Purpose | License |
|---|---|---|
| pycrfsuite | CRF model inference | MIT |
| transformers | Transformer model inference | Apache 2.0 |
Algorithm References
| Algorithm | Author | Description | Link |
|---|---|---|---|
| SymSpell | Wolf Garbe | Symmetric delete spelling correction algorithm. mySpellChecker includes a custom implementation with Myanmar-specific variant generation. | GitHub |
| SymSpell4Burmese | Hlaing Myat Nwe et al. | Foundational research on adapting SymSpell for Burmese | IEEE |
Citations
If you use mySpellChecker in your research, please cite the relevant works:
```bibtex
@misc{chuuhtetnaing-myanmar-pos,
  author = {Chuu Htet Naing},
  title = {Myanmar POS Model},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/chuuhtetnaing/myanmar-pos-model}
}

@misc{yekyawthu-myword,
  author = {Ye Kyaw Thu},
  title = {myWord: Word Segmentation Tool for Burmese},
  year = {2017},
  publisher = {GitHub},
  url = {https://github.com/ye-kyaw-thu/myWord}
}

@misc{garbe-symspell,
  author = {Wolf Garbe},
  title = {SymSpell: Symmetric Delete Spelling Correction Algorithm},
  year = {2012},
  publisher = {GitHub},
  url = {https://github.com/wolfgarbe/SymSpell}
}

@inproceedings{symspell4burmese,
  title = {SymSpell4Burmese: Symmetric Delete Spelling Correction Algorithm for Burmese},
  author = {Hlaing Myat Nwe and others},
  year = {2021},
  booktitle = {IEEE Conference},
  url = {https://ieeexplore.ieee.org/document/9678171/}
}

@misc{yekyawthu-mypos,
  author = {Ye Kyaw Thu},
  title = {myPOS: POS Corpus for Myanmar Language},
  publisher = {GitHub},
  url = {https://github.com/ye-kyaw-thu/myPOS}
}

@misc{chuuhtetnaing-myanmar-segmentation,
  author = {Chuu Htet Naing},
  title = {Myanmar Text Segmentation Model},
  publisher = {Hugging Face},
  url = {https://huggingface.co/chuuhtetnaing/myanmar-text-segmentation-model}
}

@misc{chuuhtetnaing-myanmar-ner,
  author = {Chuu Htet Naing},
  title = {Myanmar NER Model},
  publisher = {Hugging Face},
  url = {https://huggingface.co/chuuhtetnaing/myanmar-ner-model}
}

@inproceedings{myner-2025,
  title = {myNER: Contextualized Burmese Named Entity Recognition with Bidirectional LSTM and fastText Embeddings via Joint Training with POS Tagging},
  author = {Kaung Lwin Thant and Kwankamol Nongpong and Ye Kyaw Thu and Thura Aung and Khaing Hsu Wai and Thazin Myint Oo},
  year = {2025},
  booktitle = {4th International Conference on Cybernetics and Innovations (ICCI 2025)},
  note = {Best Presentation Award},
  url = {https://arxiv.org/abs/2504.04038}
}

@misc{yekyawthu-myg2p,
  author = {Ye Kyaw Thu},
  title = {myG2P: Myanmar Grapheme to Phoneme Conversion Dictionary},
  publisher = {GitHub},
  url = {https://github.com/ye-kyaw-thu/myG2P}
}
```
Thanks to these researchers and developers for making their work publicly available, enabling high-quality Myanmar language processing.
License
This project is licensed under the MIT License.