Filipino Tokenizer
A morphology-aware BPE tokenizer for Philippine languages.
Existing subword tokenizers (SentencePiece, HuggingFace BPE) treat Filipino text as raw character sequences. They have no knowledge of Filipino morphology, so they routinely split words at linguistically meaningless points. A word like pinakamahusay ("the best") gets fragmented into arbitrary substrings instead of its actual morphemes: pinaka- + ma- + husay.
This project fixes that. It combines a rule-based morphological segmenter with a constrained BPE algorithm that never merges across morpheme boundaries. The result is a tokenizer that produces fewer, more meaningful tokens for Filipino text.
Before and After
Consider the sentence: Kumain siya ng masarap na pagkain.
A generic BPE tokenizer might produce:
["Ku", "main", " siya", " ng", " mas", "ar", "ap", " na", " pag", "ka", "in", "."]
This tokenizer understands that kumain contains the infix -um- and root kain, and that pagkain is prefix pag- plus the same root kain:
["k", "um", "ain", " ", "siya", " ", "ng", " ", "ma", "sarap", " ", "na", " ", "pag", "kain", "."]
The root kain is preserved as a single token and shared across both words. This gives downstream models a head start on understanding Filipino word formation.
Installation
The core library has no external dependencies (no HuggingFace, no SentencePiece); it runs on the Python standard library alone.
pip install filipino-tokenizer
To install from source for development:
git clone https://github.com/JpCurada/filipino-tokenizer.git
cd filipino-tokenizer
pip install -e .[dev]
Quick Start
import os, tempfile
from filipino_tokenizer.tagalog import TagalogTokenizer
# Write a small training corpus
corpus_text = """
Kumain siya ng pagkain sa hapagkainan.
Maganda ang panahon ngayon kaya lumabas kami.
Nagluluto ang nanay ng masarap na adobo para sa pamilya.
"""
tmpdir = tempfile.mkdtemp()
corpus_path = os.path.join(tmpdir, "corpus.txt")
with open(corpus_path, "w", encoding="utf-8") as f:
    f.write(corpus_text)
# Train
tok = TagalogTokenizer()
tok.train(corpus_path, vocab_size=500)
# Encode and decode
ids = tok.encode("Kumain siya ng pagkain.")
text = tok.decode(ids)
print(text) # kumain siya ng pagkain.
# Inspect subword tokens
tokens = tok.tokenize("Kumain siya ng pagkain.")
print(tokens) # ['k', 'um', 'ain', ' ', 'siya', ' ', 'ng', ' ', 'pag', 'kain', '.']
# Save and reload
tok.save("my_tokenizer/")
tok2 = TagalogTokenizer()
tok2.load("my_tokenizer/")
How It Works
The tokenizer is a three-stage pipeline.
Stage 1: Affix Tables. Four JSON files in data/ define every known Filipino prefix, suffix, infix, and circumfix. Each entry is tagged by language (Tagalog, Cebuano, etc.), so the same data files support multiple Philippine languages. Prefixes are sorted longest-first for greedy matching.
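To illustrate the longest-first ordering, here is a minimal sketch; the entries and the exact JSON schema are hypothetical stand-ins for the real prefix_table.json:

```python
import json

# Hypothetical entries mirroring the described prefix_table.json layout;
# the real file's schema may differ.
prefix_table = json.loads("""
[
  {"affix": "pinaka", "language": "Tagalog"},
  {"affix": "pag", "language": "Tagalog"},
  {"affix": "pang", "language": "Tagalog"},
  {"affix": "nag", "language": "Cebuano"},
  {"affix": "ma", "language": "Tagalog"}
]
""")

# Filter to one language, then sort longest-first so greedy matching tries
# "pinaka" before shorter prefixes that share its first letters.
tagalog_prefixes = sorted(
    (e["affix"] for e in prefix_table if e["language"] == "Tagalog"),
    key=len,
    reverse=True,
)
print(tagalog_prefixes)  # ['pinaka', 'pang', 'pag', 'ma']
```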
Stage 2: Morphological Segmenter. The TagalogSegmenter decomposes a word into its constituent morphemes using a multi-pass algorithm:
- Check for frozen/lexicalized forms (e.g., pangalan is a word, not pang- + alan).
- Try circumfix detection (prefix + suffix pairs like ka- -han).
- Strip prefixes, longest match first, with recursion for stacked prefixes.
- Detect infixes (-um- and -in- after the first consonant).
- Strip suffixes, applying phonological rules (-an becomes -han after vowels).
- Validate every candidate root against a dictionary of 30,000+ Tagalog roots.
If no valid segmentation is found, the word is returned whole.
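A toy sketch of the multi-pass idea (greatly simplified; ROOTS and PREFIXES here are tiny stand-ins for the real tables, and the real TagalogSegmenter also handles suffixes, circumfixes, and phonological rules):

```python
ROOTS = {"kain", "husay", "sarap"}  # stand-in for the 30k-root dictionary
PREFIXES = sorted(["pinaka", "pag", "ma"], key=len, reverse=True)

def segment(word):
    if word in ROOTS:
        return [word]
    # Pass 1: strip prefixes, longest match first, recursing for stacked prefixes.
    for p in PREFIXES:
        if word.startswith(p):
            rest = segment(word[len(p):])
            if rest is not None:
                return [p] + rest
    # Pass 2: detect the -um- infix after the first consonant (k-um-ain).
    if len(word) > 3 and word[1:3] == "um":
        root = word[0] + word[3:]
        if root in ROOTS:
            return [word[0], "um", word[3:]]
    return None  # no valid segmentation; the caller returns the word whole

print(segment("pinakamahusay"))  # ['pinaka', 'ma', 'husay']
print(segment("kumain"))         # ['k', 'um', 'ain']
```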
Stage 3: Constrained BPE. The MorphAwareBPE class runs an optimized, incremental byte-pair encoding algorithm (using doubly-linked lists and max-heaps) with one critical constraint: it never merges a pair of symbols that would cross a morpheme boundary marker. This means learned subword units always stay within a single morpheme. The approach follows the Constrained BPE (CBPE) method described by Tacorda et al.
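A minimal sketch of the constraint itself, assuming a hypothetical boundary marker "|" between morphemes (the real MorphAwareBPE adds linked lists and heaps for speed; this only shows the pair-counting rule):

```python
from collections import Counter

BOUNDARY = "|"  # hypothetical morpheme-boundary marker

def count_pairs(segmented_words):
    """Count adjacent symbol pairs, skipping any pair that spans a boundary."""
    pairs = Counter()
    for symbols, freq in segmented_words.items():
        for a, b in zip(symbols, symbols[1:]):
            if BOUNDARY in (a, b):
                continue  # never merge across a morpheme boundary
            pairs[(a, b)] += freq
    return pairs

# "pag|kain": characters with a boundary marker between the two morphemes.
corpus = {("p", "a", "g", BOUNDARY, "k", "a", "i", "n"): 5}
pairs = count_pairs(corpus)
print(pairs[("p", "a")])          # 5
print(("g", BOUNDARY) in pairs)   # False: "g"+"k" can never be merged
```

Because boundary-crossing pairs are never counted, no merge rule can ever join symbols from two different morphemes.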
Evaluation
We evaluated TagalogTokenizer against standard industry tokenizers (GPT-4's cl100k_base and a SentencePiece Unigram model) on a 5,000-line evaluation split.
| Metric | Ours | GPT-4 | SPM |
|---|---|---|---|
| Total Tokens | 645 | 516 | 318 |
| Tokens per Word (Fertility) | 2.34 | 1.87 | 1.15 |
| Morpheme F1 Accuracy | 64.5% | 20.8% | 12.0% |
- Morpheme F1 Accuracy: Our tokenizer splits Filipino words at true morpheme boundaries roughly 3x as often as GPT-4 and roughly 5x as often as SentencePiece.
- Fertility: Our tokenizer produces more tokens per word (2.34). This is the expected trade-off: because we strictly prevent merges across morpheme boundaries, frequent but morphologically distinct parts (like pag and kain) are kept separate rather than memorized as a single unbroken token (pagkain). This preserves compositional structure for downstream models.
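For reference, fertility is simply total tokens divided by total whitespace-separated words. A quick illustration using the Quick Start sentence (the numbers are illustrative, not from the evaluation split):

```python
def fertility(token_lists, sentences):
    """Average number of tokens produced per whitespace-separated word."""
    total_tokens = sum(len(toks) for toks in token_lists)
    total_words = sum(len(s.split()) for s in sentences)
    return total_tokens / total_words

sentences = ["Kumain siya ng pagkain."]
token_lists = [["k", "um", "ain", " ", "siya", " ", "ng", " ", "pag", "kain", "."]]
print(round(fertility(token_lists, sentences), 2))  # 11 tokens / 4 words = 2.75
```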
Project Structure
filipino-tokenizer/
    filipino_tokenizer/
        base.py                  # BaseAffixes, BaseRoots, BaseSegmenter, BaseTokenizer
        data/
            prefix_table.json    # Prefix definitions, multi-language
            suffix_table.json    # Suffix definitions
            infix_table.json     # Infix definitions
            circumfix_table.json # Circumfix definitions
            tagalog_roots.json   # ~30k Tagalog root words
            bisaya_roots.json    # Bisaya root words
        tagalog/
            __init__.py          # Package exports
            affixes.py           # TagalogAffixes (filters for language="Tagalog")
            roots.py             # TagalogRoots (loads tagalog_roots.json)
            phonology.py         # Nasal assimilation, suffix h-insertion
            segmenter.py         # TagalogSegmenter (multi-pass morpheme decomposition)
            bpe.py               # MorphAwareBPE (constrained BPE, no cross-boundary merges)
            tokenizer.py         # TagalogTokenizer (segmenter + BPE pipeline)
    tests/
        test_affixes.py          # Affix loading and filtering tests
        test_segmenter.py        # Morphological segmentation tests
        test_tokenizer.py        # Full pipeline tests (round-trip, consistency, efficiency)
    examples/
        training_tagalog_tokenizer.py # End-to-end training example
    demo/
        demo_tagalog_tokenizer.ipynb  # Jupyter notebook demo
Running Tests
# All tests
python -m unittest discover tests -v
# Individual test files
python -m unittest tests.test_affixes -v
python -m unittest tests.test_segmenter -v
python -m unittest tests.test_tokenizer -v
Adding a New Language
The architecture is designed to support multiple Philippine languages from the same data files. To add Bisaya, Ilokano, or another language:
- Add entries to the JSON affix tables in filipino_tokenizer/data/ with the appropriate language field.
- Add a root word list (e.g., filipino_tokenizer/data/bisaya_roots.json).
- Create filipino_tokenizer/<language>/affixes.py subclassing BaseAffixes with super().__init__(language="<Language>").
- Create a roots class subclassing BaseRoots.
- Implement a segmenter subclassing BaseSegmenter with language-specific phonological rules.
- Create a tokenizer class that wires the segmenter to MorphAwareBPE.
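A sketch of the subclassing pattern described above. BaseAffixes is stubbed out here so the snippet runs standalone; the real class loads and filters the JSON tables from data/:

```python
# Stub standing in for filipino_tokenizer.base.BaseAffixes, which in the real
# package loads the JSON affix tables and filters them by language.
class BaseAffixes:
    def __init__(self, language):
        self.language = language
        self.prefixes = self._load_prefixes()

    def _load_prefixes(self):
        # The real implementation reads data/prefix_table.json; hard-coded here.
        table = [
            {"affix": "nag", "language": "Bisaya"},
            {"affix": "pinaka", "language": "Tagalog"},
        ]
        return sorted(
            (e["affix"] for e in table if e["language"] == self.language),
            key=len, reverse=True,
        )

# A new-language affixes class only needs to pass its language name upward.
class BisayaAffixes(BaseAffixes):
    def __init__(self):
        super().__init__(language="Bisaya")

print(BisayaAffixes().prefixes)  # ['nag']
```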
References
- Tacorda, A. J., Ignacio, M. J., Oco, N., & Roxas, R. E. (2017). Controlling byte pair encoding for neural machine translation. 2017 International Conference on Asian Language Processing (IALP), 168-171. The core idea behind the boundary-constrained (Controlled) BPE approach used here.
- Cruz, J. C. B., & Cheng, C. (2022). Improving Large-scale Language Models and Resources for Filipino. Proceedings of the Thirteenth Language Resources and Evaluation Conference (LREC). Authors of key Filipino NLP datasets and benchmarks, including the TLUnified corpus.
- Miranda, L. J. (2023). calamanCy: A Tagalog Natural Language Processing Toolkit. Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS). SpaCy-based NLP pipeline for Tagalog that informed the morphological analysis approach.
License
MIT License. See LICENSE for details.
File details
Details for the file filipino_tokenizer-0.2.0.tar.gz.
File metadata
- Download URL: filipino_tokenizer-0.2.0.tar.gz
- Upload date:
- Size: 2.7 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | daf5eba7189757a128aa4054aa234abb970f71c8fb58ca3ac4ca57ce51f7fb9f |
| MD5 | 00cf00ae2a4deaec9a96a08f6b3e1a51 |
| BLAKE2b-256 | e4299d77c2899654d66264ffd50c97182cefb4ae564f16255aa78d9917ffeb7f |
Provenance
The following attestation bundles were made for filipino_tokenizer-0.2.0.tar.gz:
Publisher: publish.yml on JpCurada/filipino-tokenizer

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: filipino_tokenizer-0.2.0.tar.gz
- Subject digest: daf5eba7189757a128aa4054aa234abb970f71c8fb58ca3ac4ca57ce51f7fb9f
- Sigstore transparency entry: 1390537004
- Sigstore integration time:
- Permalink: JpCurada/filipino-tokenizer@e0263d42c29c33944562851d7093a1912d900f0f
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/JpCurada
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@e0263d42c29c33944562851d7093a1912d900f0f
- Trigger Event: push
File details
Details for the file filipino_tokenizer-0.2.0-py3-none-any.whl.
File metadata
- Download URL: filipino_tokenizer-0.2.0-py3-none-any.whl
- Upload date:
- Size: 2.7 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b991402cd442d0e53c5563ced3b591e8cac63da26e98cc1dfc153fc528b2bd70 |
| MD5 | 1b93882daf8527ddfe187f3270962438 |
| BLAKE2b-256 | 46eae534f35579a092f2f0ca87db5129defdd1c96d700113730f4355b417880f |
Provenance
The following attestation bundles were made for filipino_tokenizer-0.2.0-py3-none-any.whl:
Publisher: publish.yml on JpCurada/filipino-tokenizer

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: filipino_tokenizer-0.2.0-py3-none-any.whl
- Subject digest: b991402cd442d0e53c5563ced3b591e8cac63da26e98cc1dfc153fc528b2bd70
- Sigstore transparency entry: 1390537054
- Sigstore integration time:
- Permalink: JpCurada/filipino-tokenizer@e0263d42c29c33944562851d7093a1912d900f0f
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/JpCurada
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@e0263d42c29c33944562851d7093a1912d900f0f
- Trigger Event: push