BPE & Unigram trainers for Shredword tokenizer
Project description
ShredWord
ShredWord is a byte-pair encoding (BPE) tokenizer trainer designed for fast, efficient, and flexible text processing and vocabulary training. It offers training and text normalization functionality, and is backed by a C/C++ core with a Python interface for easy integration into machine learning workflows.
Note: the Unigram trainer is currently not functional.
Features
- Efficient Tokenization: Utilizes BPE for compressing text data and reducing the vocabulary size, making it well-suited for NLP tasks.
- Customizable Vocabulary: Allows users to define the target vocabulary size during training.
- Save and Load Models: Supports saving and loading trained tokenizers for reuse.
- Python Integration: Provides a Python interface for seamless integration and usability.
How It Works
Byte-Pair Encoding (BPE)
BPE is a subword tokenization algorithm that compresses a dataset by merging the most frequent pairs of characters or subwords into new tokens. This process continues until a predefined vocabulary size is reached.
Key steps:
- Initialize the vocabulary with all unique characters in the dataset.
- Count the frequency of character pairs.
- Merge the most frequent pair into a new token.
- Repeat until the target vocabulary size is achieved.
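The merge loop above can be sketched in pure Python. This is a minimal illustration of the algorithm only, not ShredWord's actual implementation (which is in C/C++); the function and variable names here are hypothetical:

```python
from collections import Counter

def bpe_train(corpus, target_vocab_size):
    # Start with each word as a sequence of single characters.
    words = [list(w) for w in corpus.split()]
    vocab = set(ch for w in words for ch in w)
    merges = []
    while len(vocab) < target_vocab_size:
        # Count the frequency of adjacent symbol pairs across all words.
        pairs = Counter()
        for w in words:
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        # Merge the most frequent pair into a new token.
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        vocab.add(a + b)
        for w in words:
            i = 0
            while i < len(w) - 1:
                if w[i] == a and w[i + 1] == b:
                    w[i:i + 2] = [a + b]
                else:
                    i += 1
    return vocab, merges
```

Each pass greedily picks the single most frequent pair, so the merge list doubles as an ordered record of the vocabulary's growth.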
ShredWord implements this process efficiently in C/C++, exposing training, encoding, and decoding methods through Python.
Installation
Prerequisites
- Python 3.11+
- GCC or a compatible compiler (for compiling the C/C++ code)
Steps
- Install the Python package from PyPI:
pip install shredword-trainer
Usage
Below is a simple example demonstrating how to use ShredWord for training, encoding, and decoding text.
Example
BPE Trainer
from shredword.trainer import BPETrainer
trainer = BPETrainer(target_vocab_size=500, min_pair_freq=1000)
trainer.load_corpus("test data/final.txt")
trainer.train()
trainer.save("model/merges_1k.model", "model/vocab_1k.vocab")
Unigram Trainer
from shredword.trainer import UnigramTrainer
trainer = UnigramTrainer(target_vocab_size=500, min_pair_freq=1000)
trainer.load_corpus("test data/final.txt")
trainer.train()
trainer.save("model/merges_1k.model", "model/vocab_1k.vocab")
API Overview
Core Methods
- train(text, vocab_size): Train a tokenizer on the input text to a specified vocabulary size.
- save(file_path): Save the trained tokenizer to a file.
Properties
- merges: View or set the merge rules for tokenization.
- vocab: Access the vocabulary as a dictionary of token IDs to strings.
- pattern: View or set the regular expression pattern used for token splitting.
- special_tokens: View or set special tokens used by the tokenizer.
Advanced Features
Saving and Loading
Trained tokenizers can be saved to a file and reloaded for use in future tasks. The saved model includes merge rules and any special tokens or patterns defined during training.
# Save the trained model
tokenizer.save("vocab/trained_vocab.model")
# Load the model
tokenizer.load("vocab/trained_vocab.model")
Customization
Users can define special tokens or modify the merge rules and pattern directly using the provided properties.
# Set special tokens
special_tokens = [("<PAD>", 0), ("<UNK>", 1)]
tokenizer.special_tokens = special_tokens
# Update merge rules
merges = [(101, 32, 256), (32, 116, 257)]
tokenizer.merges = merges
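The merge rules in the example above are (left_id, right_id, new_id) triples over byte-level token IDs (101 = "e", 32 = " ", 116 = "t"). A minimal sketch of how such rules could be applied to a byte sequence during encoding; the apply_merges helper is hypothetical and not part of the ShredWord API:

```python
def apply_merges(ids, merges):
    # Apply each (left, right, new_id) rule in order over the token ID sequence.
    for left, right, new_id in merges:
        out = []
        i = 0
        while i < len(ids):
            # Replace a matching adjacent pair with the merged token's ID.
            if i < len(ids) - 1 and ids[i] == left and ids[i + 1] == right:
                out.append(new_id)
                i += 2
            else:
                out.append(ids[i])
                i += 1
        ids = out
    return ids

# "he t" as UTF-8 bytes is [104, 101, 32, 116]; the first rule merges
# (101, 32) into token 256, and the second rule finds no remaining match.
encoded = apply_merges([104, 101, 32, 116], [(101, 32, 256), (32, 116, 257)])
```

Applying rules in training order matters: later merges may depend on tokens produced by earlier ones.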
a project by Shivendra
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file shredword_trainer-0.0.3.tar.gz.
File metadata
- Download URL: shredword_trainer-0.0.3.tar.gz
- Upload date:
- Size: 49.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c28ac863a6f55a0a73f52976e4536538976629edec9b07458180e271acd34c85 |
| MD5 | e8ce073f60903e87a299eebf1381bc2b |
| BLAKE2b-256 | d22210d54a74433d9f7779e950c0675b36b5a0119717b6f22a0c6ba373be9c0c |
File details
Details for the file shredword_trainer-0.0.3-cp313-cp313-win_amd64.whl.
File metadata
- Download URL: shredword_trainer-0.0.3-cp313-cp313-win_amd64.whl
- Upload date:
- Size: 62.5 kB
- Tags: CPython 3.13, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ec5de32afbbc8f0d44ab1957e47fb3bb5549bcab5b76fb8c826c02e724cbdd4d |
| MD5 | cf5ca6e77d2ba2520323b633578bac64 |
| BLAKE2b-256 | d14b60b45e3843f978ac2bd347f1ba6813066742430d992f8ac0e8dc2a2b01e1 |