
BPE & Unigram trainers for Shredword tokenizer

Project description

ShredWord

ShredWord is a byte-pair encoding (BPE) based tokenizer trainer designed for fast, efficient, and flexible text processing and vocabulary training. It offers training and text-normalization functionality and is backed by a C/C++ core with a Python interface for easy integration into machine learning workflows.

Features

  1. Efficient Tokenization: Utilizes BPE to compress text into subword tokens while keeping the vocabulary size bounded, making it well-suited for NLP tasks.
  2. Customizable Vocabulary: Allows users to define the target vocabulary size during training.
  3. Save and Load Models: Supports saving and loading trained tokenizers for reuse.
  4. Python Integration: Provides a Python interface for seamless integration and usability.

How It Works

Byte-Pair Encoding (BPE)

BPE is a subword tokenization algorithm that compresses a dataset by merging the most frequent pairs of characters or subwords into new tokens. This process continues until a predefined vocabulary size is reached.

Key steps:

  1. Initialize the vocabulary with all unique characters in the dataset.
  2. Count the frequency of character pairs.
  3. Merge the most frequent pair into a new token.
  4. Repeat until the target vocabulary size is achieved.
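The steps above can be sketched in plain Python. This is an illustrative toy trainer only, not ShredWord's optimized C/C++ implementation; the function and variable names here are invented for the example:

```python
from collections import Counter

def bpe_train(words, target_vocab_size):
    """Toy BPE trainer over a list of words, following the four steps above."""
    # 1. Initialize the vocabulary with all unique characters.
    corpus = Counter(tuple(w) for w in words)
    vocab = {ch for w in corpus for ch in w}
    merges = []
    while len(vocab) < target_vocab_size:
        # 2. Count the frequency of adjacent symbol pairs.
        pairs = Counter()
        for word, freq in corpus.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        # 3. Merge the most frequent pair into a new token.
        (a, b), _ = pairs.most_common(1)[0]
        merged = a + b
        merges.append((a, b))
        vocab.add(merged)
        new_corpus = Counter()
        for word, freq in corpus.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and word[i] == a and word[i + 1] == b:
                    out.append(merged)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_corpus[tuple(out)] += freq
        corpus = new_corpus
        # 4. Repeat until the target vocabulary size is reached.
    return merges, vocab
```

On a tiny corpus such as `["low", "low", "lower", "newest", "newest"]` with a target size of 10, this learns two merges (first "l"+"o", then "lo"+"w"), producing the subword "low".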

ShredWord implements this process efficiently in C/C++, exposing training, encoding, and decoding methods through Python.

Installation

Prerequisites

  • Python 3.11+
  • GCC or a compatible compiler (for compiling the C/C++ code)

Steps

  1. Install the Python package from PyPI.org:

    pip install shredword-trainer
    

Usage

Below is a simple example demonstrating how to use ShredWord for training, encoding, and decoding text.

Example

from shredword.trainer import BPETrainer

# configure the target vocabulary size and the minimum pair frequency for a merge
trainer = BPETrainer(target_vocab_size=500, min_pair_freq=1000)
trainer.load_corpus("test data/final.txt")  # load the training text
trainer.train()                             # run the BPE merge loop
trainer.save("model/merges_1k.model", "model/vocab_1k.vocab")  # persist merges and vocab

API Overview

Core Methods

  • train(text, vocab_size): Train a tokenizer on the input text to a specified vocabulary size.
  • save(file_path): Save the trained tokenizer to a file.

Properties

  • merges: View or set the merge rules for tokenization.
  • vocab: Access the vocabulary as a dictionary of token IDs to strings.
  • pattern: View or set the regular expression pattern used for token splitting.
  • special_tokens: View or set special tokens used by the tokenizer.

Advanced Features

Saving and Loading

Trained tokenizers can be saved to a file and reloaded for use in future tasks. The saved model includes merge rules and any special tokens or patterns defined during training.

# Save the trained model
tokenizer.save("vocab/trained_vocab.model")

# Load the model
tokenizer.load("vocab/trained_vocab.model")

Customization

Users can define special tokens or modify the merge rules and pattern directly using the provided properties.

# Set special tokens
special_tokens = [("<PAD>", 0), ("<UNK>", 1)]
tokenizer.special_tokens = special_tokens

# Update merge rules
merges = [(101, 32, 256), (32, 116, 257)]
tokenizer.merges = merges
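For illustration, here is how merge rules could be applied greedily during encoding. This is a standalone sketch, and reading each tuple as (left_id, right_id, new_id) is an assumption based on the example above, not documented ShredWord behavior:

```python
def apply_merges(ids, merges):
    """Apply merge rules in order to a list of token IDs.
    Each rule is assumed to be a (left_id, right_id, new_id) tuple."""
    for left, right, new_id in merges:
        out, i = [], 0
        while i < len(ids):
            # replace every adjacent (left, right) pair with the merged ID
            if i + 1 < len(ids) and ids[i] == left and ids[i + 1] == right:
                out.append(new_id)
                i += 2
            else:
                out.append(ids[i])
                i += 1
        ids = out
    return ids
```

With the rules from the example, the sequence [101, 32, 116] becomes [256, 116]: the first rule merges (101, 32) into 256, after which no (32, 116) pair remains for the second rule.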

Contributing

We welcome contributions! Feel free to open an issue or submit a pull request if you have ideas for improvement.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Acknowledgments

ShredWord was inspired by the need for efficient and flexible tokenization in modern NLP pipelines. Special thanks to contributors and the open-source community for their support.
